nexedi / linux / Commits

Commit d05b695c authored Jan 31, 2018 by Jiri Kosina

    Merge branch 'for-4.16/remove-immediate' into for-linus

    Pull 'immediate' feature removal from Miroslav Benes.

Parents: 8869016d, d0807da7

Showing 8 changed files with 33 additions and 181 deletions (+33 -181):

    Documentation/livepatch/livepatch.txt          +27  -62
    include/linux/livepatch.h                       +0   -4
    kernel/livepatch/core.c                         +1  -11
    kernel/livepatch/transition.c                   +5  -44
    samples/livepatch/livepatch-callbacks-demo.c    +0  -15
    samples/livepatch/livepatch-sample.c            +0  -15
    samples/livepatch/livepatch-shadow-fix1.c       +0  -15
    samples/livepatch/livepatch-shadow-fix2.c       +0  -15
Documentation/livepatch/livepatch.txt

@@ -72,8 +72,7 @@ example, they add a NULL pointer or a boundary check, fix a race by adding
 a missing memory barrier, or add some locking around a critical section.
 Most of these changes are self contained and the function presents itself
 the same way to the rest of the system. In this case, the functions might
-be updated independently one by one. (This can be done by setting the
-'immediate' flag in the klp_patch struct.)
+be updated independently one by one.
 
 But there are more complex fixes. For example, a patch might change
 ordering of locking in multiple functions at the same time. Or a patch

@@ -125,12 +124,6 @@ safe to patch tasks:
    b) Patching CPU-bound user tasks. If the task is highly CPU-bound
       then it will get patched the next time it gets interrupted by an
       IRQ.
-   c) In the future it could be useful for applying patches for
-      architectures which don't yet have HAVE_RELIABLE_STACKTRACE. In
-      this case you would have to signal most of the tasks on the
-      system. However this isn't supported yet because there's
-      currently no way to patch kthreads without
-      HAVE_RELIABLE_STACKTRACE.
 
 3. For idle "swapper" tasks, since they don't ever exit the kernel, they
    instead have a klp_update_patch_state() call in the idle loop which

@@ -138,27 +131,16 @@ safe to patch tasks:
    (Note there's not yet such an approach for kthreads.)
 
-All the above approaches may be skipped by setting the 'immediate' flag
-in the 'klp_patch' struct, which will disable per-task consistency and
-patch all tasks immediately. This can be useful if the patch doesn't
-change any function or data semantics. Note that, even with this flag
-set, it's possible that some tasks may still be running with an old
-version of the function, until that function returns.
-
-There's also an 'immediate' flag in the 'klp_func' struct which allows
-you to specify that certain functions in the patch can be applied
-without per-task consistency. This might be useful if you want to patch
-a common function like schedule(), and the function change doesn't need
-consistency but the rest of the patch does.
-
-For architectures which don't have HAVE_RELIABLE_STACKTRACE, the user
-must set patch->immediate which causes all tasks to be patched
-immediately. This option should be used with care, only when the patch
-doesn't change any function or data semantics.
-
-In the future, architectures which don't have HAVE_RELIABLE_STACKTRACE
-may be allowed to use per-task consistency if we can come up with
-another way to patch kthreads.
+Architectures which don't have HAVE_RELIABLE_STACKTRACE solely rely on
+the second approach. It's highly likely that some tasks may still be
+running with an old version of the function, until that function
+returns. In this case you would have to signal the tasks. This
+especially applies to kthreads. They may not be woken up and would need
+to be forced. See below for more information.
+
+Unless we can come up with another way to patch kthreads, architectures
+without HAVE_RELIABLE_STACKTRACE are not considered fully supported by
+the kernel livepatching.
 
 The /sys/kernel/livepatch/<patch>/transition file shows whether a patch
 is in transition. Only a single patch (the topmost patch on the stack)

@@ -197,6 +179,11 @@ modules is permanently disabled when the force feature is used. It cannot be
 guaranteed there is no task sleeping in such module. It implies unbounded
 reference count if a patch module is disabled and enabled in a loop.
 
+Moreover, the usage of force may also affect future applications of live
+patches and cause even more harm to the system. Administrator should first
+consider to simply cancel a transition (see above). If force is used, reboot
+should be planned and no more live patches applied.
+
 3.1 Adding consistency model support to new architectures
 ---------------------------------------------------------

@@ -234,13 +221,6 @@ few options:
       a good backup option for those architectures which don't have
       reliable stack traces yet.
 
-In the meantime, patches for such architectures can bypass the
-consistency model by setting klp_patch.immediate to true. This option
-is perfectly fine for patches which don't change the semantics of the
-patched functions. In practice, this is usable for ~90% of security
-fixes. Use of this option also means the patch can't be unloaded after
-it has been disabled.
-
 4. Livepatch module
 ===================

@@ -296,9 +276,6 @@ into three levels:
     only for a particular object ( vmlinux or a kernel module ). Note that
     kallsyms allows for searching symbols according to the object name.
 
-    There's also an 'immediate' flag which, when set, patches the
-    function immediately, bypassing the consistency model safety checks.
-
   + struct klp_object defines an array of patched functions (struct
     klp_func) in the same object. Where the object is either vmlinux
     (NULL) or a module name.

@@ -317,9 +294,6 @@ into three levels:
     symbols are found. The only exception are symbols from objects
     (kernel modules) that have not been loaded yet.
 
-    Setting the 'immediate' flag applies the patch to all tasks
-    immediately, bypassing the consistency model safety checks.
-
     For more details on how the patch is applied on a per-task basis,
     see the "Consistency model" section.

@@ -334,14 +308,12 @@ section "Livepatch life-cycle" below for more details about these
 two operations.
 
 Module removal is only safe when there are no users of the underlying
-functions. The immediate consistency model is not able to detect this. The
-code just redirects the functions at the very beginning and it does not
-check if the functions are in use. In other words, it knows when the
-functions get called but it does not know when the functions return.
-Therefore it cannot be decided when the livepatch module can be safely
-removed. This is solved by a hybrid consistency model. When the system is
-transitioned to a new patch state (patched/unpatched) it is guaranteed that
-no task sleeps or runs in the old code.
+functions. This is the reason why the force feature permanently disables
+the removal. The forced tasks entered the functions but we cannot say
+that they returned back. Therefore it cannot be decided when the
+livepatch module can be safely removed. When the system is successfully
+transitioned to a new patch state (patched/unpatched) without being
+forced it is guaranteed that no task sleeps or runs in the old code.
 
 5. Livepatch life-cycle

@@ -355,19 +327,12 @@ First, the patch is applied only when all patched symbols for already
 loaded objects are found. The error handling is much easier if this
 check is done before particular functions get redirected.
 
-Second, the immediate consistency model does not guarantee that anyone is not
-sleeping in the new code after the patch is reverted. This means that the new
-code needs to stay around "forever". If the code is there, one could apply it
-again. Therefore it makes sense to separate the operations that might be done
-once and those that need to be repeated when the patch is enabled (applied)
-again.
-
-Third, it might take some time until the entire system is migrated
-when a more complex consistency model is used. The patch revert might
-block the livepatch module removal for too long. Therefore it is useful
-to revert the patch using a separate operation that might be called
-explicitly. But it does not make sense to remove all information
-until the livepatch module is really removed.
+Second, it might take some time until the entire system is migrated with
+the hybrid consistency model being used. The patch revert might block
+the livepatch module removal for too long. Therefore it is useful to
+revert the patch using a separate operation that might be called
+explicitly. But it does not make sense to remove all information until
+the livepatch module is really removed.
 
 5.1. Registration
include/linux/livepatch.h

@@ -40,7 +40,6 @@
  * @new_func:	pointer to the patched function code
  * @old_sympos: a hint indicating which symbol position the old function
  *		can be found (optional)
- * @immediate:  patch the func immediately, bypassing safety mechanisms
  * @old_addr:	the address of the function being patched
  * @kobj:	kobject for sysfs resources
  * @stack_node:	list node for klp_ops func_stack list

@@ -76,7 +75,6 @@ struct klp_func {
	 * in kallsyms for the given object is used.
	 */
	unsigned long old_sympos;
-	bool immediate;
 
	/* internal */
	unsigned long old_addr;

@@ -137,7 +135,6 @@ struct klp_object {
  * struct klp_patch - patch structure for live patching
  * @mod:	reference to the live patch module
  * @objs:	object entries for kernel objects to be patched
- * @immediate:  patch all funcs immediately, bypassing safety mechanisms
  * @list:	list node for global list of registered patches
  * @kobj:	kobject for sysfs resources
  * @enabled:	the patch is enabled (but operation may be incomplete)

@@ -147,7 +144,6 @@ struct klp_patch {
	/* external */
	struct module *mod;
	struct klp_object *objs;
-	bool immediate;
 
	/* internal */
	struct list_head list;
kernel/livepatch/core.c

@@ -366,11 +366,6 @@ static int __klp_enable_patch(struct klp_patch *patch)
	/*
	 * A reference is taken on the patch module to prevent it from being
	 * unloaded.
-	 *
-	 * Note: For immediate (no consistency model) patches we don't allow
-	 * patch modules to unload since there is no safe/sane method to
-	 * determine if a thread is still running in the patched code contained
-	 * in the patch module once the ftrace registration is successful.
	 */
	if (!try_module_get(patch->mod))
		return -ENODEV;

@@ -894,12 +889,7 @@ int klp_register_patch(struct klp_patch *patch)
	if (!klp_initialized())
		return -ENODEV;
 
-	/*
-	 * Architectures without reliable stack traces have to set
-	 * patch->immediate because there's currently no way to patch kthreads
-	 * with the consistency model.
-	 */
-	if (!klp_have_reliable_stack() && !patch->immediate) {
+	if (!klp_have_reliable_stack()) {
		pr_err("This architecture doesn't have support for the livepatch consistency model.\n");
		return -ENOSYS;
	}
kernel/livepatch/transition.c

@@ -82,7 +82,6 @@ static void klp_complete_transition(void)
	struct klp_func *func;
	struct task_struct *g, *task;
	unsigned int cpu;
-	bool immediate_func = false;
 
	pr_debug("'%s': completing %s transition\n",
		 klp_transition_patch->mod->name,

@@ -104,16 +103,9 @@ static void klp_complete_transition(void)
		klp_synchronize_transition();
	}
 
-	if (klp_transition_patch->immediate)
-		goto done;
-
-	klp_for_each_object(klp_transition_patch, obj) {
-		klp_for_each_func(obj, func) {
+	klp_for_each_object(klp_transition_patch, obj)
+		klp_for_each_func(obj, func)
			func->transition = false;
-			if (func->immediate)
-				immediate_func = true;
-		}
-	}
 
	/* Prevent klp_ftrace_handler() from seeing KLP_UNDEFINED state */
	if (klp_target_state == KLP_PATCHED)

@@ -132,7 +124,6 @@ static void klp_complete_transition(void)
			task->patch_state = KLP_UNDEFINED;
	}
 
-done:
	klp_for_each_object(klp_transition_patch, obj) {
		if (!klp_is_object_loaded(obj))
			continue;

@@ -146,16 +137,11 @@ static void klp_complete_transition(void)
		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
	/*
-	 * See complementary comment in __klp_enable_patch() for why we
-	 * keep the module reference for immediate patches.
-	 *
-	 * klp_forced or immediate_func set implies unbounded increase of
-	 * module's ref count if the module is disabled/enabled in a loop.
+	 * klp_forced set implies unbounded increase of module's ref count if
+	 * the module is disabled/enabled in a loop.
	 */
-	if (!klp_forced && !klp_transition_patch->immediate &&
-	    !immediate_func && klp_target_state == KLP_UNPATCHED) {
+	if (!klp_forced && klp_target_state == KLP_UNPATCHED)
		module_put(klp_transition_patch->mod);
-	}
 
	klp_target_state = KLP_UNDEFINED;
	klp_transition_patch = NULL;

@@ -223,9 +209,6 @@ static int klp_check_stack_func(struct klp_func *func,
	struct klp_ops *ops;
	int i;
 
-	if (func->immediate)
-		return 0;
-
	for (i = 0; i < trace->nr_entries; i++) {
		address = trace->entries[i];

@@ -387,13 +370,6 @@ void klp_try_complete_transition(void)
	WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
 
-	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (klp_transition_patch->immediate)
-		goto success;
-
	/*
	 * Try to switch the tasks to the target patch state by walking their
	 * stacks and looking for any to-be-patched or to-be-unpatched

@@ -437,7 +413,6 @@ void klp_try_complete_transition(void)
		return;
	}
 
-success:
	/* we're done, now cleanup the data structures */
	klp_complete_transition();
 }

@@ -457,13 +432,6 @@ void klp_start_transition(void)
		  klp_transition_patch->mod->name,
		  klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (klp_transition_patch->immediate)
-		return;
-
	/*
	 * Mark all normal tasks as needing a patch state update. They'll
	 * switch either in klp_try_complete_transition() or as they exit the

@@ -513,13 +481,6 @@ void klp_init_transition(struct klp_patch *patch, int state)
	pr_debug("'%s': initializing %s transition\n", patch->mod->name,
		 klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
 
-	/*
-	 * If the patch can be applied or reverted immediately, skip the
-	 * per-task transitions.
-	 */
-	if (patch->immediate)
-		return;
-
	/*
	 * Initialize all tasks to the initial patch state to prepare them for
	 * switching to the target state.
samples/livepatch/livepatch-callbacks-demo.c

@@ -197,21 +197,6 @@ static int livepatch_callbacks_demo_init(void)
 {
	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
samples/livepatch/livepatch-sample.c

@@ -71,21 +71,6 @@ static int livepatch_init(void)
 {
	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
samples/livepatch/livepatch-shadow-fix1.c

@@ -133,21 +133,6 @@ static int livepatch_shadow_fix1_init(void)
 {
	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;
samples/livepatch/livepatch-shadow-fix2.c

@@ -128,21 +128,6 @@ static int livepatch_shadow_fix2_init(void)
 {
	int ret;
 
-	if (!klp_have_reliable_stack() && !patch.immediate) {
-		/*
-		 * WARNING: Be very careful when using 'patch.immediate' in
-		 * your patches. It's ok to use it for simple patches like
-		 * this, but for more complex patches which change function
-		 * semantics, locking semantics, or data structures, it may not
-		 * be safe. Use of this option will also prevent removal of
-		 * the patch.
-		 *
-		 * See Documentation/livepatch/livepatch.txt for more details.
-		 */
-		patch.immediate = true;
-		pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
-	}
-
	ret = klp_register_patch(&patch);
	if (ret)
		return ret;