Commit 0ef76878 authored by Linus Torvalds

Merge branch 'for-linus' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching

Pull livepatching updates from Jiri Kosina:

 - shadow variables support, allowing livepatches to associate new
   "shadow" fields to existing data structures, from Joe Lawrence

 - pre/post patch callbacks API, allowing livepatch writers to register
   callbacks to be called before and after patch application, from Joe
   Lawrence

* 'for-linus' of ssh://gitolite.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching:
  livepatch: __klp_disable_patch() should never be called for disabled patches
  livepatch: Correctly call klp_post_unpatch_callback() in error paths
  livepatch: add transition notices
  livepatch: move transition "complete" notice into klp_complete_transition()
  livepatch: add (un)patch callbacks
  livepatch: Small shadow variable documentation fixes
  livepatch: __klp_shadow_get_or_alloc() is local to shadow.c
  livepatch: introduce shadow variable API
parents 9682b3de fc41efc1
================
Shadow Variables
================

Shadow variables are a simple way for livepatch modules to associate
additional "shadow" data with existing data structures. Shadow data is
allocated separately from parent data structures, which are left
unmodified. The shadow variable API described in this document is used
to allocate/add and remove/free shadow variables to/from their parents.

The implementation introduces a global, in-kernel hashtable that
associates pointers to parent objects and a numeric identifier of the
shadow data. The numeric identifier is a simple enumeration that may be
used to describe shadow variable version, class or type, etc. More
specifically, the parent pointer serves as the hashtable key while the
numeric id subsequently filters hashtable queries. Multiple shadow
variables may attach to the same parent object, but their numeric
identifier distinguishes between them.
1. Brief API summary
====================

(See the full API usage docbook notes in livepatch/shadow.c.)

A hashtable references all shadow variables. These references are
stored and retrieved through a <obj, id> pair.

* The klp_shadow variable data structure encapsulates both tracking
  meta-data and shadow-data:
  - meta-data
    - obj - pointer to parent object
    - id - data identifier
  - data[] - storage for shadow data

It is important to note that the klp_shadow_alloc() and
klp_shadow_get_or_alloc() calls, described below, store a *copy* of the
data that the functions are provided. Callers should provide whatever
mutual exclusion is required of the shadow data. (A short usage sketch
follows this summary.)

* klp_shadow_get() - retrieve a shadow variable data pointer
  - search hashtable for <obj, id> pair

* klp_shadow_alloc() - allocate and add a new shadow variable
  - search hashtable for <obj, id> pair
  - if exists
    - WARN and return NULL
  - if <obj, id> doesn't already exist
    - allocate a new shadow variable
    - copy data into the new shadow variable
    - add <obj, id> to the global hashtable

* klp_shadow_get_or_alloc() - get existing or alloc a new shadow variable
  - search hashtable for <obj, id> pair
  - if exists
    - return existing shadow variable
  - if <obj, id> doesn't already exist
    - allocate a new shadow variable
    - copy data into the new shadow variable
    - add <obj, id> pair to the global hashtable

* klp_shadow_free() - detach and free a <obj, id> shadow variable
  - find and remove a <obj, id> reference from global hashtable
    - if found, free shadow variable

* klp_shadow_free_all() - detach and free all <*, id> shadow variables
  - find and remove any <*, id> references from global hashtable
    - if found, free shadow variable
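
The following is a minimal, illustrative sketch of the full round trip,
highlighting that the alloc calls store a *copy* of the caller's data.
It is not taken from the kernel tree; the parent type struct foo, the
SV_EXAMPLE identifier and the function name are hypothetical:

#define SV_EXAMPLE 1

struct foo { int bar; };

static int foo_attach_shadow(struct foo *f, gfp_t gfp)
{
	int counter = 0;
	int *shadow;

	/* A *copy* of counter is stored; the local may safely go away */
	shadow = klp_shadow_alloc(f, SV_EXAMPLE, &counter,
				  sizeof(counter), gfp);
	if (!shadow)
		return -ENOMEM;

	/* Subsequent lookups by <f, SV_EXAMPLE> return that same copy */
	shadow = klp_shadow_get(f, SV_EXAMPLE);
	if (shadow)
		(*shadow)++;

	/* Detach and free once the shadow data is no longer needed */
	klp_shadow_free(f, SV_EXAMPLE);
	return 0;
}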
2. Use cases
============

(See the example shadow variable livepatch modules in samples/livepatch/
for full working demonstrations.)

For the following use-case examples, consider commit 1d147bfa6429
("mac80211: fix AP powersave TX vs. wakeup race"), which added a
spinlock to net/mac80211/sta_info.h :: struct sta_info. Each use-case
example can be considered a stand-alone livepatch implementation of this
fix.

Matching parent's lifecycle
---------------------------

If parent data structures are frequently created and destroyed, it may
be easiest to align their shadow variables' lifetimes to the same
allocation and release functions. In this case, the parent data
structure is typically allocated, initialized, then registered in some
manner. Shadow variable allocation and setup can then be considered
part of the parent's initialization and should be completed before the
parent "goes live" (ie, before any shadow variable get-API requests are
made for this <obj, id> pair).
For commit 1d147bfa6429, when a parent sta_info structure is allocated,
allocate a shadow copy of the ps_lock pointer, then initialize it:

#define PS_LOCK 1
struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata,
				const u8 *addr, gfp_t gfp)
{
	struct sta_info *sta;
	spinlock_t *ps_lock;

	/* Parent structure is created */
	sta = kzalloc(sizeof(*sta) + hw->sta_data_size, gfp);

	/* Attach a corresponding shadow variable, then initialize it */
	ps_lock = klp_shadow_alloc(sta, PS_LOCK, NULL, sizeof(*ps_lock), gfp);
	if (!ps_lock)
		goto shadow_fail;
	spin_lock_init(ps_lock);
	...

When requiring a ps_lock, query the shadow variable API to retrieve one
for a specific struct sta_info:

void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
{
	spinlock_t *ps_lock;

	/* sync with ieee80211_tx_h_unicast_ps_buf */
	ps_lock = klp_shadow_get(sta, PS_LOCK);
	if (ps_lock)
		spin_lock(ps_lock);
	...

When the parent sta_info structure is freed, first free the shadow
variable:

void sta_info_free(struct ieee80211_local *local, struct sta_info *sta)
{
	klp_shadow_free(sta, PS_LOCK);
	kfree(sta);
	...
In-flight parent objects
------------------------

Sometimes it may not be convenient or possible to allocate shadow
variables alongside their parent objects. Or a livepatch fix may
require shadow variables for only a subset of parent object instances.
In these cases, the klp_shadow_get_or_alloc() call can be used to attach
shadow variables to parents already in-flight.

For commit 1d147bfa6429, a good spot to allocate a shadow spinlock is
inside ieee80211_sta_ps_deliver_wakeup():

#define PS_LOCK 1
void ieee80211_sta_ps_deliver_wakeup(struct sta_info *sta)
{
	DEFINE_SPINLOCK(ps_lock_fallback);
	spinlock_t *ps_lock;

	/* sync with ieee80211_tx_h_unicast_ps_buf */
	ps_lock = klp_shadow_get_or_alloc(sta, PS_LOCK,
			&ps_lock_fallback, sizeof(ps_lock_fallback),
			GFP_ATOMIC);
	if (ps_lock)
		spin_lock(ps_lock);
	...

This usage will create a shadow variable only if needed; otherwise it
will use the one that was already created for this <obj, id> pair.

Like the previous use-case, the shadow spinlock needs to be cleaned up.
A shadow variable can be freed just before its parent object is freed,
or even when the shadow variable itself is no longer required.
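
For example, the livepatch module carrying this fix could reclaim any
remaining shadow spinlocks when it is unloaded. The sketch below assumes
the module follows the usual register/enable pattern used by the
samples/livepatch/ modules; the exit function name and the patch
variable are illustrative:

static void livepatch_ps_lock_fix_exit(void)
{
	/* Detach and free every <*, PS_LOCK> shadow variable */
	klp_shadow_free_all(PS_LOCK);
	WARN_ON(klp_unregister_patch(&patch));
}

Alternatively, a patched sta_info_free() could call
klp_shadow_free(sta, PS_LOCK) just before kfree(sta), exactly as in the
first use-case.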
Other use-cases
---------------

Shadow variables can also be used as a flag indicating that a data
structure was allocated by new, livepatched code. In this case, it
doesn't matter what data value the shadow variable holds; its mere
existence signals how to handle the parent object.
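
As a purely illustrative sketch of this pattern (struct thing,
SV_PATCHED_ALLOC and both function names are made-up, not part of this
commit), the shadow payload itself is ignored; only its presence for a
given parent matters:

#define SV_PATCHED_ALLOC 1

struct thing { int payload; };

struct thing *livepatch_thing_alloc(gfp_t gfp)
{
	struct thing *t;
	bool tag = true;

	t = kzalloc(sizeof(*t), gfp);
	if (!t)
		return NULL;

	/* Mark objects created by the new, livepatched allocator */
	klp_shadow_alloc(t, SV_PATCHED_ALLOC, &tag, sizeof(tag), gfp);
	return t;
}

void livepatch_thing_free(struct thing *t)
{
	/* Objects without the flag predate the livepatch */
	if (klp_shadow_get(t, SV_PATCHED_ALLOC)) {
		/* new-code-only teardown would go here */
		klp_shadow_free(t, SV_PATCHED_ALLOC);
	}
	kfree(t);
}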
3. References
=============

* https://github.com/dynup/kpatch

The livepatch implementation is based on the kpatch version of shadow
variables.

* http://files.mkgnu.net/files/dynamos/doc/papers/dynamos_eurosys_07.pdf

Dynamic and Adaptive Updates of Non-Quiescent Subsystems in Commodity
Operating System Kernels (Kritis Makris, Kyung Dong Ryu 2007) presented
a datatype update technique called "shadow data structures".
......@@ -87,10 +87,35 @@ struct klp_func {
bool transition;
};
struct klp_object;
/**
* struct klp_callbacks - pre/post live-(un)patch callback structure
* @pre_patch: executed before code patching
* @post_patch: executed after code patching
* @pre_unpatch: executed before code unpatching
* @post_unpatch: executed after code unpatching
* @post_unpatch_enabled: flag indicating if post-unpatch callback
* should run
*
* All callbacks are optional. Only the pre-patch callback, if provided,
* will be unconditionally executed. If the parent klp_object fails to
* patch for any reason, including a non-zero error status returned from
* the pre-patch callback, no further callbacks will be executed.
*/
struct klp_callbacks {
int (*pre_patch)(struct klp_object *obj);
void (*post_patch)(struct klp_object *obj);
void (*pre_unpatch)(struct klp_object *obj);
void (*post_unpatch)(struct klp_object *obj);
bool post_unpatch_enabled;
};
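A hedged illustration of how a livepatch module might wire these
callbacks into its objects (placeholder names only; the complete,
working demo is samples/livepatch/livepatch-callbacks-demo.c further
down in this commit):

static int demo_pre_patch(struct klp_object *obj)
{
	/* a non-zero return fails the pre-patch step; no further callbacks run */
	return 0;
}

static void demo_post_unpatch(struct klp_object *obj)
{
	/* runs only if the pre-patch callback succeeded earlier */
}

static struct klp_func funcs[] = {
	{ }	/* function list elided for brevity */
};

static struct klp_object objs[] = {
	{
		.name = "some_target_module",	/* hypothetical target */
		.funcs = funcs,
		.callbacks = {
			.pre_patch = demo_pre_patch,
			.post_unpatch = demo_post_unpatch,
		},
	}, { }
};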
/**
* struct klp_object - kernel object structure for live patching
* @name: module name (or NULL for vmlinux)
* @funcs: function entries for functions to be patched in the object
* @callbacks: functions to be executed pre/post (un)patching
* @kobj: kobject for sysfs resources
* @mod: kernel module associated with the patched object
* (NULL for vmlinux)
......@@ -100,6 +125,7 @@ struct klp_object {
/* external */
const char *name;
struct klp_func *funcs;
struct klp_callbacks callbacks;
/* internal */
struct kobject kobj;
......@@ -164,6 +190,14 @@ static inline bool klp_have_reliable_stack(void)
IS_ENABLED(CONFIG_HAVE_RELIABLE_STACKTRACE);
}
void *klp_shadow_get(void *obj, unsigned long id);
void *klp_shadow_alloc(void *obj, unsigned long id, void *data,
size_t size, gfp_t gfp_flags);
void *klp_shadow_get_or_alloc(void *obj, unsigned long id, void *data,
size_t size, gfp_t gfp_flags);
void klp_shadow_free(void *obj, unsigned long id);
void klp_shadow_free_all(unsigned long id);
#else /* !CONFIG_LIVEPATCH */
static inline int klp_module_coming(struct module *mod) { return 0; }
......
obj-$(CONFIG_LIVEPATCH) += livepatch.o
livepatch-objs := core.o patch.o transition.o
livepatch-objs := core.o patch.o shadow.o transition.o
......@@ -54,11 +54,6 @@ static bool klp_is_module(struct klp_object *obj)
return obj->name;
}
static bool klp_is_object_loaded(struct klp_object *obj)
{
return !obj->name || obj->mod;
}
/* sets obj->mod if object is not vmlinux and module is found */
static void klp_find_object_module(struct klp_object *obj)
{
......@@ -285,6 +280,11 @@ static int klp_write_object_relocations(struct module *pmod,
static int __klp_disable_patch(struct klp_patch *patch)
{
struct klp_object *obj;
if (WARN_ON(!patch->enabled))
return -EINVAL;
if (klp_transition_patch)
return -EBUSY;
......@@ -295,6 +295,10 @@ static int __klp_disable_patch(struct klp_patch *patch)
klp_init_transition(patch, KLP_UNPATCHED);
klp_for_each_object(patch, obj)
if (obj->patched)
klp_pre_unpatch_callback(obj);
/*
* Enforce the order of the func->transition writes in
* klp_init_transition() and the TIF_PATCH_PENDING writes in
......@@ -388,13 +392,18 @@ static int __klp_enable_patch(struct klp_patch *patch)
if (!klp_is_object_loaded(obj))
continue;
ret = klp_patch_object(obj);
ret = klp_pre_patch_callback(obj);
if (ret) {
pr_warn("failed to enable patch '%s'\n",
patch->mod->name);
pr_warn("pre-patch callback failed for object '%s'\n",
klp_is_module(obj) ? obj->name : "vmlinux");
goto err;
}
klp_cancel_transition();
return ret;
ret = klp_patch_object(obj);
if (ret) {
pr_warn("failed to patch object '%s'\n",
klp_is_module(obj) ? obj->name : "vmlinux");
goto err;
}
}
......@@ -403,6 +412,11 @@ static int __klp_enable_patch(struct klp_patch *patch)
patch->enabled = true;
return 0;
err:
pr_warn("failed to enable patch '%s'\n", patch->mod->name);
klp_cancel_transition();
return ret;
}
/**
......@@ -854,9 +868,15 @@ static void klp_cleanup_module_patches_limited(struct module *mod,
* is in transition.
*/
if (patch->enabled || patch == klp_transition_patch) {
if (patch != klp_transition_patch)
klp_pre_unpatch_callback(obj);
pr_notice("reverting patch '%s' on unloading module '%s'\n",
patch->mod->name, obj->mod->name);
klp_unpatch_object(obj);
klp_post_unpatch_callback(obj);
}
klp_free_object_loaded(obj);
......@@ -906,13 +926,25 @@ int klp_module_coming(struct module *mod)
pr_notice("applying patch '%s' to loading module '%s'\n",
patch->mod->name, obj->mod->name);
ret = klp_pre_patch_callback(obj);
if (ret) {
pr_warn("pre-patch callback failed for object '%s'\n",
obj->name);
goto err;
}
ret = klp_patch_object(obj);
if (ret) {
pr_warn("failed to apply patch '%s' to module '%s' (%d)\n",
patch->mod->name, obj->mod->name, ret);
klp_post_unpatch_callback(obj);
goto err;
}
if (patch != klp_transition_patch)
klp_post_patch_callback(obj);
break;
}
}
......
......@@ -2,6 +2,46 @@
#ifndef _LIVEPATCH_CORE_H
#define _LIVEPATCH_CORE_H
#include <linux/livepatch.h>
extern struct mutex klp_mutex;
static inline bool klp_is_object_loaded(struct klp_object *obj)
{
return !obj->name || obj->mod;
}
static inline int klp_pre_patch_callback(struct klp_object *obj)
{
int ret = 0;
if (obj->callbacks.pre_patch)
ret = (*obj->callbacks.pre_patch)(obj);
obj->callbacks.post_unpatch_enabled = !ret;
return ret;
}
static inline void klp_post_patch_callback(struct klp_object *obj)
{
if (obj->callbacks.post_patch)
(*obj->callbacks.post_patch)(obj);
}
static inline void klp_pre_unpatch_callback(struct klp_object *obj)
{
if (obj->callbacks.pre_unpatch)
(*obj->callbacks.pre_unpatch)(obj);
}
static inline void klp_post_unpatch_callback(struct klp_object *obj)
{
if (obj->callbacks.post_unpatch_enabled &&
obj->callbacks.post_unpatch)
(*obj->callbacks.post_unpatch)(obj);
obj->callbacks.post_unpatch_enabled = false;
}
#endif /* _LIVEPATCH_CORE_H */
......@@ -28,6 +28,7 @@
#include <linux/slab.h>
#include <linux/bug.h>
#include <linux/printk.h>
#include "core.h"
#include "patch.h"
#include "transition.h"
......
/*
* shadow.c - Shadow Variables
*
* Copyright (C) 2014 Josh Poimboeuf <jpoimboe@redhat.com>
* Copyright (C) 2014 Seth Jennings <sjenning@redhat.com>
* Copyright (C) 2017 Joe Lawrence <joe.lawrence@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
/**
* DOC: Shadow variable API concurrency notes:
*
* The shadow variable API provides a simple relationship between an
* <obj, id> pair and a pointer value. It is the responsibility of the
* caller to provide any mutual exclusion required of the shadow data.
*
* Once a shadow variable is attached to its parent object via the
* klp_shadow_*alloc() API calls, it is considered live: any subsequent
* call to klp_shadow_get() may then return the shadow variable's data
* pointer. Callers of klp_shadow_*alloc() should prepare shadow data
* accordingly.
*
* The klp_shadow_*alloc() API calls may allocate memory for new shadow
* variable structures. Their implementation does not call kmalloc
* inside any spinlocks, but API callers should pass GFP flags according
* to their specific needs.
*
* The klp_shadow_hash is an RCU-enabled hashtable and is safe against
* concurrent klp_shadow_free() and klp_shadow_get() operations.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/hashtable.h>
#include <linux/slab.h>
#include <linux/livepatch.h>
static DEFINE_HASHTABLE(klp_shadow_hash, 12);
/*
* klp_shadow_lock provides exclusive access to the klp_shadow_hash and
* the shadow variables it references.
*/
static DEFINE_SPINLOCK(klp_shadow_lock);
/**
* struct klp_shadow - shadow variable structure
* @node: klp_shadow_hash hash table node
* @rcu_head: RCU is used to safely free this structure
* @obj: pointer to parent object
* @id: data identifier
* @data: data area
*/
struct klp_shadow {
struct hlist_node node;
struct rcu_head rcu_head;
void *obj;
unsigned long id;
char data[];
};
/**
* klp_shadow_match() - verify a shadow variable matches given <obj, id>
* @shadow: shadow variable to match
* @obj: pointer to parent object
* @id: data identifier
*
* Return: true if the shadow variable matches.
*/
static inline bool klp_shadow_match(struct klp_shadow *shadow, void *obj,
unsigned long id)
{
return shadow->obj == obj && shadow->id == id;
}
/**
* klp_shadow_get() - retrieve a shadow variable data pointer
* @obj: pointer to parent object
* @id: data identifier
*
* Return: the shadow variable data element, NULL on failure.
*/
void *klp_shadow_get(void *obj, unsigned long id)
{
struct klp_shadow *shadow;
rcu_read_lock();
hash_for_each_possible_rcu(klp_shadow_hash, shadow, node,
(unsigned long)obj) {
if (klp_shadow_match(shadow, obj, id)) {
rcu_read_unlock();
return shadow->data;
}
}
rcu_read_unlock();
return NULL;
}
EXPORT_SYMBOL_GPL(klp_shadow_get);
static void *__klp_shadow_get_or_alloc(void *obj, unsigned long id, void *data,
size_t size, gfp_t gfp_flags, bool warn_on_exist)
{
struct klp_shadow *new_shadow;
void *shadow_data;
unsigned long flags;
/* Check if the shadow variable already exists */
shadow_data = klp_shadow_get(obj, id);
if (shadow_data)
goto exists;
/* Allocate a new shadow variable for use inside the lock below */
new_shadow = kzalloc(size + sizeof(*new_shadow), gfp_flags);
if (!new_shadow)
return NULL;
new_shadow->obj = obj;
new_shadow->id = id;
/* Initialize the shadow variable if data provided */
if (data)
memcpy(new_shadow->data, data, size);
/* Look for <obj, id> again under the lock */
spin_lock_irqsave(&klp_shadow_lock, flags);
shadow_data = klp_shadow_get(obj, id);
if (unlikely(shadow_data)) {
/*
* Shadow variable was found, throw away speculative
* allocation.
*/
spin_unlock_irqrestore(&klp_shadow_lock, flags);
kfree(new_shadow);
goto exists;
}
/* No <obj, id> found, so attach the newly allocated one */
hash_add_rcu(klp_shadow_hash, &new_shadow->node,
(unsigned long)new_shadow->obj);
spin_unlock_irqrestore(&klp_shadow_lock, flags);
return new_shadow->data;
exists:
if (warn_on_exist) {
WARN(1, "Duplicate shadow variable <%p, %lx>\n", obj, id);
return NULL;
}
return shadow_data;
}
/**
* klp_shadow_alloc() - allocate and add a new shadow variable
* @obj: pointer to parent object
* @id: data identifier
* @data: pointer to data to attach to parent
* @size: size of attached data
* @gfp_flags: GFP mask for allocation
*
* Allocates @size bytes for new shadow variable data using @gfp_flags
* and copies @size bytes from @data into the new shadow variable's own
* data space. If @data is NULL, @size bytes are still allocated, but
* no copy is performed. The new shadow variable is then added to the
* global hashtable.
*
* If an existing <obj, id> shadow variable can be found, this routine
* will issue a WARN, exit early and return NULL.
*
* Return: the shadow variable data element, NULL on duplicate or
* failure.
*/
void *klp_shadow_alloc(void *obj, unsigned long id, void *data,
size_t size, gfp_t gfp_flags)
{
return __klp_shadow_get_or_alloc(obj, id, data, size, gfp_flags, true);
}
EXPORT_SYMBOL_GPL(klp_shadow_alloc);
/**
* klp_shadow_get_or_alloc() - get existing or allocate a new shadow variable
* @obj: pointer to parent object
* @id: data identifier
* @data: pointer to data to attach to parent
* @size: size of attached data
* @gfp_flags: GFP mask for allocation
*
* Returns a pointer to existing shadow data if an <obj, id> shadow
* variable is already present. Otherwise, it creates a new shadow
* variable like klp_shadow_alloc().
*
* This function guarantees that only one shadow variable exists with
* the given @id for the given @obj. It also guarantees that the shadow
* variable will be initialized by the given @data only when it did not
* exist before.
*
* Return: the shadow variable data element, NULL on failure.
*/
void *klp_shadow_get_or_alloc(void *obj, unsigned long id, void *data,
size_t size, gfp_t gfp_flags)
{
return __klp_shadow_get_or_alloc(obj, id, data, size, gfp_flags, false);
}
EXPORT_SYMBOL_GPL(klp_shadow_get_or_alloc);
/**
* klp_shadow_free() - detach and free a <obj, id> shadow variable
* @obj: pointer to parent object
* @id: data identifier
*
* This function releases the memory for this <obj, id> shadow variable
* instance, callers should stop referencing it accordingly.
*/
void klp_shadow_free(void *obj, unsigned long id)
{
struct klp_shadow *shadow;
unsigned long flags;
spin_lock_irqsave(&klp_shadow_lock, flags);
/* Delete <obj, id> from hash */
hash_for_each_possible(klp_shadow_hash, shadow, node,
(unsigned long)obj) {
if (klp_shadow_match(shadow, obj, id)) {
hash_del_rcu(&shadow->node);
kfree_rcu(shadow, rcu_head);
break;
}
}
spin_unlock_irqrestore(&klp_shadow_lock, flags);
}
EXPORT_SYMBOL_GPL(klp_shadow_free);
/**
* klp_shadow_free_all() - detach and free all <*, id> shadow variables
* @id: data identifier
*
* This function releases the memory for all <*, id> shadow variable
* instances, callers should stop referencing them accordingly.
*/
void klp_shadow_free_all(unsigned long id)
{
struct klp_shadow *shadow;
unsigned long flags;
int i;
spin_lock_irqsave(&klp_shadow_lock, flags);
/* Delete all <*, id> from hash */
hash_for_each(klp_shadow_hash, i, shadow, node) {
if (klp_shadow_match(shadow, shadow->obj, id)) {
hash_del_rcu(&shadow->node);
kfree_rcu(shadow, rcu_head);
}
}
spin_unlock_irqrestore(&klp_shadow_lock, flags);
}
EXPORT_SYMBOL_GPL(klp_shadow_free_all);
......@@ -82,6 +82,10 @@ static void klp_complete_transition(void)
unsigned int cpu;
bool immediate_func = false;
pr_debug("'%s': completing %s transition\n",
klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
if (klp_target_state == KLP_UNPATCHED) {
/*
* All tasks have transitioned to KLP_UNPATCHED so we can now
......@@ -109,9 +113,6 @@ static void klp_complete_transition(void)
}
}
if (klp_target_state == KLP_UNPATCHED && !immediate_func)
module_put(klp_transition_patch->mod);
/* Prevent klp_ftrace_handler() from seeing KLP_UNDEFINED state */
if (klp_target_state == KLP_PATCHED)
klp_synchronize_transition();
......@@ -130,6 +131,27 @@ static void klp_complete_transition(void)
}
done:
klp_for_each_object(klp_transition_patch, obj) {
if (!klp_is_object_loaded(obj))
continue;
if (klp_target_state == KLP_PATCHED)
klp_post_patch_callback(obj);
else if (klp_target_state == KLP_UNPATCHED)
klp_post_unpatch_callback(obj);
}
pr_notice("'%s': %s complete\n", klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
/*
* See complementary comment in __klp_enable_patch() for why we
* keep the module reference for immediate patches.
*/
if (!klp_transition_patch->immediate && !immediate_func &&
klp_target_state == KLP_UNPATCHED) {
module_put(klp_transition_patch->mod);
}
klp_target_state = KLP_UNDEFINED;
klp_transition_patch = NULL;
}
......@@ -145,6 +167,9 @@ void klp_cancel_transition(void)
if (WARN_ON_ONCE(klp_target_state != KLP_PATCHED))
return;
pr_debug("'%s': canceling patching transition, going to unpatch\n",
klp_transition_patch->mod->name);
klp_target_state = KLP_UNPATCHED;
klp_complete_transition();
}
......@@ -408,9 +433,6 @@ void klp_try_complete_transition(void)
}
success:
pr_notice("'%s': %s complete\n", klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
/* we're done, now cleanup the data structures */
klp_complete_transition();
}
......@@ -426,7 +448,8 @@ void klp_start_transition(void)
WARN_ON_ONCE(klp_target_state == KLP_UNDEFINED);
pr_notice("'%s': %s...\n", klp_transition_patch->mod->name,
pr_notice("'%s': starting %s transition\n",
klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
/*
......@@ -482,6 +505,9 @@ void klp_init_transition(struct klp_patch *patch, int state)
*/
klp_target_state = state;
pr_debug("'%s': initializing %s transition\n", patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching" : "unpatching");
/*
* If the patch can be applied or reverted immediately, skip the
* per-task transitions.
......@@ -547,6 +573,11 @@ void klp_reverse_transition(void)
unsigned int cpu;
struct task_struct *g, *task;
pr_debug("'%s': reversing transition from %s\n",
klp_transition_patch->mod->name,
klp_target_state == KLP_PATCHED ? "patching to unpatching" :
"unpatching to patching");
klp_transition_patch->enabled = !klp_transition_patch->enabled;
klp_target_state = !klp_target_state;
......
......@@ -71,11 +71,10 @@ config SAMPLE_RPMSG_CLIENT
the rpmsg bus.
config SAMPLE_LIVEPATCH
tristate "Build live patching sample -- loadable modules only"
tristate "Build live patching samples -- loadable modules only"
depends on LIVEPATCH && m
help
Builds a sample live patch that replaces the procfs handler
for /proc/cmdline to print "this has been live patched".
Build sample live patch demonstrations.
config SAMPLE_CONFIGFS
tristate "Build configfs patching sample -- loadable modules only"
......
obj-$(CONFIG_SAMPLE_LIVEPATCH) += livepatch-sample.o
obj-$(CONFIG_SAMPLE_LIVEPATCH) += livepatch-shadow-mod.o
obj-$(CONFIG_SAMPLE_LIVEPATCH) += livepatch-shadow-fix1.o
obj-$(CONFIG_SAMPLE_LIVEPATCH) += livepatch-shadow-fix2.o
obj-$(CONFIG_SAMPLE_LIVEPATCH) += livepatch-callbacks-demo.o
obj-$(CONFIG_SAMPLE_LIVEPATCH) += livepatch-callbacks-mod.o
obj-$(CONFIG_SAMPLE_LIVEPATCH) += livepatch-callbacks-busymod.o
/*
* Copyright (C) 2017 Joe Lawrence <joe.lawrence@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
/*
* livepatch-callbacks-busymod.c - (un)patching callbacks demo support module
*
*
* Purpose
* -------
*
* Simple module to demonstrate livepatch (un)patching callbacks.
*
*
* Usage
* -----
*
* This module is not intended to be standalone. See the "Usage"
* section of livepatch-callbacks-mod.c.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/delay.h>
static int sleep_secs;
module_param(sleep_secs, int, 0644);
MODULE_PARM_DESC(sleep_secs, "sleep_secs (default=0)");
static void busymod_work_func(struct work_struct *work);
static DECLARE_DELAYED_WORK(work, busymod_work_func);
static void busymod_work_func(struct work_struct *work)
{
pr_info("%s, sleeping %d seconds ...\n", __func__, sleep_secs);
msleep(sleep_secs * 1000);
pr_info("%s exit\n", __func__);
}
static int livepatch_callbacks_mod_init(void)
{
pr_info("%s\n", __func__);
schedule_delayed_work(&work,
msecs_to_jiffies(1000 * 0));
return 0;
}
static void livepatch_callbacks_mod_exit(void)
{
cancel_delayed_work_sync(&work);
pr_info("%s\n", __func__);
}
module_init(livepatch_callbacks_mod_init);
module_exit(livepatch_callbacks_mod_exit);
MODULE_LICENSE("GPL");
/*
* Copyright (C) 2017 Joe Lawrence <joe.lawrence@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
/*
* livepatch-callbacks-demo.c - (un)patching callbacks livepatch demo
*
*
* Purpose
* -------
*
* Demonstration of registering livepatch (un)patching callbacks.
*
*
* Usage
* -----
*
* Step 1 - load the simple module
*
* insmod samples/livepatch/livepatch-callbacks-mod.ko
*
*
* Step 2 - load the demonstration livepatch (with callbacks)
*
* insmod samples/livepatch/livepatch-callbacks-demo.ko
*
*
* Step 3 - cleanup
*
* echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
* rmmod livepatch_callbacks_demo
* rmmod livepatch_callbacks_mod
*
* Watch dmesg output to see livepatch enablement, callback execution
* and patching operations for both vmlinux and module targets.
*
* NOTE: swap the insmod order of livepatch-callbacks-mod.ko and
* livepatch-callbacks-demo.ko to observe what happens when a
* target module is loaded after a livepatch with callbacks.
*
* NOTE: 'pre_patch_ret' is a module parameter that sets the pre-patch
* callback return status. Try setting up a non-zero status
* such as -19 (-ENODEV):
*
* # Load demo livepatch, vmlinux is patched
* insmod samples/livepatch/livepatch-callbacks-demo.ko
*
* # Setup next pre-patch callback to return -ENODEV
* echo -19 > /sys/module/livepatch_callbacks_demo/parameters/pre_patch_ret
*
* # Module loader refuses to load the target module
* insmod samples/livepatch/livepatch-callbacks-mod.ko
* insmod: ERROR: could not insert module samples/livepatch/livepatch-callbacks-mod.ko: No such device
*
* NOTE: There is a second target module,
* livepatch-callbacks-busymod.ko, available for experimenting
* with livepatch (un)patch callbacks. This module contains
* a 'sleep_secs' parameter that parks the module on one of the
* functions that the livepatch demo module wants to patch.
* Modifying this value and tweaking the order of module loads can
* effectively demonstrate stalled patch transitions:
*
* # Load a target module, let it park on 'busymod_work_func' for
* # thirty seconds
* insmod samples/livepatch/livepatch-callbacks-busymod.ko sleep_secs=30
*
* # Meanwhile load the livepatch
* insmod samples/livepatch/livepatch-callbacks-demo.ko
*
* # ... then load and unload another target module while the
* # transition is in progress
* insmod samples/livepatch/livepatch-callbacks-mod.ko
* rmmod samples/livepatch/livepatch-callbacks-mod.ko
*
* # Finally cleanup
* echo 0 > /sys/kernel/livepatch/livepatch_callbacks_demo/enabled
* rmmod samples/livepatch/livepatch-callbacks-demo.ko
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
static int pre_patch_ret;
module_param(pre_patch_ret, int, 0644);
MODULE_PARM_DESC(pre_patch_ret, "pre_patch_ret (default=0)");
static const char *const module_state[] = {
[MODULE_STATE_LIVE] = "[MODULE_STATE_LIVE] Normal state",
[MODULE_STATE_COMING] = "[MODULE_STATE_COMING] Full formed, running module_init",
[MODULE_STATE_GOING] = "[MODULE_STATE_GOING] Going away",
[MODULE_STATE_UNFORMED] = "[MODULE_STATE_UNFORMED] Still setting it up",
};
static void callback_info(const char *callback, struct klp_object *obj)
{
if (obj->mod)
pr_info("%s: %s -> %s\n", callback, obj->mod->name,
module_state[obj->mod->state]);
else
pr_info("%s: vmlinux\n", callback);
}
/* Executed on object patching (ie, patch enablement) */
static int pre_patch_callback(struct klp_object *obj)
{
callback_info(__func__, obj);
return pre_patch_ret;
}
/* Executed on object patching (ie, patch enablement) */
static void post_patch_callback(struct klp_object *obj)
{
callback_info(__func__, obj);
}
/* Executed on object unpatching (ie, patch disablement) */
static void pre_unpatch_callback(struct klp_object *obj)
{
callback_info(__func__, obj);
}
/* Executed on object unpatching (ie, patch disablement) */
static void post_unpatch_callback(struct klp_object *obj)
{
callback_info(__func__, obj);
}
static void patched_work_func(struct work_struct *work)
{
pr_info("%s\n", __func__);
}
static struct klp_func no_funcs[] = {
{ }
};
static struct klp_func busymod_funcs[] = {
{
.old_name = "busymod_work_func",
.new_func = patched_work_func,
}, { }
};
static struct klp_object objs[] = {
{
.name = NULL, /* vmlinux */
.funcs = no_funcs,
.callbacks = {
.pre_patch = pre_patch_callback,
.post_patch = post_patch_callback,
.pre_unpatch = pre_unpatch_callback,
.post_unpatch = post_unpatch_callback,
},
}, {
.name = "livepatch_callbacks_mod",
.funcs = no_funcs,
.callbacks = {
.pre_patch = pre_patch_callback,
.post_patch = post_patch_callback,
.pre_unpatch = pre_unpatch_callback,
.post_unpatch = post_unpatch_callback,
},
}, {
.name = "livepatch_callbacks_busymod",
.funcs = busymod_funcs,
.callbacks = {
.pre_patch = pre_patch_callback,
.post_patch = post_patch_callback,
.pre_unpatch = pre_unpatch_callback,
.post_unpatch = post_unpatch_callback,
},
}, { }
};
static struct klp_patch patch = {
.mod = THIS_MODULE,
.objs = objs,
};
static int livepatch_callbacks_demo_init(void)
{
int ret;
if (!klp_have_reliable_stack() && !patch.immediate) {
/*
* WARNING: Be very careful when using 'patch.immediate' in
* your patches. It's ok to use it for simple patches like
* this, but for more complex patches which change function
* semantics, locking semantics, or data structures, it may not
* be safe. Use of this option will also prevent removal of
* the patch.
*
* See Documentation/livepatch/livepatch.txt for more details.
*/
patch.immediate = true;
pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
}
ret = klp_register_patch(&patch);
if (ret)
return ret;
ret = klp_enable_patch(&patch);
if (ret) {
WARN_ON(klp_unregister_patch(&patch));
return ret;
}
return 0;
}
static void livepatch_callbacks_demo_exit(void)
{
WARN_ON(klp_unregister_patch(&patch));
}
module_init(livepatch_callbacks_demo_init);
module_exit(livepatch_callbacks_demo_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
/*
* Copyright (C) 2017 Joe Lawrence <joe.lawrence@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
/*
* livepatch-callbacks-mod.c - (un)patching callbacks demo support module
*
*
* Purpose
* -------
*
* Simple module to demonstrate livepatch (un)patching callbacks.
*
*
* Usage
* -----
*
* This module is not intended to be standalone. See the "Usage"
* section of livepatch-callbacks-demo.c.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/kernel.h>
static int livepatch_callbacks_mod_init(void)
{
pr_info("%s\n", __func__);
return 0;
}
static void livepatch_callbacks_mod_exit(void)
{
pr_info("%s\n", __func__);
}
module_init(livepatch_callbacks_mod_init);
module_exit(livepatch_callbacks_mod_exit);
MODULE_LICENSE("GPL");
/*
* Copyright (C) 2017 Joe Lawrence <joe.lawrence@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
/*
* livepatch-shadow-fix1.c - Shadow variables, livepatch demo
*
* Purpose
* -------
*
* Fixes the memory leak introduced in livepatch-shadow-mod through the
* use of a shadow variable. This fix demonstrates the "extending" of
* short-lived data structures by patching their allocation and release
* functions.
*
*
* Usage
* -----
*
* This module is not intended to be standalone. See the "Usage"
* section of livepatch-shadow-mod.c.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
#include <linux/slab.h>
/* Shadow variable enums */
#define SV_LEAK 1
/* Allocate new dummies every second */
#define ALLOC_PERIOD 1
/* Check for expired dummies after a few new ones have been allocated */
#define CLEANUP_PERIOD (3 * ALLOC_PERIOD)
/* Dummies expire after a few cleanup instances */
#define EXPIRE_PERIOD (4 * CLEANUP_PERIOD)
struct dummy {
struct list_head list;
unsigned long jiffies_expire;
};
struct dummy *livepatch_fix1_dummy_alloc(void)
{
struct dummy *d;
void *leak;
d = kzalloc(sizeof(*d), GFP_KERNEL);
if (!d)
return NULL;
d->jiffies_expire = jiffies +
msecs_to_jiffies(1000 * EXPIRE_PERIOD);
/*
* Patch: save the extra memory location into a SV_LEAK shadow
* variable. A patched dummy_free routine can later fetch this
* pointer to handle resource release.
*/
leak = kzalloc(sizeof(int), GFP_KERNEL);
klp_shadow_alloc(d, SV_LEAK, &leak, sizeof(leak), GFP_KERNEL);
pr_info("%s: dummy @ %p, expires @ %lx\n",
__func__, d, d->jiffies_expire);
return d;
}
void livepatch_fix1_dummy_free(struct dummy *d)
{
void **shadow_leak, *leak;
/*
* Patch: fetch the saved SV_LEAK shadow variable, detach and
* free it. Note: handle cases where this shadow variable does
* not exist (ie, dummy structures allocated before this livepatch
* was loaded.)
*/
shadow_leak = klp_shadow_get(d, SV_LEAK);
if (shadow_leak) {
leak = *shadow_leak;
klp_shadow_free(d, SV_LEAK);
kfree(leak);
pr_info("%s: dummy @ %p, prevented leak @ %p\n",
__func__, d, leak);
} else {
pr_info("%s: dummy @ %p leaked!\n", __func__, d);
}
kfree(d);
}
static struct klp_func funcs[] = {
{
.old_name = "dummy_alloc",
.new_func = livepatch_fix1_dummy_alloc,
},
{
.old_name = "dummy_free",
.new_func = livepatch_fix1_dummy_free,
}, { }
};
static struct klp_object objs[] = {
{
.name = "livepatch_shadow_mod",
.funcs = funcs,
}, { }
};
static struct klp_patch patch = {
.mod = THIS_MODULE,
.objs = objs,
};
static int livepatch_shadow_fix1_init(void)
{
int ret;
if (!klp_have_reliable_stack() && !patch.immediate) {
/*
* WARNING: Be very careful when using 'patch.immediate' in
* your patches. It's ok to use it for simple patches like
* this, but for more complex patches which change function
* semantics, locking semantics, or data structures, it may not
* be safe. Use of this option will also prevent removal of
* the patch.
*
* See Documentation/livepatch/livepatch.txt for more details.
*/
patch.immediate = true;
pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
}
ret = klp_register_patch(&patch);
if (ret)
return ret;
ret = klp_enable_patch(&patch);
if (ret) {
WARN_ON(klp_unregister_patch(&patch));
return ret;
}
return 0;
}
static void livepatch_shadow_fix1_exit(void)
{
/* Cleanup any existing SV_LEAK shadow variables */
klp_shadow_free_all(SV_LEAK);
WARN_ON(klp_unregister_patch(&patch));
}
module_init(livepatch_shadow_fix1_init);
module_exit(livepatch_shadow_fix1_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
/*
* Copyright (C) 2017 Joe Lawrence <joe.lawrence@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
/*
* livepatch-shadow-fix2.c - Shadow variables, livepatch demo
*
* Purpose
* -------
*
* Adds functionality to livepatch-shadow-mod's in-flight data
* structures through a shadow variable. The livepatch patches a
* routine that periodically inspects data structures, incrementing a
* per-data-structure counter, creating the counter if needed.
*
*
* Usage
* -----
*
* This module is not intended to be standalone. See the "Usage"
* section of livepatch-shadow-mod.c.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/livepatch.h>
#include <linux/slab.h>
/* Shadow variable enums */
#define SV_LEAK 1
#define SV_COUNTER 2
struct dummy {
struct list_head list;
unsigned long jiffies_expire;
};
bool livepatch_fix2_dummy_check(struct dummy *d, unsigned long jiffies)
{
int *shadow_count;
int count;
/*
* Patch: handle in-flight dummy structures, if they do not
* already have a SV_COUNTER shadow variable, then attach a
* new one.
*/
count = 0;
shadow_count = klp_shadow_get_or_alloc(d, SV_COUNTER,
&count, sizeof(count),
GFP_NOWAIT);
if (shadow_count)
*shadow_count += 1;
return time_after(jiffies, d->jiffies_expire);
}
void livepatch_fix2_dummy_free(struct dummy *d)
{
void **shadow_leak, *leak;
int *shadow_count;
/* Patch: copy the memory leak patch from the fix1 module. */
shadow_leak = klp_shadow_get(d, SV_LEAK);
if (shadow_leak) {
leak = *shadow_leak;
klp_shadow_free(d, SV_LEAK);
kfree(leak);
pr_info("%s: dummy @ %p, prevented leak @ %p\n",
__func__, d, leak);
} else {
pr_info("%s: dummy @ %p leaked!\n", __func__, d);
}
/*
* Patch: fetch the SV_COUNTER shadow variable and display
* the final count. Detach the shadow variable.
*/
shadow_count = klp_shadow_get(d, SV_COUNTER);
if (shadow_count) {
pr_info("%s: dummy @ %p, check counter = %d\n",
__func__, d, *shadow_count);
klp_shadow_free(d, SV_COUNTER);
}
kfree(d);
}
static struct klp_func funcs[] = {
{
.old_name = "dummy_check",
.new_func = livepatch_fix2_dummy_check,
},
{
.old_name = "dummy_free",
.new_func = livepatch_fix2_dummy_free,
}, { }
};
static struct klp_object objs[] = {
{
.name = "livepatch_shadow_mod",
.funcs = funcs,
}, { }
};
static struct klp_patch patch = {
.mod = THIS_MODULE,
.objs = objs,
};
static int livepatch_shadow_fix2_init(void)
{
int ret;
if (!klp_have_reliable_stack() && !patch.immediate) {
/*
* WARNING: Be very careful when using 'patch.immediate' in
* your patches. It's ok to use it for simple patches like
* this, but for more complex patches which change function
* semantics, locking semantics, or data structures, it may not
* be safe. Use of this option will also prevent removal of
* the patch.
*
* See Documentation/livepatch/livepatch.txt for more details.
*/
patch.immediate = true;
pr_notice("The consistency model isn't supported for your architecture. Bypassing safety mechanisms and applying the patch immediately.\n");
}
ret = klp_register_patch(&patch);
if (ret)
return ret;
ret = klp_enable_patch(&patch);
if (ret) {
WARN_ON(klp_unregister_patch(&patch));
return ret;
}
return 0;
}
static void livepatch_shadow_fix2_exit(void)
{
/* Cleanup any existing SV_COUNTER shadow variables */
klp_shadow_free_all(SV_COUNTER);
WARN_ON(klp_unregister_patch(&patch));
}
module_init(livepatch_shadow_fix2_init);
module_exit(livepatch_shadow_fix2_exit);
MODULE_LICENSE("GPL");
MODULE_INFO(livepatch, "Y");
/*
* Copyright (C) 2017 Joe Lawrence <joe.lawrence@redhat.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses/>.
*/
/*
* livepatch-shadow-mod.c - Shadow variables, buggy module demo
*
* Purpose
* -------
*
* As a demonstration of livepatch shadow variable API, this module
* introduces memory leak behavior that livepatch modules
* livepatch-shadow-fix1.ko and livepatch-shadow-fix2.ko correct and
* enhance.
*
* WARNING - even though the livepatch-shadow-fix modules patch the
* memory leak, please load these modules at your own risk -- some
* amount of memory may be leaked before the bug is patched.
*
*
* Usage
* -----
*
* Step 1 - Load the buggy demonstration module:
*
* insmod samples/livepatch/livepatch-shadow-mod.ko
*
* Watch dmesg output for a few moments to see new dummies being allocated
* and a periodic cleanup check. (Note: a small amount of memory is
* being leaked.)
*
*
* Step 2 - Load livepatch fix1:
*
* insmod samples/livepatch/livepatch-shadow-fix1.ko
*
* Continue watching dmesg and note that now livepatch_fix1_dummy_free()
* and livepatch_fix1_dummy_alloc() are logging messages about leaked
* memory and eventually leaks prevented.
*
*
* Step 3 - Load livepatch fix2 (on top of fix1):
*
* insmod samples/livepatch/livepatch-shadow-fix2.ko
*
* This module extends functionality through shadow variables, as a new
* "check" counter is added to the dummy structure. Periodic dmesg
* messages will log these as dummies are cleaned up.
*
*
* Step 4 - Cleanup
*
* Unwind the demonstration by disabling the livepatch fix modules, then
* removing them and the demo module:
*
* echo 0 > /sys/kernel/livepatch/livepatch_shadow_fix2/enabled
* echo 0 > /sys/kernel/livepatch/livepatch_shadow_fix1/enabled
* rmmod livepatch-shadow-fix2
* rmmod livepatch-shadow-fix1
* rmmod livepatch-shadow-mod
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/stat.h>
#include <linux/workqueue.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Joe Lawrence <joe.lawrence@redhat.com>");
MODULE_DESCRIPTION("Buggy module for shadow variable demo");
/* Allocate new dummies every second */
#define ALLOC_PERIOD 1
/* Check for expired dummies after a few new ones have been allocated */
#define CLEANUP_PERIOD (3 * ALLOC_PERIOD)
/* Dummies expire after a few cleanup instances */
#define EXPIRE_PERIOD (4 * CLEANUP_PERIOD)
/*
* Keep a list of all the dummies so we can clean up any residual ones
* on module exit
*/
LIST_HEAD(dummy_list);
DEFINE_MUTEX(dummy_list_mutex);
struct dummy {
struct list_head list;
unsigned long jiffies_expire;
};
noinline struct dummy *dummy_alloc(void)
{
struct dummy *d;
void *leak;
d = kzalloc(sizeof(*d), GFP_KERNEL);
if (!d)
return NULL;
d->jiffies_expire = jiffies +
msecs_to_jiffies(1000 * EXPIRE_PERIOD);
/* Oops, forgot to save leak! */
leak = kzalloc(sizeof(int), GFP_KERNEL);
pr_info("%s: dummy @ %p, expires @ %lx\n",
__func__, d, d->jiffies_expire);
return d;
}
noinline void dummy_free(struct dummy *d)
{
pr_info("%s: dummy @ %p, expired = %lx\n",
__func__, d, d->jiffies_expire);
kfree(d);
}
noinline bool dummy_check(struct dummy *d, unsigned long jiffies)
{
return time_after(jiffies, d->jiffies_expire);
}
/*
* alloc_work_func: allocates new dummy structures, allocates additional
* memory, aptly named "leak", but doesn't keep
* a permanent record of it.
*/
static void alloc_work_func(struct work_struct *work);
static DECLARE_DELAYED_WORK(alloc_dwork, alloc_work_func);
static void alloc_work_func(struct work_struct *work)
{
struct dummy *d;
d = dummy_alloc();
if (!d)
return;
mutex_lock(&dummy_list_mutex);
list_add(&d->list, &dummy_list);
mutex_unlock(&dummy_list_mutex);
schedule_delayed_work(&alloc_dwork,
msecs_to_jiffies(1000 * ALLOC_PERIOD));
}
/*
* cleanup_work_func: frees dummy structures. Without knowledge of
* "leak", it leaks the additional memory that
* alloc_work_func created.
*/
static void cleanup_work_func(struct work_struct *work);
static DECLARE_DELAYED_WORK(cleanup_dwork, cleanup_work_func);
static void cleanup_work_func(struct work_struct *work)
{
struct dummy *d, *tmp;
unsigned long j;
j = jiffies;
pr_info("%s: jiffies = %lx\n", __func__, j);
mutex_lock(&dummy_list_mutex);
list_for_each_entry_safe(d, tmp, &dummy_list, list) {
/* Kick out and free any expired dummies */
if (dummy_check(d, j)) {
list_del(&d->list);
dummy_free(d);
}
}
mutex_unlock(&dummy_list_mutex);
schedule_delayed_work(&cleanup_dwork,
msecs_to_jiffies(1000 * CLEANUP_PERIOD));
}
static int livepatch_shadow_mod_init(void)
{
schedule_delayed_work(&alloc_dwork,
msecs_to_jiffies(1000 * ALLOC_PERIOD));
schedule_delayed_work(&cleanup_dwork,
msecs_to_jiffies(1000 * CLEANUP_PERIOD));
return 0;
}
static void livepatch_shadow_mod_exit(void)
{
struct dummy *d, *tmp;
/* Wait for any dummies at work */
cancel_delayed_work_sync(&alloc_dwork);
cancel_delayed_work_sync(&cleanup_dwork);
/* Cleanup residual dummies */
list_for_each_entry_safe(d, tmp, &dummy_list, list) {
list_del(&d->list);
dummy_free(d);
}
}
module_init(livepatch_shadow_mod_init);
module_exit(livepatch_shadow_mod_exit);