Commit 140dfc92 authored by Linus Torvalds

Merge tag 'dm-3.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper updates from Mike Snitzer:

 - Significant DM thin-provisioning performance improvements to meet
   performance requirements that were requested by the Gluster
   distributed filesystem.

   Specifically, dm-thinp now takes care to aggregate IO that will be
   issued to the same thinp block before issuing IO to the underlying
   devices.  This really helps improve performance on HW RAID6 devices
   that have a writeback cache because it avoids read-modify-write (RMW)
   cycles in the HW RAID controller.

 - Some stable fixes: a leak fix in DM bufio when integrity profiles are
   enabled, use of memzero_explicit in DM crypt to avoid any potential
   for information leak (a brief illustrative sketch of that pattern
   follows this list), and a DM cache fix to properly mark a cache block
   dirty if it was promoted to the cache via the overwrite optimization.

 - A few simple DM persistent data library fixes

 - DM cache multiqueue policy block promotion improvements.

 - DM cache discard improvements that take advantage of range
   (multiblock) discard support in the DM bio-prison.  This allows for
   much more efficient bulk discard processing (e.g.  when mkfs.xfs
   discards the entire device).

 - Some small optimizations in DM core and RCU dereference cleanups

 - DM core changes to suspend/resume code to introduce the new internal
   suspend/resume interface that the DM thin-pool target now uses to
   suspend/resume active thin devices when the thin-pool must
   suspend/resume.

   This avoids forcing userspace to track all active thin volumes in a
   thin-pool when the thin-pool is suspended for the purposes of
   metadata or data space resize.
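
As a brief illustration of the memzero_explicit fix noted in the stable
fixes above: a plain memset() of an on-stack buffer that is about to go
out of scope may legally be elided by the compiler as a dead store, so
sensitive data is wiped with memzero_explicit() instead.  The sketch
below is illustrative only (the function name and buffer are invented)
and is not the actual dm-crypt change from this series:

#include <linux/random.h>	/* get_random_bytes() */
#include <linux/string.h>	/* memzero_explicit() */
#include <linux/types.h>

static void wipe_after_use(void)
{
	u8 tmp[64];	/* stand-in for transient key material */

	get_random_bytes(tmp, sizeof(tmp));
	/* ... transient use of tmp ... */

	/*
	 * A plain memset(tmp, 0, sizeof(tmp)) here could be optimised
	 * away because tmp is about to die; memzero_explicit() cannot.
	 */
	memzero_explicit(tmp, sizeof(tmp));
}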

* tag 'dm-3.19-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (49 commits)
  dm crypt: use memzero_explicit for on-stack buffer
  dm space map metadata: fix sm_bootstrap_get_count()
  dm space map metadata: fix sm_bootstrap_get_nr_blocks()
  dm bufio: fix memleak when using a dm_buffer's inline bio
  dm cache: fix spurious cell_defer when dealing with partial block at end of device
  dm cache: dirty flag was mistakenly being cleared when promoting via overwrite
  dm cache: only use overwrite optimisation for promotion when in writeback mode
  dm cache: discard block size must be a multiple of cache block size
  dm cache: fix a harmless race when working out if a block is discarded
  dm cache: when reloading a discard bitset allow for a different discard block size
  dm cache: fix some issues with the new discard range support
  dm array: if resizing the array is a noop set the new root to the old one
  dm: use rcu_dereference_protected instead of rcu_dereference
  dm thin: fix pool_io_hints to avoid looking at max_hw_sectors
  dm thin: suspend/resume active thin devices when reloading thin-pool
  dm: enhance internal suspend and resume interface
  dm thin: do not allow thin device activation while pool is suspended
  dm: add presuspend_undo hook to target_type
  dm: return earlier from dm_blk_ioctl if target doesn't implement .ioctl
  dm thin: remove stale 'trim' message in block comment above pool_message
  ...
parents f94784bd 1a71d6ff
...@@ -47,20 +47,26 @@ Message and constructor argument pairs are: ...@@ -47,20 +47,26 @@ Message and constructor argument pairs are:
'discard_promote_adjustment <value>' 'discard_promote_adjustment <value>'
The sequential threshold indicates the number of contiguous I/Os The sequential threshold indicates the number of contiguous I/Os
required before a stream is treated as sequential. The random threshold required before a stream is treated as sequential. Once a stream is
considered sequential it will bypass the cache. The random threshold
is the number of intervening non-contiguous I/Os that must be seen is the number of intervening non-contiguous I/Os that must be seen
before the stream is treated as random again. before the stream is treated as random again.
The sequential and random thresholds default to 512 and 4 respectively. The sequential and random thresholds default to 512 and 4 respectively.
Large, sequential ios are probably better left on the origin device Large, sequential I/Os are probably better left on the origin device
since spindles tend to have good bandwidth. The io_tracker counts since spindles tend to have good sequential I/O bandwidth. The
contiguous I/Os to try to spot when the io is in one of these sequential io_tracker counts contiguous I/Os to try to spot when the I/O is in one
modes. of these sequential modes. But there are use-cases for wanting to
promote sequential blocks to the cache (e.g. fast application startup).
Internally the mq policy maintains a promotion threshold variable. If If sequential threshold is set to 0 the sequential I/O detection is
the hit count of a block not in the cache goes above this threshold it disabled and sequential I/O will no longer implicitly bypass the cache.
gets promoted to the cache. The read, write and discard promote adjustment Setting the random threshold to 0 does _not_ disable the random I/O
stream detection.
Internally the mq policy determines a promotion threshold. If the hit
count of a block not in the cache goes above this threshold it gets
promoted to the cache. The read, write and discard promote adjustment
tunables allow you to tweak the promotion threshold by adding a small tunables allow you to tweak the promotion threshold by adding a small
value based on the io type. They default to 4, 8 and 1 respectively. value based on the io type. They default to 4, 8 and 1 respectively.
If you're trying to quickly warm a new cache device you may wish to If you're trying to quickly warm a new cache device you may wish to
......
...@@ -14,68 +14,38 @@ ...@@ -14,68 +14,38 @@
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
struct bucket { #define MIN_CELLS 1024
spinlock_t lock;
struct hlist_head cells;
};
struct dm_bio_prison { struct dm_bio_prison {
spinlock_t lock;
mempool_t *cell_pool; mempool_t *cell_pool;
struct rb_root cells;
unsigned nr_buckets;
unsigned hash_mask;
struct bucket *buckets;
}; };
/*----------------------------------------------------------------*/
static uint32_t calc_nr_buckets(unsigned nr_cells)
{
uint32_t n = 128;
nr_cells /= 4;
nr_cells = min(nr_cells, 8192u);
while (n < nr_cells)
n <<= 1;
return n;
}
static struct kmem_cache *_cell_cache; static struct kmem_cache *_cell_cache;
static void init_bucket(struct bucket *b) /*----------------------------------------------------------------*/
{
spin_lock_init(&b->lock);
INIT_HLIST_HEAD(&b->cells);
}
/* /*
* @nr_cells should be the number of cells you want in use _concurrently_. * @nr_cells should be the number of cells you want in use _concurrently_.
* Don't confuse it with the number of distinct keys. * Don't confuse it with the number of distinct keys.
*/ */
struct dm_bio_prison *dm_bio_prison_create(unsigned nr_cells) struct dm_bio_prison *dm_bio_prison_create(void)
{ {
unsigned i; struct dm_bio_prison *prison = kmalloc(sizeof(*prison), GFP_KERNEL);
uint32_t nr_buckets = calc_nr_buckets(nr_cells);
size_t len = sizeof(struct dm_bio_prison) +
(sizeof(struct bucket) * nr_buckets);
struct dm_bio_prison *prison = kmalloc(len, GFP_KERNEL);
if (!prison) if (!prison)
return NULL; return NULL;
prison->cell_pool = mempool_create_slab_pool(nr_cells, _cell_cache); spin_lock_init(&prison->lock);
prison->cell_pool = mempool_create_slab_pool(MIN_CELLS, _cell_cache);
if (!prison->cell_pool) { if (!prison->cell_pool) {
kfree(prison); kfree(prison);
return NULL; return NULL;
} }
prison->nr_buckets = nr_buckets; prison->cells = RB_ROOT;
prison->hash_mask = nr_buckets - 1;
prison->buckets = (struct bucket *) (prison + 1);
for (i = 0; i < nr_buckets; i++)
init_bucket(prison->buckets + i);
return prison; return prison;
} }
...@@ -101,68 +71,73 @@ void dm_bio_prison_free_cell(struct dm_bio_prison *prison, ...@@ -101,68 +71,73 @@ void dm_bio_prison_free_cell(struct dm_bio_prison *prison,
} }
EXPORT_SYMBOL_GPL(dm_bio_prison_free_cell); EXPORT_SYMBOL_GPL(dm_bio_prison_free_cell);
static uint32_t hash_key(struct dm_bio_prison *prison, struct dm_cell_key *key) static void __setup_new_cell(struct dm_cell_key *key,
struct bio *holder,
struct dm_bio_prison_cell *cell)
{ {
const unsigned long BIG_PRIME = 4294967291UL; memcpy(&cell->key, key, sizeof(cell->key));
uint64_t hash = key->block * BIG_PRIME; cell->holder = holder;
bio_list_init(&cell->bios);
return (uint32_t) (hash & prison->hash_mask);
} }
static int keys_equal(struct dm_cell_key *lhs, struct dm_cell_key *rhs) static int cmp_keys(struct dm_cell_key *lhs,
struct dm_cell_key *rhs)
{ {
return (lhs->virtual == rhs->virtual) && if (lhs->virtual < rhs->virtual)
(lhs->dev == rhs->dev) && return -1;
(lhs->block == rhs->block);
}
static struct bucket *get_bucket(struct dm_bio_prison *prison, if (lhs->virtual > rhs->virtual)
struct dm_cell_key *key) return 1;
{
return prison->buckets + hash_key(prison, key);
}
static struct dm_bio_prison_cell *__search_bucket(struct bucket *b, if (lhs->dev < rhs->dev)
struct dm_cell_key *key) return -1;
{
struct dm_bio_prison_cell *cell;
hlist_for_each_entry(cell, &b->cells, list) if (lhs->dev > rhs->dev)
if (keys_equal(&cell->key, key)) return 1;
return cell;
return NULL; if (lhs->block_end <= rhs->block_begin)
} return -1;
static void __setup_new_cell(struct bucket *b, if (lhs->block_begin >= rhs->block_end)
struct dm_cell_key *key, return 1;
struct bio *holder,
struct dm_bio_prison_cell *cell) return 0;
{
memcpy(&cell->key, key, sizeof(cell->key));
cell->holder = holder;
bio_list_init(&cell->bios);
hlist_add_head(&cell->list, &b->cells);
} }
static int __bio_detain(struct bucket *b, static int __bio_detain(struct dm_bio_prison *prison,
struct dm_cell_key *key, struct dm_cell_key *key,
struct bio *inmate, struct bio *inmate,
struct dm_bio_prison_cell *cell_prealloc, struct dm_bio_prison_cell *cell_prealloc,
struct dm_bio_prison_cell **cell_result) struct dm_bio_prison_cell **cell_result)
{ {
struct dm_bio_prison_cell *cell; int r;
struct rb_node **new = &prison->cells.rb_node, *parent = NULL;
cell = __search_bucket(b, key);
if (cell) { while (*new) {
if (inmate) struct dm_bio_prison_cell *cell =
bio_list_add(&cell->bios, inmate); container_of(*new, struct dm_bio_prison_cell, node);
*cell_result = cell;
return 1; r = cmp_keys(key, &cell->key);
parent = *new;
if (r < 0)
new = &((*new)->rb_left);
else if (r > 0)
new = &((*new)->rb_right);
else {
if (inmate)
bio_list_add(&cell->bios, inmate);
*cell_result = cell;
return 1;
}
} }
__setup_new_cell(b, key, inmate, cell_prealloc); __setup_new_cell(key, inmate, cell_prealloc);
*cell_result = cell_prealloc; *cell_result = cell_prealloc;
rb_link_node(&cell_prealloc->node, parent, new);
rb_insert_color(&cell_prealloc->node, &prison->cells);
return 0; return 0;
} }
...@@ -174,11 +149,10 @@ static int bio_detain(struct dm_bio_prison *prison, ...@@ -174,11 +149,10 @@ static int bio_detain(struct dm_bio_prison *prison,
{ {
int r; int r;
unsigned long flags; unsigned long flags;
struct bucket *b = get_bucket(prison, key);
spin_lock_irqsave(&b->lock, flags); spin_lock_irqsave(&prison->lock, flags);
r = __bio_detain(b, key, inmate, cell_prealloc, cell_result); r = __bio_detain(prison, key, inmate, cell_prealloc, cell_result);
spin_unlock_irqrestore(&b->lock, flags); spin_unlock_irqrestore(&prison->lock, flags);
return r; return r;
} }
...@@ -205,10 +179,11 @@ EXPORT_SYMBOL_GPL(dm_get_cell); ...@@ -205,10 +179,11 @@ EXPORT_SYMBOL_GPL(dm_get_cell);
/* /*
* @inmates must have been initialised prior to this call * @inmates must have been initialised prior to this call
*/ */
static void __cell_release(struct dm_bio_prison_cell *cell, static void __cell_release(struct dm_bio_prison *prison,
struct dm_bio_prison_cell *cell,
struct bio_list *inmates) struct bio_list *inmates)
{ {
hlist_del(&cell->list); rb_erase(&cell->node, &prison->cells);
if (inmates) { if (inmates) {
if (cell->holder) if (cell->holder)
...@@ -222,21 +197,21 @@ void dm_cell_release(struct dm_bio_prison *prison, ...@@ -222,21 +197,21 @@ void dm_cell_release(struct dm_bio_prison *prison,
struct bio_list *bios) struct bio_list *bios)
{ {
unsigned long flags; unsigned long flags;
struct bucket *b = get_bucket(prison, &cell->key);
spin_lock_irqsave(&b->lock, flags); spin_lock_irqsave(&prison->lock, flags);
__cell_release(cell, bios); __cell_release(prison, cell, bios);
spin_unlock_irqrestore(&b->lock, flags); spin_unlock_irqrestore(&prison->lock, flags);
} }
EXPORT_SYMBOL_GPL(dm_cell_release); EXPORT_SYMBOL_GPL(dm_cell_release);
/* /*
* Sometimes we don't want the holder, just the additional bios. * Sometimes we don't want the holder, just the additional bios.
*/ */
static void __cell_release_no_holder(struct dm_bio_prison_cell *cell, static void __cell_release_no_holder(struct dm_bio_prison *prison,
struct dm_bio_prison_cell *cell,
struct bio_list *inmates) struct bio_list *inmates)
{ {
hlist_del(&cell->list); rb_erase(&cell->node, &prison->cells);
bio_list_merge(inmates, &cell->bios); bio_list_merge(inmates, &cell->bios);
} }
...@@ -245,11 +220,10 @@ void dm_cell_release_no_holder(struct dm_bio_prison *prison, ...@@ -245,11 +220,10 @@ void dm_cell_release_no_holder(struct dm_bio_prison *prison,
struct bio_list *inmates) struct bio_list *inmates)
{ {
unsigned long flags; unsigned long flags;
struct bucket *b = get_bucket(prison, &cell->key);
spin_lock_irqsave(&b->lock, flags); spin_lock_irqsave(&prison->lock, flags);
__cell_release_no_holder(cell, inmates); __cell_release_no_holder(prison, cell, inmates);
spin_unlock_irqrestore(&b->lock, flags); spin_unlock_irqrestore(&prison->lock, flags);
} }
EXPORT_SYMBOL_GPL(dm_cell_release_no_holder); EXPORT_SYMBOL_GPL(dm_cell_release_no_holder);
...@@ -267,6 +241,20 @@ void dm_cell_error(struct dm_bio_prison *prison, ...@@ -267,6 +241,20 @@ void dm_cell_error(struct dm_bio_prison *prison,
} }
EXPORT_SYMBOL_GPL(dm_cell_error); EXPORT_SYMBOL_GPL(dm_cell_error);
void dm_cell_visit_release(struct dm_bio_prison *prison,
void (*visit_fn)(void *, struct dm_bio_prison_cell *),
void *context,
struct dm_bio_prison_cell *cell)
{
unsigned long flags;
spin_lock_irqsave(&prison->lock, flags);
visit_fn(context, cell);
rb_erase(&cell->node, &prison->cells);
spin_unlock_irqrestore(&prison->lock, flags);
}
EXPORT_SYMBOL_GPL(dm_cell_visit_release);
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
#define DEFERRED_SET_SIZE 64 #define DEFERRED_SET_SIZE 64
......
...@@ -10,8 +10,8 @@ ...@@ -10,8 +10,8 @@
#include "persistent-data/dm-block-manager.h" /* FIXME: for dm_block_t */ #include "persistent-data/dm-block-manager.h" /* FIXME: for dm_block_t */
#include "dm-thin-metadata.h" /* FIXME: for dm_thin_id */ #include "dm-thin-metadata.h" /* FIXME: for dm_thin_id */
#include <linux/list.h>
#include <linux/bio.h> #include <linux/bio.h>
#include <linux/rbtree.h>
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
...@@ -23,11 +23,14 @@ ...@@ -23,11 +23,14 @@
*/ */
struct dm_bio_prison; struct dm_bio_prison;
/* FIXME: this needs to be more abstract */ /*
* Keys define a range of blocks within either a virtual or physical
* device.
*/
struct dm_cell_key { struct dm_cell_key {
int virtual; int virtual;
dm_thin_id dev; dm_thin_id dev;
dm_block_t block; dm_block_t block_begin, block_end;
}; };
/* /*
...@@ -35,13 +38,15 @@ struct dm_cell_key { ...@@ -35,13 +38,15 @@ struct dm_cell_key {
* themselves. * themselves.
*/ */
struct dm_bio_prison_cell { struct dm_bio_prison_cell {
struct hlist_node list; struct list_head user_list; /* for client use */
struct rb_node node;
struct dm_cell_key key; struct dm_cell_key key;
struct bio *holder; struct bio *holder;
struct bio_list bios; struct bio_list bios;
}; };
struct dm_bio_prison *dm_bio_prison_create(unsigned nr_cells); struct dm_bio_prison *dm_bio_prison_create(void);
void dm_bio_prison_destroy(struct dm_bio_prison *prison); void dm_bio_prison_destroy(struct dm_bio_prison *prison);
/* /*
...@@ -57,7 +62,7 @@ void dm_bio_prison_free_cell(struct dm_bio_prison *prison, ...@@ -57,7 +62,7 @@ void dm_bio_prison_free_cell(struct dm_bio_prison *prison,
struct dm_bio_prison_cell *cell); struct dm_bio_prison_cell *cell);
/* /*
* Creates, or retrieves a cell for the given key. * Creates, or retrieves a cell that overlaps the given key.
* *
* Returns 1 if pre-existing cell returned, zero if new cell created using * Returns 1 if pre-existing cell returned, zero if new cell created using
* @cell_prealloc. * @cell_prealloc.
...@@ -68,7 +73,8 @@ int dm_get_cell(struct dm_bio_prison *prison, ...@@ -68,7 +73,8 @@ int dm_get_cell(struct dm_bio_prison *prison,
struct dm_bio_prison_cell **cell_result); struct dm_bio_prison_cell **cell_result);
/* /*
* An atomic op that combines retrieving a cell, and adding a bio to it. * An atomic op that combines retrieving or creating a cell, and adding a
* bio to it.
* *
* Returns 1 if the cell was already held, 0 if @inmate is the new holder. * Returns 1 if the cell was already held, 0 if @inmate is the new holder.
*/ */
...@@ -87,6 +93,14 @@ void dm_cell_release_no_holder(struct dm_bio_prison *prison, ...@@ -87,6 +93,14 @@ void dm_cell_release_no_holder(struct dm_bio_prison *prison,
void dm_cell_error(struct dm_bio_prison *prison, void dm_cell_error(struct dm_bio_prison *prison,
struct dm_bio_prison_cell *cell, int error); struct dm_bio_prison_cell *cell, int error);
/*
* Visits the cell and then releases. Guarantees no new inmates are
* inserted between the visit and release.
*/
void dm_cell_visit_release(struct dm_bio_prison *prison,
void (*visit_fn)(void *, struct dm_bio_prison_cell *),
void *context, struct dm_bio_prison_cell *cell);
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
/* /*
......
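
To illustrate how a client might use the new dm_cell_visit_release()
interface declared above, here is a hypothetical sketch.  It relies only
on the fields and functions visible in this diff; the helper names
(collect_bios, drain_cell) and the visit_ctx structure are invented for
the example, and this code is not part of the series:

/* Illustrative sketch only -- not part of this series. */
struct visit_ctx {
	struct bio_list bios;
};

/* Runs under the prison lock, so no new inmates can race with the release. */
static void collect_bios(void *context, struct dm_bio_prison_cell *cell)
{
	struct visit_ctx *ctx = context;

	if (cell->holder)
		bio_list_add(&ctx->bios, cell->holder);
	bio_list_merge(&ctx->bios, &cell->bios);
}

static void drain_cell(struct dm_bio_prison *prison,
		       struct dm_bio_prison_cell *cell)
{
	struct visit_ctx ctx;

	bio_list_init(&ctx.bios);

	/* Atomically visit the cell, then remove it from the prison. */
	dm_cell_visit_release(prison, collect_bios, &ctx, cell);

	/* The cell is no longer in the prison; the caller still owns it. */
	dm_bio_prison_free_cell(prison, cell);

	/* ... issue or requeue the bios collected in ctx.bios ... */
}
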
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <linux/vmalloc.h> #include <linux/vmalloc.h>
#include <linux/shrinker.h> #include <linux/shrinker.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/rbtree.h>
#define DM_MSG_PREFIX "bufio" #define DM_MSG_PREFIX "bufio"
...@@ -34,26 +35,23 @@ ...@@ -34,26 +35,23 @@
/* /*
* Check buffer ages in this interval (seconds) * Check buffer ages in this interval (seconds)
*/ */
#define DM_BUFIO_WORK_TIMER_SECS 10 #define DM_BUFIO_WORK_TIMER_SECS 30
/* /*
* Free buffers when they are older than this (seconds) * Free buffers when they are older than this (seconds)
*/ */
#define DM_BUFIO_DEFAULT_AGE_SECS 60 #define DM_BUFIO_DEFAULT_AGE_SECS 300
/* /*
* The number of bvec entries that are embedded directly in the buffer. * The nr of bytes of cached data to keep around.
* If the chunk size is larger, dm-io is used to do the io.
*/ */
#define DM_BUFIO_INLINE_VECS 16 #define DM_BUFIO_DEFAULT_RETAIN_BYTES (256 * 1024)
/* /*
* Buffer hash * The number of bvec entries that are embedded directly in the buffer.
* If the chunk size is larger, dm-io is used to do the io.
*/ */
#define DM_BUFIO_HASH_BITS 20 #define DM_BUFIO_INLINE_VECS 16
#define DM_BUFIO_HASH(block) \
((((block) >> DM_BUFIO_HASH_BITS) ^ (block)) & \
((1 << DM_BUFIO_HASH_BITS) - 1))
/* /*
* Don't try to use kmem_cache_alloc for blocks larger than this. * Don't try to use kmem_cache_alloc for blocks larger than this.
...@@ -106,7 +104,7 @@ struct dm_bufio_client { ...@@ -106,7 +104,7 @@ struct dm_bufio_client {
unsigned minimum_buffers; unsigned minimum_buffers;
struct hlist_head *cache_hash; struct rb_root buffer_tree;
wait_queue_head_t free_buffer_wait; wait_queue_head_t free_buffer_wait;
int async_write_error; int async_write_error;
...@@ -135,7 +133,7 @@ enum data_mode { ...@@ -135,7 +133,7 @@ enum data_mode {
}; };
struct dm_buffer { struct dm_buffer {
struct hlist_node hash_list; struct rb_node node;
struct list_head lru_list; struct list_head lru_list;
sector_t block; sector_t block;
void *data; void *data;
...@@ -223,6 +221,7 @@ static DEFINE_SPINLOCK(param_spinlock); ...@@ -223,6 +221,7 @@ static DEFINE_SPINLOCK(param_spinlock);
* Buffers are freed after this timeout * Buffers are freed after this timeout
*/ */
static unsigned dm_bufio_max_age = DM_BUFIO_DEFAULT_AGE_SECS; static unsigned dm_bufio_max_age = DM_BUFIO_DEFAULT_AGE_SECS;
static unsigned dm_bufio_retain_bytes = DM_BUFIO_DEFAULT_RETAIN_BYTES;
static unsigned long dm_bufio_peak_allocated; static unsigned long dm_bufio_peak_allocated;
static unsigned long dm_bufio_allocated_kmem_cache; static unsigned long dm_bufio_allocated_kmem_cache;
...@@ -253,6 +252,53 @@ static LIST_HEAD(dm_bufio_all_clients); ...@@ -253,6 +252,53 @@ static LIST_HEAD(dm_bufio_all_clients);
*/ */
static DEFINE_MUTEX(dm_bufio_clients_lock); static DEFINE_MUTEX(dm_bufio_clients_lock);
/*----------------------------------------------------------------
* A red/black tree acts as an index for all the buffers.
*--------------------------------------------------------------*/
static struct dm_buffer *__find(struct dm_bufio_client *c, sector_t block)
{
struct rb_node *n = c->buffer_tree.rb_node;
struct dm_buffer *b;
while (n) {
b = container_of(n, struct dm_buffer, node);
if (b->block == block)
return b;
n = (b->block < block) ? n->rb_left : n->rb_right;
}
return NULL;
}
static void __insert(struct dm_bufio_client *c, struct dm_buffer *b)
{
struct rb_node **new = &c->buffer_tree.rb_node, *parent = NULL;
struct dm_buffer *found;
while (*new) {
found = container_of(*new, struct dm_buffer, node);
if (found->block == b->block) {
BUG_ON(found != b);
return;
}
parent = *new;
new = (found->block < b->block) ?
&((*new)->rb_left) : &((*new)->rb_right);
}
rb_link_node(&b->node, parent, new);
rb_insert_color(&b->node, &c->buffer_tree);
}
static void __remove(struct dm_bufio_client *c, struct dm_buffer *b)
{
rb_erase(&b->node, &c->buffer_tree);
}
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
static void adjust_total_allocated(enum data_mode data_mode, long diff) static void adjust_total_allocated(enum data_mode data_mode, long diff)
...@@ -434,7 +480,7 @@ static void __link_buffer(struct dm_buffer *b, sector_t block, int dirty) ...@@ -434,7 +480,7 @@ static void __link_buffer(struct dm_buffer *b, sector_t block, int dirty)
b->block = block; b->block = block;
b->list_mode = dirty; b->list_mode = dirty;
list_add(&b->lru_list, &c->lru[dirty]); list_add(&b->lru_list, &c->lru[dirty]);
hlist_add_head(&b->hash_list, &c->cache_hash[DM_BUFIO_HASH(block)]); __insert(b->c, b);
b->last_accessed = jiffies; b->last_accessed = jiffies;
} }
...@@ -448,7 +494,7 @@ static void __unlink_buffer(struct dm_buffer *b) ...@@ -448,7 +494,7 @@ static void __unlink_buffer(struct dm_buffer *b)
BUG_ON(!c->n_buffers[b->list_mode]); BUG_ON(!c->n_buffers[b->list_mode]);
c->n_buffers[b->list_mode]--; c->n_buffers[b->list_mode]--;
hlist_del(&b->hash_list); __remove(b->c, b);
list_del(&b->lru_list); list_del(&b->lru_list);
} }
...@@ -532,6 +578,19 @@ static void use_dmio(struct dm_buffer *b, int rw, sector_t block, ...@@ -532,6 +578,19 @@ static void use_dmio(struct dm_buffer *b, int rw, sector_t block,
end_io(&b->bio, r); end_io(&b->bio, r);
} }
static void inline_endio(struct bio *bio, int error)
{
bio_end_io_t *end_fn = bio->bi_private;
/*
* Reset the bio to free any attached resources
* (e.g. bio integrity profiles).
*/
bio_reset(bio);
end_fn(bio, error);
}
static void use_inline_bio(struct dm_buffer *b, int rw, sector_t block, static void use_inline_bio(struct dm_buffer *b, int rw, sector_t block,
bio_end_io_t *end_io) bio_end_io_t *end_io)
{ {
...@@ -543,7 +602,12 @@ static void use_inline_bio(struct dm_buffer *b, int rw, sector_t block, ...@@ -543,7 +602,12 @@ static void use_inline_bio(struct dm_buffer *b, int rw, sector_t block,
b->bio.bi_max_vecs = DM_BUFIO_INLINE_VECS; b->bio.bi_max_vecs = DM_BUFIO_INLINE_VECS;
b->bio.bi_iter.bi_sector = block << b->c->sectors_per_block_bits; b->bio.bi_iter.bi_sector = block << b->c->sectors_per_block_bits;
b->bio.bi_bdev = b->c->bdev; b->bio.bi_bdev = b->c->bdev;
b->bio.bi_end_io = end_io; b->bio.bi_end_io = inline_endio;
/*
* Use of .bi_private isn't a problem here because
* the dm_buffer's inline bio is local to bufio.
*/
b->bio.bi_private = end_io;
/* /*
* We assume that if len >= PAGE_SIZE ptr is page-aligned. * We assume that if len >= PAGE_SIZE ptr is page-aligned.
...@@ -887,23 +951,6 @@ static void __check_watermark(struct dm_bufio_client *c, ...@@ -887,23 +951,6 @@ static void __check_watermark(struct dm_bufio_client *c,
__write_dirty_buffers_async(c, 1, write_list); __write_dirty_buffers_async(c, 1, write_list);
} }
/*
* Find a buffer in the hash.
*/
static struct dm_buffer *__find(struct dm_bufio_client *c, sector_t block)
{
struct dm_buffer *b;
hlist_for_each_entry(b, &c->cache_hash[DM_BUFIO_HASH(block)],
hash_list) {
dm_bufio_cond_resched();
if (b->block == block)
return b;
}
return NULL;
}
/*---------------------------------------------------------------- /*----------------------------------------------------------------
* Getting a buffer * Getting a buffer
*--------------------------------------------------------------*/ *--------------------------------------------------------------*/
...@@ -1433,45 +1480,52 @@ static void drop_buffers(struct dm_bufio_client *c) ...@@ -1433,45 +1480,52 @@ static void drop_buffers(struct dm_bufio_client *c)
} }
/* /*
* Test if the buffer is unused and too old, and commit it. * We may not be able to evict this buffer if IO pending or the client
* is still using it. Caller is expected to know buffer is too old.
*
* And if GFP_NOFS is used, we must not do any I/O because we hold * And if GFP_NOFS is used, we must not do any I/O because we hold
* dm_bufio_clients_lock and we would risk deadlock if the I/O gets * dm_bufio_clients_lock and we would risk deadlock if the I/O gets
* rerouted to different bufio client. * rerouted to different bufio client.
*/ */
static int __cleanup_old_buffer(struct dm_buffer *b, gfp_t gfp, static bool __try_evict_buffer(struct dm_buffer *b, gfp_t gfp)
unsigned long max_jiffies)
{ {
if (jiffies - b->last_accessed < max_jiffies)
return 0;
if (!(gfp & __GFP_FS)) { if (!(gfp & __GFP_FS)) {
if (test_bit(B_READING, &b->state) || if (test_bit(B_READING, &b->state) ||
test_bit(B_WRITING, &b->state) || test_bit(B_WRITING, &b->state) ||
test_bit(B_DIRTY, &b->state)) test_bit(B_DIRTY, &b->state))
return 0; return false;
} }
if (b->hold_count) if (b->hold_count)
return 0; return false;
__make_buffer_clean(b); __make_buffer_clean(b);
__unlink_buffer(b); __unlink_buffer(b);
__free_buffer_wake(b); __free_buffer_wake(b);
return 1; return true;
} }
static long __scan(struct dm_bufio_client *c, unsigned long nr_to_scan, static unsigned get_retain_buffers(struct dm_bufio_client *c)
gfp_t gfp_mask) {
unsigned retain_bytes = ACCESS_ONCE(dm_bufio_retain_bytes);
return retain_bytes / c->block_size;
}
static unsigned long __scan(struct dm_bufio_client *c, unsigned long nr_to_scan,
gfp_t gfp_mask)
{ {
int l; int l;
struct dm_buffer *b, *tmp; struct dm_buffer *b, *tmp;
long freed = 0; unsigned long freed = 0;
unsigned long count = nr_to_scan;
unsigned retain_target = get_retain_buffers(c);
for (l = 0; l < LIST_SIZE; l++) { for (l = 0; l < LIST_SIZE; l++) {
list_for_each_entry_safe_reverse(b, tmp, &c->lru[l], lru_list) { list_for_each_entry_safe_reverse(b, tmp, &c->lru[l], lru_list) {
freed += __cleanup_old_buffer(b, gfp_mask, 0); if (__try_evict_buffer(b, gfp_mask))
if (!--nr_to_scan) freed++;
if (!--nr_to_scan || ((count - freed) <= retain_target))
return freed; return freed;
dm_bufio_cond_resched(); dm_bufio_cond_resched();
} }
...@@ -1533,11 +1587,7 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign ...@@ -1533,11 +1587,7 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
r = -ENOMEM; r = -ENOMEM;
goto bad_client; goto bad_client;
} }
c->cache_hash = vmalloc(sizeof(struct hlist_head) << DM_BUFIO_HASH_BITS); c->buffer_tree = RB_ROOT;
if (!c->cache_hash) {
r = -ENOMEM;
goto bad_hash;
}
c->bdev = bdev; c->bdev = bdev;
c->block_size = block_size; c->block_size = block_size;
...@@ -1556,9 +1606,6 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign ...@@ -1556,9 +1606,6 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
c->n_buffers[i] = 0; c->n_buffers[i] = 0;
} }
for (i = 0; i < 1 << DM_BUFIO_HASH_BITS; i++)
INIT_HLIST_HEAD(&c->cache_hash[i]);
mutex_init(&c->lock); mutex_init(&c->lock);
INIT_LIST_HEAD(&c->reserved_buffers); INIT_LIST_HEAD(&c->reserved_buffers);
c->need_reserved_buffers = reserved_buffers; c->need_reserved_buffers = reserved_buffers;
...@@ -1632,8 +1679,6 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign ...@@ -1632,8 +1679,6 @@ struct dm_bufio_client *dm_bufio_client_create(struct block_device *bdev, unsign
} }
dm_io_client_destroy(c->dm_io); dm_io_client_destroy(c->dm_io);
bad_dm_io: bad_dm_io:
vfree(c->cache_hash);
bad_hash:
kfree(c); kfree(c);
bad_client: bad_client:
return ERR_PTR(r); return ERR_PTR(r);
...@@ -1660,9 +1705,7 @@ void dm_bufio_client_destroy(struct dm_bufio_client *c) ...@@ -1660,9 +1705,7 @@ void dm_bufio_client_destroy(struct dm_bufio_client *c)
mutex_unlock(&dm_bufio_clients_lock); mutex_unlock(&dm_bufio_clients_lock);
for (i = 0; i < 1 << DM_BUFIO_HASH_BITS; i++) BUG_ON(!RB_EMPTY_ROOT(&c->buffer_tree));
BUG_ON(!hlist_empty(&c->cache_hash[i]));
BUG_ON(c->need_reserved_buffers); BUG_ON(c->need_reserved_buffers);
while (!list_empty(&c->reserved_buffers)) { while (!list_empty(&c->reserved_buffers)) {
...@@ -1680,36 +1723,60 @@ void dm_bufio_client_destroy(struct dm_bufio_client *c) ...@@ -1680,36 +1723,60 @@ void dm_bufio_client_destroy(struct dm_bufio_client *c)
BUG_ON(c->n_buffers[i]); BUG_ON(c->n_buffers[i]);
dm_io_client_destroy(c->dm_io); dm_io_client_destroy(c->dm_io);
vfree(c->cache_hash);
kfree(c); kfree(c);
} }
EXPORT_SYMBOL_GPL(dm_bufio_client_destroy); EXPORT_SYMBOL_GPL(dm_bufio_client_destroy);
static void cleanup_old_buffers(void) static unsigned get_max_age_hz(void)
{ {
unsigned long max_age = ACCESS_ONCE(dm_bufio_max_age); unsigned max_age = ACCESS_ONCE(dm_bufio_max_age);
struct dm_bufio_client *c;
if (max_age > ULONG_MAX / HZ) if (max_age > UINT_MAX / HZ)
max_age = ULONG_MAX / HZ; max_age = UINT_MAX / HZ;
mutex_lock(&dm_bufio_clients_lock); return max_age * HZ;
list_for_each_entry(c, &dm_bufio_all_clients, client_list) { }
if (!dm_bufio_trylock(c))
continue;
while (!list_empty(&c->lru[LIST_CLEAN])) { static bool older_than(struct dm_buffer *b, unsigned long age_hz)
struct dm_buffer *b; {
b = list_entry(c->lru[LIST_CLEAN].prev, return (jiffies - b->last_accessed) >= age_hz;
struct dm_buffer, lru_list); }
if (!__cleanup_old_buffer(b, 0, max_age * HZ))
break; static void __evict_old_buffers(struct dm_bufio_client *c, unsigned long age_hz)
dm_bufio_cond_resched(); {
} struct dm_buffer *b, *tmp;
unsigned retain_target = get_retain_buffers(c);
unsigned count;
dm_bufio_lock(c);
count = c->n_buffers[LIST_CLEAN] + c->n_buffers[LIST_DIRTY];
list_for_each_entry_safe_reverse(b, tmp, &c->lru[LIST_CLEAN], lru_list) {
if (count <= retain_target)
break;
if (!older_than(b, age_hz))
break;
if (__try_evict_buffer(b, 0))
count--;
dm_bufio_unlock(c);
dm_bufio_cond_resched(); dm_bufio_cond_resched();
} }
dm_bufio_unlock(c);
}
static void cleanup_old_buffers(void)
{
unsigned long max_age_hz = get_max_age_hz();
struct dm_bufio_client *c;
mutex_lock(&dm_bufio_clients_lock);
list_for_each_entry(c, &dm_bufio_all_clients, client_list)
__evict_old_buffers(c, max_age_hz);
mutex_unlock(&dm_bufio_clients_lock); mutex_unlock(&dm_bufio_clients_lock);
} }
...@@ -1834,6 +1901,9 @@ MODULE_PARM_DESC(max_cache_size_bytes, "Size of metadata cache"); ...@@ -1834,6 +1901,9 @@ MODULE_PARM_DESC(max_cache_size_bytes, "Size of metadata cache");
module_param_named(max_age_seconds, dm_bufio_max_age, uint, S_IRUGO | S_IWUSR); module_param_named(max_age_seconds, dm_bufio_max_age, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(max_age_seconds, "Max age of a buffer in seconds"); MODULE_PARM_DESC(max_age_seconds, "Max age of a buffer in seconds");
module_param_named(retain_bytes, dm_bufio_retain_bytes, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(retain_bytes, "Try to keep at least this many bytes cached in memory");
module_param_named(peak_allocated_bytes, dm_bufio_peak_allocated, ulong, S_IRUGO | S_IWUSR); module_param_named(peak_allocated_bytes, dm_bufio_peak_allocated, ulong, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(peak_allocated_bytes, "Tracks the maximum allocated memory"); MODULE_PARM_DESC(peak_allocated_bytes, "Tracks the maximum allocated memory");
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
typedef dm_block_t __bitwise__ dm_oblock_t; typedef dm_block_t __bitwise__ dm_oblock_t;
typedef uint32_t __bitwise__ dm_cblock_t; typedef uint32_t __bitwise__ dm_cblock_t;
typedef dm_block_t __bitwise__ dm_dblock_t;
static inline dm_oblock_t to_oblock(dm_block_t b) static inline dm_oblock_t to_oblock(dm_block_t b)
{ {
...@@ -40,4 +41,14 @@ static inline uint32_t from_cblock(dm_cblock_t b) ...@@ -40,4 +41,14 @@ static inline uint32_t from_cblock(dm_cblock_t b)
return (__force uint32_t) b; return (__force uint32_t) b;
} }
static inline dm_dblock_t to_dblock(dm_block_t b)
{
return (__force dm_dblock_t) b;
}
static inline dm_block_t from_dblock(dm_dblock_t b)
{
return (__force dm_block_t) b;
}
#endif /* DM_CACHE_BLOCK_TYPES_H */ #endif /* DM_CACHE_BLOCK_TYPES_H */
...@@ -109,7 +109,7 @@ struct dm_cache_metadata { ...@@ -109,7 +109,7 @@ struct dm_cache_metadata {
dm_block_t discard_root; dm_block_t discard_root;
sector_t discard_block_size; sector_t discard_block_size;
dm_oblock_t discard_nr_blocks; dm_dblock_t discard_nr_blocks;
sector_t data_block_size; sector_t data_block_size;
dm_cblock_t cache_blocks; dm_cblock_t cache_blocks;
...@@ -329,7 +329,7 @@ static int __write_initial_superblock(struct dm_cache_metadata *cmd) ...@@ -329,7 +329,7 @@ static int __write_initial_superblock(struct dm_cache_metadata *cmd)
disk_super->hint_root = cpu_to_le64(cmd->hint_root); disk_super->hint_root = cpu_to_le64(cmd->hint_root);
disk_super->discard_root = cpu_to_le64(cmd->discard_root); disk_super->discard_root = cpu_to_le64(cmd->discard_root);
disk_super->discard_block_size = cpu_to_le64(cmd->discard_block_size); disk_super->discard_block_size = cpu_to_le64(cmd->discard_block_size);
disk_super->discard_nr_blocks = cpu_to_le64(from_oblock(cmd->discard_nr_blocks)); disk_super->discard_nr_blocks = cpu_to_le64(from_dblock(cmd->discard_nr_blocks));
disk_super->metadata_block_size = cpu_to_le32(DM_CACHE_METADATA_BLOCK_SIZE); disk_super->metadata_block_size = cpu_to_le32(DM_CACHE_METADATA_BLOCK_SIZE);
disk_super->data_block_size = cpu_to_le32(cmd->data_block_size); disk_super->data_block_size = cpu_to_le32(cmd->data_block_size);
disk_super->cache_blocks = cpu_to_le32(0); disk_super->cache_blocks = cpu_to_le32(0);
...@@ -528,7 +528,7 @@ static void read_superblock_fields(struct dm_cache_metadata *cmd, ...@@ -528,7 +528,7 @@ static void read_superblock_fields(struct dm_cache_metadata *cmd,
cmd->hint_root = le64_to_cpu(disk_super->hint_root); cmd->hint_root = le64_to_cpu(disk_super->hint_root);
cmd->discard_root = le64_to_cpu(disk_super->discard_root); cmd->discard_root = le64_to_cpu(disk_super->discard_root);
cmd->discard_block_size = le64_to_cpu(disk_super->discard_block_size); cmd->discard_block_size = le64_to_cpu(disk_super->discard_block_size);
cmd->discard_nr_blocks = to_oblock(le64_to_cpu(disk_super->discard_nr_blocks)); cmd->discard_nr_blocks = to_dblock(le64_to_cpu(disk_super->discard_nr_blocks));
cmd->data_block_size = le32_to_cpu(disk_super->data_block_size); cmd->data_block_size = le32_to_cpu(disk_super->data_block_size);
cmd->cache_blocks = to_cblock(le32_to_cpu(disk_super->cache_blocks)); cmd->cache_blocks = to_cblock(le32_to_cpu(disk_super->cache_blocks));
strncpy(cmd->policy_name, disk_super->policy_name, sizeof(cmd->policy_name)); strncpy(cmd->policy_name, disk_super->policy_name, sizeof(cmd->policy_name));
...@@ -626,7 +626,7 @@ static int __commit_transaction(struct dm_cache_metadata *cmd, ...@@ -626,7 +626,7 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
disk_super->hint_root = cpu_to_le64(cmd->hint_root); disk_super->hint_root = cpu_to_le64(cmd->hint_root);
disk_super->discard_root = cpu_to_le64(cmd->discard_root); disk_super->discard_root = cpu_to_le64(cmd->discard_root);
disk_super->discard_block_size = cpu_to_le64(cmd->discard_block_size); disk_super->discard_block_size = cpu_to_le64(cmd->discard_block_size);
disk_super->discard_nr_blocks = cpu_to_le64(from_oblock(cmd->discard_nr_blocks)); disk_super->discard_nr_blocks = cpu_to_le64(from_dblock(cmd->discard_nr_blocks));
disk_super->cache_blocks = cpu_to_le32(from_cblock(cmd->cache_blocks)); disk_super->cache_blocks = cpu_to_le32(from_cblock(cmd->cache_blocks));
strncpy(disk_super->policy_name, cmd->policy_name, sizeof(disk_super->policy_name)); strncpy(disk_super->policy_name, cmd->policy_name, sizeof(disk_super->policy_name));
disk_super->policy_version[0] = cpu_to_le32(cmd->policy_version[0]); disk_super->policy_version[0] = cpu_to_le32(cmd->policy_version[0]);
...@@ -797,15 +797,15 @@ int dm_cache_resize(struct dm_cache_metadata *cmd, dm_cblock_t new_cache_size) ...@@ -797,15 +797,15 @@ int dm_cache_resize(struct dm_cache_metadata *cmd, dm_cblock_t new_cache_size)
int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd, int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd,
sector_t discard_block_size, sector_t discard_block_size,
dm_oblock_t new_nr_entries) dm_dblock_t new_nr_entries)
{ {
int r; int r;
down_write(&cmd->root_lock); down_write(&cmd->root_lock);
r = dm_bitset_resize(&cmd->discard_info, r = dm_bitset_resize(&cmd->discard_info,
cmd->discard_root, cmd->discard_root,
from_oblock(cmd->discard_nr_blocks), from_dblock(cmd->discard_nr_blocks),
from_oblock(new_nr_entries), from_dblock(new_nr_entries),
false, &cmd->discard_root); false, &cmd->discard_root);
if (!r) { if (!r) {
cmd->discard_block_size = discard_block_size; cmd->discard_block_size = discard_block_size;
...@@ -818,28 +818,28 @@ int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd, ...@@ -818,28 +818,28 @@ int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd,
return r; return r;
} }
static int __set_discard(struct dm_cache_metadata *cmd, dm_oblock_t b) static int __set_discard(struct dm_cache_metadata *cmd, dm_dblock_t b)
{ {
return dm_bitset_set_bit(&cmd->discard_info, cmd->discard_root, return dm_bitset_set_bit(&cmd->discard_info, cmd->discard_root,
from_oblock(b), &cmd->discard_root); from_dblock(b), &cmd->discard_root);
} }
static int __clear_discard(struct dm_cache_metadata *cmd, dm_oblock_t b) static int __clear_discard(struct dm_cache_metadata *cmd, dm_dblock_t b)
{ {
return dm_bitset_clear_bit(&cmd->discard_info, cmd->discard_root, return dm_bitset_clear_bit(&cmd->discard_info, cmd->discard_root,
from_oblock(b), &cmd->discard_root); from_dblock(b), &cmd->discard_root);
} }
static int __is_discarded(struct dm_cache_metadata *cmd, dm_oblock_t b, static int __is_discarded(struct dm_cache_metadata *cmd, dm_dblock_t b,
bool *is_discarded) bool *is_discarded)
{ {
return dm_bitset_test_bit(&cmd->discard_info, cmd->discard_root, return dm_bitset_test_bit(&cmd->discard_info, cmd->discard_root,
from_oblock(b), &cmd->discard_root, from_dblock(b), &cmd->discard_root,
is_discarded); is_discarded);
} }
static int __discard(struct dm_cache_metadata *cmd, static int __discard(struct dm_cache_metadata *cmd,
dm_oblock_t dblock, bool discard) dm_dblock_t dblock, bool discard)
{ {
int r; int r;
...@@ -852,7 +852,7 @@ static int __discard(struct dm_cache_metadata *cmd, ...@@ -852,7 +852,7 @@ static int __discard(struct dm_cache_metadata *cmd,
} }
int dm_cache_set_discard(struct dm_cache_metadata *cmd, int dm_cache_set_discard(struct dm_cache_metadata *cmd,
dm_oblock_t dblock, bool discard) dm_dblock_t dblock, bool discard)
{ {
int r; int r;
...@@ -870,8 +870,8 @@ static int __load_discards(struct dm_cache_metadata *cmd, ...@@ -870,8 +870,8 @@ static int __load_discards(struct dm_cache_metadata *cmd,
dm_block_t b; dm_block_t b;
bool discard; bool discard;
for (b = 0; b < from_oblock(cmd->discard_nr_blocks); b++) { for (b = 0; b < from_dblock(cmd->discard_nr_blocks); b++) {
dm_oblock_t dblock = to_oblock(b); dm_dblock_t dblock = to_dblock(b);
if (cmd->clean_when_opened) { if (cmd->clean_when_opened) {
r = __is_discarded(cmd, dblock, &discard); r = __is_discarded(cmd, dblock, &discard);
......
...@@ -70,14 +70,14 @@ dm_cblock_t dm_cache_size(struct dm_cache_metadata *cmd); ...@@ -70,14 +70,14 @@ dm_cblock_t dm_cache_size(struct dm_cache_metadata *cmd);
int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd, int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd,
sector_t discard_block_size, sector_t discard_block_size,
dm_oblock_t new_nr_entries); dm_dblock_t new_nr_entries);
typedef int (*load_discard_fn)(void *context, sector_t discard_block_size, typedef int (*load_discard_fn)(void *context, sector_t discard_block_size,
dm_oblock_t dblock, bool discarded); dm_dblock_t dblock, bool discarded);
int dm_cache_load_discards(struct dm_cache_metadata *cmd, int dm_cache_load_discards(struct dm_cache_metadata *cmd,
load_discard_fn fn, void *context); load_discard_fn fn, void *context);
int dm_cache_set_discard(struct dm_cache_metadata *cmd, dm_oblock_t dblock, bool discard); int dm_cache_set_discard(struct dm_cache_metadata *cmd, dm_dblock_t dblock, bool discard);
int dm_cache_remove_mapping(struct dm_cache_metadata *cmd, dm_cblock_t cblock); int dm_cache_remove_mapping(struct dm_cache_metadata *cmd, dm_cblock_t cblock);
int dm_cache_insert_mapping(struct dm_cache_metadata *cmd, dm_cblock_t cblock, dm_oblock_t oblock); int dm_cache_insert_mapping(struct dm_cache_metadata *cmd, dm_cblock_t cblock, dm_oblock_t oblock);
......
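
As a hypothetical illustration of the load_discard_fn interface above
(note the callback now takes a dm_dblock_t), a caller might mirror the
on-disk discard bitset into an in-core bitmap as below.  The context
structure and names are invented for the example; this is not the
dm-cache-target code from this series:

/* Illustrative sketch only -- not the dm-cache-target code. */
struct discard_load_context {
	unsigned long *discard_bitset;	/* one bit per discard block */
};

static int load_discard(void *context, sector_t discard_block_size,
			dm_dblock_t dblock, bool discarded)
{
	struct discard_load_context *ctx = context;

	/* discard_block_size is fixed for the device and unused here. */
	if (discarded)
		set_bit(from_dblock(dblock), ctx->discard_bitset);
	else
		clear_bit(from_dblock(dblock), ctx->discard_bitset);

	return 0;
}

/*
 * Usage (error handling elided):
 *
 *	struct discard_load_context ctx = { .discard_bitset = bitset };
 *	r = dm_cache_load_discards(cmd, load_discard, &ctx);
 */
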
...@@ -181,24 +181,30 @@ static void queue_shift_down(struct queue *q) ...@@ -181,24 +181,30 @@ static void queue_shift_down(struct queue *q)
* Gives us the oldest entry of the lowest popoulated level. If the first * Gives us the oldest entry of the lowest popoulated level. If the first
* level is emptied then we shift down one level. * level is emptied then we shift down one level.
*/ */
static struct list_head *queue_pop(struct queue *q) static struct list_head *queue_peek(struct queue *q)
{ {
unsigned level; unsigned level;
struct list_head *r;
for (level = 0; level < NR_QUEUE_LEVELS; level++) for (level = 0; level < NR_QUEUE_LEVELS; level++)
if (!list_empty(q->qs + level)) { if (!list_empty(q->qs + level))
r = q->qs[level].next; return q->qs[level].next;
list_del(r);
/* have we just emptied the bottom level? */ return NULL;
if (level == 0 && list_empty(q->qs)) }
queue_shift_down(q);
return r; static struct list_head *queue_pop(struct queue *q)
} {
struct list_head *r = queue_peek(q);
return NULL; if (r) {
list_del(r);
/* have we just emptied the bottom level? */
if (list_empty(q->qs))
queue_shift_down(q);
}
return r;
} }
static struct list_head *list_pop(struct list_head *lh) static struct list_head *list_pop(struct list_head *lh)
...@@ -383,13 +389,6 @@ struct mq_policy { ...@@ -383,13 +389,6 @@ struct mq_policy {
unsigned generation; unsigned generation;
unsigned generation_period; /* in lookups (will probably change) */ unsigned generation_period; /* in lookups (will probably change) */
/*
* Entries in the pre_cache whose hit count passes the promotion
* threshold move to the cache proper. Working out the correct
* value for the promotion_threshold is crucial to this policy.
*/
unsigned promote_threshold;
unsigned discard_promote_adjustment; unsigned discard_promote_adjustment;
unsigned read_promote_adjustment; unsigned read_promote_adjustment;
unsigned write_promote_adjustment; unsigned write_promote_adjustment;
...@@ -406,6 +405,7 @@ struct mq_policy { ...@@ -406,6 +405,7 @@ struct mq_policy {
#define DEFAULT_DISCARD_PROMOTE_ADJUSTMENT 1 #define DEFAULT_DISCARD_PROMOTE_ADJUSTMENT 1
#define DEFAULT_READ_PROMOTE_ADJUSTMENT 4 #define DEFAULT_READ_PROMOTE_ADJUSTMENT 4
#define DEFAULT_WRITE_PROMOTE_ADJUSTMENT 8 #define DEFAULT_WRITE_PROMOTE_ADJUSTMENT 8
#define DISCOURAGE_DEMOTING_DIRTY_THRESHOLD 128
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
...@@ -518,6 +518,12 @@ static struct entry *pop(struct mq_policy *mq, struct queue *q) ...@@ -518,6 +518,12 @@ static struct entry *pop(struct mq_policy *mq, struct queue *q)
return e; return e;
} }
static struct entry *peek(struct queue *q)
{
struct list_head *h = queue_peek(q);
return h ? container_of(h, struct entry, list) : NULL;
}
/* /*
* Has this entry already been updated? * Has this entry already been updated?
*/ */
...@@ -570,10 +576,6 @@ static void check_generation(struct mq_policy *mq) ...@@ -570,10 +576,6 @@ static void check_generation(struct mq_policy *mq)
break; break;
} }
} }
mq->promote_threshold = nr ? total / nr : 1;
if (mq->promote_threshold * nr < total)
mq->promote_threshold++;
} }
} }
...@@ -640,6 +642,30 @@ static int demote_cblock(struct mq_policy *mq, dm_oblock_t *oblock) ...@@ -640,6 +642,30 @@ static int demote_cblock(struct mq_policy *mq, dm_oblock_t *oblock)
return 0; return 0;
} }
/*
* Entries in the pre_cache whose hit count passes the promotion
* threshold move to the cache proper. Working out the correct
* value for the promotion_threshold is crucial to this policy.
*/
static unsigned promote_threshold(struct mq_policy *mq)
{
struct entry *e;
if (any_free_cblocks(mq))
return 0;
e = peek(&mq->cache_clean);
if (e)
return e->hit_count;
e = peek(&mq->cache_dirty);
if (e)
return e->hit_count + DISCOURAGE_DEMOTING_DIRTY_THRESHOLD;
/* This should never happen */
return 0;
}
/* /*
* We modify the basic promotion_threshold depending on the specific io. * We modify the basic promotion_threshold depending on the specific io.
* *
...@@ -653,7 +679,7 @@ static unsigned adjusted_promote_threshold(struct mq_policy *mq, ...@@ -653,7 +679,7 @@ static unsigned adjusted_promote_threshold(struct mq_policy *mq,
bool discarded_oblock, int data_dir) bool discarded_oblock, int data_dir)
{ {
if (data_dir == READ) if (data_dir == READ)
return mq->promote_threshold + mq->read_promote_adjustment; return promote_threshold(mq) + mq->read_promote_adjustment;
if (discarded_oblock && (any_free_cblocks(mq) || any_clean_cblocks(mq))) { if (discarded_oblock && (any_free_cblocks(mq) || any_clean_cblocks(mq))) {
/* /*
...@@ -663,7 +689,7 @@ static unsigned adjusted_promote_threshold(struct mq_policy *mq, ...@@ -663,7 +689,7 @@ static unsigned adjusted_promote_threshold(struct mq_policy *mq,
return mq->discard_promote_adjustment; return mq->discard_promote_adjustment;
} }
return mq->promote_threshold + mq->write_promote_adjustment; return promote_threshold(mq) + mq->write_promote_adjustment;
} }
static bool should_promote(struct mq_policy *mq, struct entry *e, static bool should_promote(struct mq_policy *mq, struct entry *e,
...@@ -839,7 +865,8 @@ static int map(struct mq_policy *mq, dm_oblock_t oblock, ...@@ -839,7 +865,8 @@ static int map(struct mq_policy *mq, dm_oblock_t oblock,
if (e && in_cache(mq, e)) if (e && in_cache(mq, e))
r = cache_entry_found(mq, e, result); r = cache_entry_found(mq, e, result);
else if (iot_pattern(&mq->tracker) == PATTERN_SEQUENTIAL) else if (mq->tracker.thresholds[PATTERN_SEQUENTIAL] &&
iot_pattern(&mq->tracker) == PATTERN_SEQUENTIAL)
result->op = POLICY_MISS; result->op = POLICY_MISS;
else if (e) else if (e)
...@@ -1230,7 +1257,6 @@ static struct dm_cache_policy *mq_create(dm_cblock_t cache_size, ...@@ -1230,7 +1257,6 @@ static struct dm_cache_policy *mq_create(dm_cblock_t cache_size,
mq->tick = 0; mq->tick = 0;
mq->hit_count = 0; mq->hit_count = 0;
mq->generation = 0; mq->generation = 0;
mq->promote_threshold = 0;
mq->discard_promote_adjustment = DEFAULT_DISCARD_PROMOTE_ADJUSTMENT; mq->discard_promote_adjustment = DEFAULT_DISCARD_PROMOTE_ADJUSTMENT;
mq->read_promote_adjustment = DEFAULT_READ_PROMOTE_ADJUSTMENT; mq->read_promote_adjustment = DEFAULT_READ_PROMOTE_ADJUSTMENT;
mq->write_promote_adjustment = DEFAULT_WRITE_PROMOTE_ADJUSTMENT; mq->write_promote_adjustment = DEFAULT_WRITE_PROMOTE_ADJUSTMENT;
...@@ -1265,7 +1291,7 @@ static struct dm_cache_policy *mq_create(dm_cblock_t cache_size, ...@@ -1265,7 +1291,7 @@ static struct dm_cache_policy *mq_create(dm_cblock_t cache_size,
static struct dm_cache_policy_type mq_policy_type = { static struct dm_cache_policy_type mq_policy_type = {
.name = "mq", .name = "mq",
.version = {1, 2, 0}, .version = {1, 3, 0},
.hint_size = 4, .hint_size = 4,
.owner = THIS_MODULE, .owner = THIS_MODULE,
.create = mq_create .create = mq_create
...@@ -1273,7 +1299,7 @@ static struct dm_cache_policy_type mq_policy_type = { ...@@ -1273,7 +1299,7 @@ static struct dm_cache_policy_type mq_policy_type = {
static struct dm_cache_policy_type default_policy_type = { static struct dm_cache_policy_type default_policy_type = {
.name = "default", .name = "default",
.version = {1, 2, 0}, .version = {1, 3, 0},
.hint_size = 4, .hint_size = 4,
.owner = THIS_MODULE, .owner = THIS_MODULE,
.create = mq_create, .create = mq_create,
......
...@@ -95,7 +95,6 @@ static void dm_unhook_bio(struct dm_hook_info *h, struct bio *bio) ...@@ -95,7 +95,6 @@ static void dm_unhook_bio(struct dm_hook_info *h, struct bio *bio)
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
#define PRISON_CELLS 1024
#define MIGRATION_POOL_SIZE 128 #define MIGRATION_POOL_SIZE 128
#define COMMIT_PERIOD HZ #define COMMIT_PERIOD HZ
#define MIGRATION_COUNT_WINDOW 10 #define MIGRATION_COUNT_WINDOW 10
...@@ -237,8 +236,9 @@ struct cache { ...@@ -237,8 +236,9 @@ struct cache {
/* /*
* origin_blocks entries, discarded if set. * origin_blocks entries, discarded if set.
*/ */
dm_oblock_t discard_nr_blocks; dm_dblock_t discard_nr_blocks;
unsigned long *discard_bitset; unsigned long *discard_bitset;
uint32_t discard_block_size; /* a power of 2 times sectors per block */
/* /*
* Rather than reconstructing the table line for the status we just * Rather than reconstructing the table line for the status we just
...@@ -310,6 +310,7 @@ struct dm_cache_migration { ...@@ -310,6 +310,7 @@ struct dm_cache_migration {
dm_cblock_t cblock; dm_cblock_t cblock;
bool err:1; bool err:1;
bool discard:1;
bool writeback:1; bool writeback:1;
bool demote:1; bool demote:1;
bool promote:1; bool promote:1;
...@@ -433,11 +434,12 @@ static void prealloc_put_cell(struct prealloc *p, struct dm_bio_prison_cell *cel ...@@ -433,11 +434,12 @@ static void prealloc_put_cell(struct prealloc *p, struct dm_bio_prison_cell *cel
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
static void build_key(dm_oblock_t oblock, struct dm_cell_key *key) static void build_key(dm_oblock_t begin, dm_oblock_t end, struct dm_cell_key *key)
{ {
key->virtual = 0; key->virtual = 0;
key->dev = 0; key->dev = 0;
key->block = from_oblock(oblock); key->block_begin = from_oblock(begin);
key->block_end = from_oblock(end);
} }
/* /*
...@@ -447,15 +449,15 @@ static void build_key(dm_oblock_t oblock, struct dm_cell_key *key) ...@@ -447,15 +449,15 @@ static void build_key(dm_oblock_t oblock, struct dm_cell_key *key)
*/ */
typedef void (*cell_free_fn)(void *context, struct dm_bio_prison_cell *cell); typedef void (*cell_free_fn)(void *context, struct dm_bio_prison_cell *cell);
static int bio_detain(struct cache *cache, dm_oblock_t oblock, static int bio_detain_range(struct cache *cache, dm_oblock_t oblock_begin, dm_oblock_t oblock_end,
struct bio *bio, struct dm_bio_prison_cell *cell_prealloc, struct bio *bio, struct dm_bio_prison_cell *cell_prealloc,
cell_free_fn free_fn, void *free_context, cell_free_fn free_fn, void *free_context,
struct dm_bio_prison_cell **cell_result) struct dm_bio_prison_cell **cell_result)
{ {
int r; int r;
struct dm_cell_key key; struct dm_cell_key key;
build_key(oblock, &key); build_key(oblock_begin, oblock_end, &key);
r = dm_bio_detain(cache->prison, &key, bio, cell_prealloc, cell_result); r = dm_bio_detain(cache->prison, &key, bio, cell_prealloc, cell_result);
if (r) if (r)
free_fn(free_context, cell_prealloc); free_fn(free_context, cell_prealloc);
...@@ -463,6 +465,16 @@ static int bio_detain(struct cache *cache, dm_oblock_t oblock, ...@@ -463,6 +465,16 @@ static int bio_detain(struct cache *cache, dm_oblock_t oblock,
return r; return r;
} }
static int bio_detain(struct cache *cache, dm_oblock_t oblock,
struct bio *bio, struct dm_bio_prison_cell *cell_prealloc,
cell_free_fn free_fn, void *free_context,
struct dm_bio_prison_cell **cell_result)
{
dm_oblock_t end = to_oblock(from_oblock(oblock) + 1ULL);
return bio_detain_range(cache, oblock, end, bio,
cell_prealloc, free_fn, free_context, cell_result);
}
static int get_cell(struct cache *cache, static int get_cell(struct cache *cache,
dm_oblock_t oblock, dm_oblock_t oblock,
struct prealloc *structs, struct prealloc *structs,
...@@ -474,7 +486,7 @@ static int get_cell(struct cache *cache, ...@@ -474,7 +486,7 @@ static int get_cell(struct cache *cache,
cell_prealloc = prealloc_get_cell(structs); cell_prealloc = prealloc_get_cell(structs);
build_key(oblock, &key); build_key(oblock, to_oblock(from_oblock(oblock) + 1ULL), &key);
r = dm_get_cell(cache->prison, &key, cell_prealloc, cell_result); r = dm_get_cell(cache->prison, &key, cell_prealloc, cell_result);
if (r) if (r)
prealloc_put_cell(structs, cell_prealloc); prealloc_put_cell(structs, cell_prealloc);
...@@ -524,33 +536,57 @@ static dm_block_t block_div(dm_block_t b, uint32_t n) ...@@ -524,33 +536,57 @@ static dm_block_t block_div(dm_block_t b, uint32_t n)
return b; return b;
} }
static void set_discard(struct cache *cache, dm_oblock_t b) static dm_block_t oblocks_per_dblock(struct cache *cache)
{
dm_block_t oblocks = cache->discard_block_size;
if (block_size_is_power_of_two(cache))
oblocks >>= cache->sectors_per_block_shift;
else
oblocks = block_div(oblocks, cache->sectors_per_block);
return oblocks;
}
static dm_dblock_t oblock_to_dblock(struct cache *cache, dm_oblock_t oblock)
{
return to_dblock(block_div(from_oblock(oblock),
oblocks_per_dblock(cache)));
}
static dm_oblock_t dblock_to_oblock(struct cache *cache, dm_dblock_t dblock)
{
return to_oblock(from_dblock(dblock) * oblocks_per_dblock(cache));
}
static void set_discard(struct cache *cache, dm_dblock_t b)
{ {
unsigned long flags; unsigned long flags;
BUG_ON(from_dblock(b) >= from_dblock(cache->discard_nr_blocks));
atomic_inc(&cache->stats.discard_count); atomic_inc(&cache->stats.discard_count);
spin_lock_irqsave(&cache->lock, flags); spin_lock_irqsave(&cache->lock, flags);
set_bit(from_oblock(b), cache->discard_bitset); set_bit(from_dblock(b), cache->discard_bitset);
spin_unlock_irqrestore(&cache->lock, flags); spin_unlock_irqrestore(&cache->lock, flags);
} }
static void clear_discard(struct cache *cache, dm_oblock_t b) static void clear_discard(struct cache *cache, dm_dblock_t b)
{ {
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&cache->lock, flags); spin_lock_irqsave(&cache->lock, flags);
clear_bit(from_oblock(b), cache->discard_bitset); clear_bit(from_dblock(b), cache->discard_bitset);
spin_unlock_irqrestore(&cache->lock, flags); spin_unlock_irqrestore(&cache->lock, flags);
} }
static bool is_discarded(struct cache *cache, dm_oblock_t b) static bool is_discarded(struct cache *cache, dm_dblock_t b)
{ {
int r; int r;
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&cache->lock, flags); spin_lock_irqsave(&cache->lock, flags);
r = test_bit(from_oblock(b), cache->discard_bitset); r = test_bit(from_dblock(b), cache->discard_bitset);
spin_unlock_irqrestore(&cache->lock, flags); spin_unlock_irqrestore(&cache->lock, flags);
return r; return r;
...@@ -562,7 +598,8 @@ static bool is_discarded_oblock(struct cache *cache, dm_oblock_t b) ...@@ -562,7 +598,8 @@ static bool is_discarded_oblock(struct cache *cache, dm_oblock_t b)
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&cache->lock, flags); spin_lock_irqsave(&cache->lock, flags);
r = test_bit(from_oblock(b), cache->discard_bitset); r = test_bit(from_dblock(oblock_to_dblock(cache, b)),
cache->discard_bitset);
spin_unlock_irqrestore(&cache->lock, flags); spin_unlock_irqrestore(&cache->lock, flags);
return r; return r;
...@@ -687,7 +724,7 @@ static void remap_to_origin_clear_discard(struct cache *cache, struct bio *bio, ...@@ -687,7 +724,7 @@ static void remap_to_origin_clear_discard(struct cache *cache, struct bio *bio,
check_if_tick_bio_needed(cache, bio); check_if_tick_bio_needed(cache, bio);
remap_to_origin(cache, bio); remap_to_origin(cache, bio);
if (bio_data_dir(bio) == WRITE) if (bio_data_dir(bio) == WRITE)
clear_discard(cache, oblock); clear_discard(cache, oblock_to_dblock(cache, oblock));
} }
static void remap_to_cache_dirty(struct cache *cache, struct bio *bio, static void remap_to_cache_dirty(struct cache *cache, struct bio *bio,
...@@ -697,7 +734,7 @@ static void remap_to_cache_dirty(struct cache *cache, struct bio *bio, ...@@ -697,7 +734,7 @@ static void remap_to_cache_dirty(struct cache *cache, struct bio *bio,
remap_to_cache(cache, bio, cblock); remap_to_cache(cache, bio, cblock);
if (bio_data_dir(bio) == WRITE) { if (bio_data_dir(bio) == WRITE) {
set_dirty(cache, oblock, cblock); set_dirty(cache, oblock, cblock);
clear_discard(cache, oblock); clear_discard(cache, oblock_to_dblock(cache, oblock));
} }
} }
...@@ -951,10 +988,14 @@ static void migration_success_post_commit(struct dm_cache_migration *mg) ...@@ -951,10 +988,14 @@ static void migration_success_post_commit(struct dm_cache_migration *mg)
} }
} else { } else {
clear_dirty(cache, mg->new_oblock, mg->cblock); if (mg->requeue_holder) {
if (mg->requeue_holder) clear_dirty(cache, mg->new_oblock, mg->cblock);
cell_defer(cache, mg->new_ocell, true); cell_defer(cache, mg->new_ocell, true);
else { } else {
/*
* The block was promoted via an overwrite, so it's dirty.
*/
set_dirty(cache, mg->new_oblock, mg->cblock);
bio_endio(mg->new_ocell->holder, 0); bio_endio(mg->new_ocell->holder, 0);
cell_defer(cache, mg->new_ocell, false); cell_defer(cache, mg->new_ocell, false);
} }
...@@ -978,7 +1019,7 @@ static void copy_complete(int read_err, unsigned long write_err, void *context) ...@@ -978,7 +1019,7 @@ static void copy_complete(int read_err, unsigned long write_err, void *context)
wake_worker(cache); wake_worker(cache);
} }
static void issue_copy_real(struct dm_cache_migration *mg) static void issue_copy(struct dm_cache_migration *mg)
{ {
int r; int r;
struct dm_io_region o_region, c_region; struct dm_io_region o_region, c_region;
...@@ -1057,11 +1098,46 @@ static void avoid_copy(struct dm_cache_migration *mg) ...@@ -1057,11 +1098,46 @@ static void avoid_copy(struct dm_cache_migration *mg)
migration_success_pre_commit(mg); migration_success_pre_commit(mg);
} }
static void issue_copy(struct dm_cache_migration *mg) static void calc_discard_block_range(struct cache *cache, struct bio *bio,
dm_dblock_t *b, dm_dblock_t *e)
{
sector_t sb = bio->bi_iter.bi_sector;
sector_t se = bio_end_sector(bio);
*b = to_dblock(dm_sector_div_up(sb, cache->discard_block_size));
if (se - sb < cache->discard_block_size)
*e = *b;
else
*e = to_dblock(block_div(se, cache->discard_block_size));
}
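
calc_discard_block_range() rounds the bio's start sector up and its end sector down, so only discard blocks fully covered by the bio get marked. A rough userspace sketch with invented sector numbers (not the kernel helper itself):

/* Userspace sketch of the round-up / round-down range calculation. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t dbs = 128;		/* assumed discard block size in sectors */
	uint64_t sb = 200, se = 1000;	/* assumed bio start / end sectors */

	uint64_t b = (sb + dbs - 1) / dbs;	/* round start up   -> 2 */
	uint64_t e = (se - sb < dbs) ? b	/* bio smaller than one discard block */
				     : se / dbs;	/* round end down -> 7 */

	printf("discard dblocks [%llu, %llu)\n",
	       (unsigned long long)b, (unsigned long long)e);
	return 0;
}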
static void issue_discard(struct dm_cache_migration *mg)
{
dm_dblock_t b, e;
struct bio *bio = mg->new_ocell->holder;
calc_discard_block_range(mg->cache, bio, &b, &e);
while (b != e) {
set_discard(mg->cache, b);
b = to_dblock(from_dblock(b) + 1);
}
bio_endio(bio, 0);
cell_defer(mg->cache, mg->new_ocell, false);
free_migration(mg);
}
static void issue_copy_or_discard(struct dm_cache_migration *mg)
{ {
bool avoid; bool avoid;
struct cache *cache = mg->cache; struct cache *cache = mg->cache;
if (mg->discard) {
issue_discard(mg);
return;
}
if (mg->writeback || mg->demote) if (mg->writeback || mg->demote)
avoid = !is_dirty(cache, mg->cblock) || avoid = !is_dirty(cache, mg->cblock) ||
is_discarded_oblock(cache, mg->old_oblock); is_discarded_oblock(cache, mg->old_oblock);
...@@ -1070,13 +1146,14 @@ static void issue_copy(struct dm_cache_migration *mg) ...@@ -1070,13 +1146,14 @@ static void issue_copy(struct dm_cache_migration *mg)
avoid = is_discarded_oblock(cache, mg->new_oblock); avoid = is_discarded_oblock(cache, mg->new_oblock);
if (!avoid && bio_writes_complete_block(cache, bio)) { if (writeback_mode(&cache->features) &&
!avoid && bio_writes_complete_block(cache, bio)) {
issue_overwrite(mg, bio); issue_overwrite(mg, bio);
return; return;
} }
} }
avoid ? avoid_copy(mg) : issue_copy_real(mg); avoid ? avoid_copy(mg) : issue_copy(mg);
} }
static void complete_migration(struct dm_cache_migration *mg) static void complete_migration(struct dm_cache_migration *mg)
...@@ -1161,6 +1238,7 @@ static void promote(struct cache *cache, struct prealloc *structs, ...@@ -1161,6 +1238,7 @@ static void promote(struct cache *cache, struct prealloc *structs,
struct dm_cache_migration *mg = prealloc_get_migration(structs); struct dm_cache_migration *mg = prealloc_get_migration(structs);
mg->err = false; mg->err = false;
mg->discard = false;
mg->writeback = false; mg->writeback = false;
mg->demote = false; mg->demote = false;
mg->promote = true; mg->promote = true;
...@@ -1184,6 +1262,7 @@ static void writeback(struct cache *cache, struct prealloc *structs, ...@@ -1184,6 +1262,7 @@ static void writeback(struct cache *cache, struct prealloc *structs,
struct dm_cache_migration *mg = prealloc_get_migration(structs); struct dm_cache_migration *mg = prealloc_get_migration(structs);
mg->err = false; mg->err = false;
mg->discard = false;
mg->writeback = true; mg->writeback = true;
mg->demote = false; mg->demote = false;
mg->promote = false; mg->promote = false;
...@@ -1209,6 +1288,7 @@ static void demote_then_promote(struct cache *cache, struct prealloc *structs, ...@@ -1209,6 +1288,7 @@ static void demote_then_promote(struct cache *cache, struct prealloc *structs,
struct dm_cache_migration *mg = prealloc_get_migration(structs); struct dm_cache_migration *mg = prealloc_get_migration(structs);
mg->err = false; mg->err = false;
mg->discard = false;
mg->writeback = false; mg->writeback = false;
mg->demote = true; mg->demote = true;
mg->promote = true; mg->promote = true;
...@@ -1237,6 +1317,7 @@ static void invalidate(struct cache *cache, struct prealloc *structs, ...@@ -1237,6 +1317,7 @@ static void invalidate(struct cache *cache, struct prealloc *structs,
struct dm_cache_migration *mg = prealloc_get_migration(structs); struct dm_cache_migration *mg = prealloc_get_migration(structs);
mg->err = false; mg->err = false;
mg->discard = false;
mg->writeback = false; mg->writeback = false;
mg->demote = true; mg->demote = true;
mg->promote = false; mg->promote = false;
...@@ -1253,6 +1334,26 @@ static void invalidate(struct cache *cache, struct prealloc *structs, ...@@ -1253,6 +1334,26 @@ static void invalidate(struct cache *cache, struct prealloc *structs,
quiesce_migration(mg); quiesce_migration(mg);
} }
static void discard(struct cache *cache, struct prealloc *structs,
struct dm_bio_prison_cell *cell)
{
struct dm_cache_migration *mg = prealloc_get_migration(structs);
mg->err = false;
mg->discard = true;
mg->writeback = false;
mg->demote = false;
mg->promote = false;
mg->requeue_holder = false;
mg->invalidate = false;
mg->cache = cache;
mg->old_ocell = NULL;
mg->new_ocell = cell;
mg->start_jiffies = jiffies;
quiesce_migration(mg);
}
/*---------------------------------------------------------------- /*----------------------------------------------------------------
* bio processing * bio processing
*--------------------------------------------------------------*/ *--------------------------------------------------------------*/
...@@ -1286,31 +1387,27 @@ static void process_flush_bio(struct cache *cache, struct bio *bio) ...@@ -1286,31 +1387,27 @@ static void process_flush_bio(struct cache *cache, struct bio *bio)
issue(cache, bio); issue(cache, bio);
} }
/* static void process_discard_bio(struct cache *cache, struct prealloc *structs,
* People generally discard large parts of a device, eg, the whole device struct bio *bio)
* when formatting. Splitting these large discards up into cache block
* sized ios and then quiescing (always necessary for discard) takes too
* long.
*
* We keep it simple, and allow any size of discard to come in, and just
* mark off blocks on the discard bitset. No passdown occurs!
*
* To implement passdown we need to change the bio_prison such that a cell
* can have a key that spans many blocks.
*/
static void process_discard_bio(struct cache *cache, struct bio *bio)
{ {
dm_block_t start_block = dm_sector_div_up(bio->bi_iter.bi_sector, int r;
cache->sectors_per_block); dm_dblock_t b, e;
dm_block_t end_block = bio_end_sector(bio); struct dm_bio_prison_cell *cell_prealloc, *new_ocell;
dm_block_t b;
end_block = block_div(end_block, cache->sectors_per_block); calc_discard_block_range(cache, bio, &b, &e);
if (b == e) {
bio_endio(bio, 0);
return;
}
for (b = start_block; b < end_block; b++) cell_prealloc = prealloc_get_cell(structs);
set_discard(cache, to_oblock(b)); r = bio_detain_range(cache, dblock_to_oblock(cache, b), dblock_to_oblock(cache, e), bio, cell_prealloc,
(cell_free_fn) prealloc_put_cell,
structs, &new_ocell);
if (r > 0)
return;
bio_endio(bio, 0); discard(cache, structs, new_ocell);
} }
static bool spare_migration_bandwidth(struct cache *cache) static bool spare_migration_bandwidth(struct cache *cache)
...@@ -1340,9 +1437,8 @@ static void process_bio(struct cache *cache, struct prealloc *structs, ...@@ -1340,9 +1437,8 @@ static void process_bio(struct cache *cache, struct prealloc *structs,
dm_oblock_t block = get_bio_block(cache, bio); dm_oblock_t block = get_bio_block(cache, bio);
struct dm_bio_prison_cell *cell_prealloc, *old_ocell, *new_ocell; struct dm_bio_prison_cell *cell_prealloc, *old_ocell, *new_ocell;
struct policy_result lookup_result; struct policy_result lookup_result;
bool discarded_block = is_discarded_oblock(cache, block);
bool passthrough = passthrough_mode(&cache->features); bool passthrough = passthrough_mode(&cache->features);
bool can_migrate = !passthrough && (discarded_block || spare_migration_bandwidth(cache)); bool discarded_block, can_migrate;
/* /*
* Check to see if that block is currently migrating. * Check to see if that block is currently migrating.
...@@ -1354,6 +1450,9 @@ static void process_bio(struct cache *cache, struct prealloc *structs, ...@@ -1354,6 +1450,9 @@ static void process_bio(struct cache *cache, struct prealloc *structs,
if (r > 0) if (r > 0)
return; return;
discarded_block = is_discarded_oblock(cache, block);
can_migrate = !passthrough && (discarded_block || spare_migration_bandwidth(cache));
r = policy_map(cache->policy, block, true, can_migrate, discarded_block, r = policy_map(cache->policy, block, true, can_migrate, discarded_block,
bio, &lookup_result); bio, &lookup_result);
...@@ -1500,7 +1599,7 @@ static void process_deferred_bios(struct cache *cache) ...@@ -1500,7 +1599,7 @@ static void process_deferred_bios(struct cache *cache)
if (bio->bi_rw & REQ_FLUSH) if (bio->bi_rw & REQ_FLUSH)
process_flush_bio(cache, bio); process_flush_bio(cache, bio);
else if (bio->bi_rw & REQ_DISCARD) else if (bio->bi_rw & REQ_DISCARD)
process_discard_bio(cache, bio); process_discard_bio(cache, &structs, bio);
else else
process_bio(cache, &structs, bio); process_bio(cache, &structs, bio);
} }
...@@ -1715,7 +1814,7 @@ static void do_worker(struct work_struct *ws) ...@@ -1715,7 +1814,7 @@ static void do_worker(struct work_struct *ws)
process_invalidation_requests(cache); process_invalidation_requests(cache);
} }
process_migrations(cache, &cache->quiesced_migrations, issue_copy); process_migrations(cache, &cache->quiesced_migrations, issue_copy_or_discard);
process_migrations(cache, &cache->completed_migrations, complete_migration); process_migrations(cache, &cache->completed_migrations, complete_migration);
if (commit_if_needed(cache)) { if (commit_if_needed(cache)) {
...@@ -2180,6 +2279,45 @@ static int create_cache_policy(struct cache *cache, struct cache_args *ca, ...@@ -2180,6 +2279,45 @@ static int create_cache_policy(struct cache *cache, struct cache_args *ca,
return 0; return 0;
} }
/*
* We want the discard block size to be at least the cache block size
* and to have no more than 2^14 discard blocks across the origin.
*/
#define MAX_DISCARD_BLOCKS (1 << 14)
static bool too_many_discard_blocks(sector_t discard_block_size,
sector_t origin_size)
{
(void) sector_div(origin_size, discard_block_size);
return origin_size > MAX_DISCARD_BLOCKS;
}
static sector_t calculate_discard_block_size(sector_t cache_block_size,
sector_t origin_size)
{
sector_t discard_block_size = cache_block_size;
if (origin_size)
while (too_many_discard_blocks(discard_block_size, origin_size))
discard_block_size *= 2;
return discard_block_size;
}
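
The sizing loop can be sanity-checked in userspace; the geometry below is illustrative only (64-sector cache blocks, a 1 TiB origin expressed in 512-byte sectors):

#include <stdio.h>
#include <stdint.h>

#define MAX_DISCARD_BLOCKS (1 << 14)

int main(void)
{
	uint64_t cache_block_size = 64;		/* sectors, assumed */
	uint64_t origin_size = 1ULL << 31;	/* 1 TiB origin in 512-byte sectors, assumed */
	uint64_t dbs = cache_block_size;

	while (origin_size / dbs > MAX_DISCARD_BLOCKS)
		dbs *= 2;

	printf("discard block size: %llu sectors (%llu discard blocks)\n",
	       (unsigned long long)dbs,
	       (unsigned long long)(origin_size / dbs));
	return 0;
}

For these assumed values the loop settles on 131072-sector (64 MiB) discard blocks, i.e. exactly 2^14 of them across the origin.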
static void set_cache_size(struct cache *cache, dm_cblock_t size)
{
dm_block_t nr_blocks = from_cblock(size);
if (nr_blocks > (1 << 20) && cache->cache_size != size)
DMWARN_LIMIT("You have created a cache device with a lot of individual cache blocks (%llu)\n"
"All these mappings can consume a lot of kernel memory, and take some time to read/write.\n"
"Please consider increasing the cache block size to reduce the overall cache block count.",
(unsigned long long) nr_blocks);
cache->cache_size = size;
}
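
As a back-of-the-envelope illustration of when this warning fires (all numbers assumed): with 32 KiB cache blocks, anything much beyond a 32 GiB cache crosses the 2^20 mapping threshold.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t block_bytes = 32 * 1024;	/* assumed 32 KiB cache blocks */
	uint64_t cache_bytes = 64ULL << 30;	/* assumed 64 GiB cache device */
	uint64_t nr_blocks = cache_bytes / block_bytes;

	printf("%llu cache blocks -> warning %s\n",
	       (unsigned long long)nr_blocks,
	       nr_blocks > (1 << 20) ? "printed" : "not printed");
	return 0;
}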
#define DEFAULT_MIGRATION_THRESHOLD 2048 #define DEFAULT_MIGRATION_THRESHOLD 2048
static int cache_create(struct cache_args *ca, struct cache **result) static int cache_create(struct cache_args *ca, struct cache **result)
...@@ -2204,8 +2342,7 @@ static int cache_create(struct cache_args *ca, struct cache **result) ...@@ -2204,8 +2342,7 @@ static int cache_create(struct cache_args *ca, struct cache **result)
ti->num_discard_bios = 1; ti->num_discard_bios = 1;
ti->discards_supported = true; ti->discards_supported = true;
ti->discard_zeroes_data_unsupported = true; ti->discard_zeroes_data_unsupported = true;
/* Discard bios must be split on a block boundary */ ti->split_discard_bios = false;
ti->split_discard_bios = true;
cache->features = ca->features; cache->features = ca->features;
ti->per_bio_data_size = get_per_bio_data_size(cache); ti->per_bio_data_size = get_per_bio_data_size(cache);
...@@ -2235,10 +2372,10 @@ static int cache_create(struct cache_args *ca, struct cache **result) ...@@ -2235,10 +2372,10 @@ static int cache_create(struct cache_args *ca, struct cache **result)
cache->sectors_per_block_shift = -1; cache->sectors_per_block_shift = -1;
cache_size = block_div(cache_size, ca->block_size); cache_size = block_div(cache_size, ca->block_size);
cache->cache_size = to_cblock(cache_size); set_cache_size(cache, to_cblock(cache_size));
} else { } else {
cache->sectors_per_block_shift = __ffs(ca->block_size); cache->sectors_per_block_shift = __ffs(ca->block_size);
cache->cache_size = to_cblock(ca->cache_sectors >> cache->sectors_per_block_shift); set_cache_size(cache, to_cblock(ca->cache_sectors >> cache->sectors_per_block_shift));
} }
r = create_cache_policy(cache, ca, error); r = create_cache_policy(cache, ca, error);
...@@ -2303,13 +2440,17 @@ static int cache_create(struct cache_args *ca, struct cache **result) ...@@ -2303,13 +2440,17 @@ static int cache_create(struct cache_args *ca, struct cache **result)
} }
clear_bitset(cache->dirty_bitset, from_cblock(cache->cache_size)); clear_bitset(cache->dirty_bitset, from_cblock(cache->cache_size));
cache->discard_nr_blocks = cache->origin_blocks; cache->discard_block_size =
cache->discard_bitset = alloc_bitset(from_oblock(cache->discard_nr_blocks)); calculate_discard_block_size(cache->sectors_per_block,
cache->origin_sectors);
cache->discard_nr_blocks = to_dblock(dm_sector_div_up(cache->origin_sectors,
cache->discard_block_size));
cache->discard_bitset = alloc_bitset(from_dblock(cache->discard_nr_blocks));
if (!cache->discard_bitset) { if (!cache->discard_bitset) {
*error = "could not allocate discard bitset"; *error = "could not allocate discard bitset";
goto bad; goto bad;
} }
clear_bitset(cache->discard_bitset, from_oblock(cache->discard_nr_blocks)); clear_bitset(cache->discard_bitset, from_dblock(cache->discard_nr_blocks));
cache->copier = dm_kcopyd_client_create(&dm_kcopyd_throttle); cache->copier = dm_kcopyd_client_create(&dm_kcopyd_throttle);
if (IS_ERR(cache->copier)) { if (IS_ERR(cache->copier)) {
...@@ -2327,7 +2468,7 @@ static int cache_create(struct cache_args *ca, struct cache **result) ...@@ -2327,7 +2468,7 @@ static int cache_create(struct cache_args *ca, struct cache **result)
INIT_DELAYED_WORK(&cache->waker, do_waker); INIT_DELAYED_WORK(&cache->waker, do_waker);
cache->last_commit_jiffies = jiffies; cache->last_commit_jiffies = jiffies;
cache->prison = dm_bio_prison_create(PRISON_CELLS); cache->prison = dm_bio_prison_create();
if (!cache->prison) { if (!cache->prison) {
*error = "could not create bio prison"; *error = "could not create bio prison";
goto bad; goto bad;
...@@ -2549,11 +2690,11 @@ static int __cache_map(struct cache *cache, struct bio *bio, struct dm_bio_priso ...@@ -2549,11 +2690,11 @@ static int __cache_map(struct cache *cache, struct bio *bio, struct dm_bio_priso
static int cache_map(struct dm_target *ti, struct bio *bio) static int cache_map(struct dm_target *ti, struct bio *bio)
{ {
int r; int r;
struct dm_bio_prison_cell *cell; struct dm_bio_prison_cell *cell = NULL;
struct cache *cache = ti->private; struct cache *cache = ti->private;
r = __cache_map(cache, bio, &cell); r = __cache_map(cache, bio, &cell);
if (r == DM_MAPIO_REMAPPED) { if (r == DM_MAPIO_REMAPPED && cell) {
inc_ds(cache, bio, cell); inc_ds(cache, bio, cell);
cell_defer(cache, cell, false); cell_defer(cache, cell, false);
} }
...@@ -2599,16 +2740,16 @@ static int write_discard_bitset(struct cache *cache) ...@@ -2599,16 +2740,16 @@ static int write_discard_bitset(struct cache *cache)
{ {
unsigned i, r; unsigned i, r;
r = dm_cache_discard_bitset_resize(cache->cmd, cache->sectors_per_block, r = dm_cache_discard_bitset_resize(cache->cmd, cache->discard_block_size,
cache->origin_blocks); cache->discard_nr_blocks);
if (r) { if (r) {
DMERR("could not resize on-disk discard bitset"); DMERR("could not resize on-disk discard bitset");
return r; return r;
} }
for (i = 0; i < from_oblock(cache->discard_nr_blocks); i++) { for (i = 0; i < from_dblock(cache->discard_nr_blocks); i++) {
r = dm_cache_set_discard(cache->cmd, to_oblock(i), r = dm_cache_set_discard(cache->cmd, to_dblock(i),
is_discarded(cache, to_oblock(i))); is_discarded(cache, to_dblock(i)));
if (r) if (r)
return r; return r;
} }
...@@ -2680,15 +2821,86 @@ static int load_mapping(void *context, dm_oblock_t oblock, dm_cblock_t cblock, ...@@ -2680,15 +2821,86 @@ static int load_mapping(void *context, dm_oblock_t oblock, dm_cblock_t cblock,
return 0; return 0;
} }
/*
* The discard block size in the on-disk metadata is not
* necessarily the same as the one we're currently using.  So we have to
* be careful to only set the discarded attribute if we know it
* covers a complete block of the new size.
*/
struct discard_load_info {
struct cache *cache;
/*
* These blocks are sized using the on-disk dblock size, rather
* than the current one.
*/
dm_block_t block_size;
dm_block_t discard_begin, discard_end;
};
static void discard_load_info_init(struct cache *cache,
struct discard_load_info *li)
{
li->cache = cache;
li->discard_begin = li->discard_end = 0;
}
static void set_discard_range(struct discard_load_info *li)
{
sector_t b, e;
if (li->discard_begin == li->discard_end)
return;
/*
* Convert to sectors.
*/
b = li->discard_begin * li->block_size;
e = li->discard_end * li->block_size;
/*
* Then convert back to the current dblock size.
*/
b = dm_sector_div_up(b, li->cache->discard_block_size);
sector_div(e, li->cache->discard_block_size);
/*
* The origin may have shrunk, so we need to check we're still in
* bounds.
*/
if (e > from_dblock(li->cache->discard_nr_blocks))
e = from_dblock(li->cache->discard_nr_blocks);
for (; b < e; b++)
set_discard(li->cache, to_dblock(b));
}
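
set_discard_range() has to translate a range expressed in the old on-disk discard block size into the current one, rounding the start up and the end down and clamping to the current bitset size. A userspace sketch with assumed geometries:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Assumed geometries: on-disk discard blocks were 128 sectors,
	 * the freshly computed ones are 512 sectors. */
	uint64_t old_dbs = 128, new_dbs = 512, nr_new_dblocks = 1000;
	uint64_t discard_begin = 10, discard_end = 30;	/* on-disk dblock range */

	uint64_t b_sect = discard_begin * old_dbs;	/* 1280 */
	uint64_t e_sect = discard_end * old_dbs;	/* 3840 */

	uint64_t b = (b_sect + new_dbs - 1) / new_dbs;	/* round up   -> 3 */
	uint64_t e = e_sect / new_dbs;			/* round down -> 7 */
	if (e > nr_new_dblocks)
		e = nr_new_dblocks;

	printf("set current dblocks [%llu, %llu)\n",
	       (unsigned long long)b, (unsigned long long)e);
	return 0;
}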
static int load_discard(void *context, sector_t discard_block_size, static int load_discard(void *context, sector_t discard_block_size,
dm_oblock_t oblock, bool discard) dm_dblock_t dblock, bool discard)
{ {
struct cache *cache = context; struct discard_load_info *li = context;
if (discard) li->block_size = discard_block_size;
set_discard(cache, oblock);
else if (discard) {
clear_discard(cache, oblock); if (from_dblock(dblock) == li->discard_end)
/*
* We're already in a discard range, just extend it.
*/
li->discard_end = li->discard_end + 1ULL;
else {
/*
* Emit the old range and start a new one.
*/
set_discard_range(li);
li->discard_begin = from_dblock(dblock);
li->discard_end = li->discard_begin + 1ULL;
}
} else {
set_discard_range(li);
li->discard_begin = li->discard_end = 0;
}
return 0; return 0;
} }
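
load_discard() effectively run-length encodes the on-disk bitset: consecutive discarded dblocks extend the current range, and a clear bit emits it. A toy model of that accumulation, using an invented bitmap:

/* Toy model of the begin/end accumulation in load_discard() above:
 * consecutive set bits extend the current range, a clear bit emits it. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int bits[] = { 1, 1, 1, 0, 0, 1, 1, 0, 1 };	/* assumed on-disk bitmap */
	uint64_t begin = 0, end = 0;

	for (uint64_t i = 0; i < sizeof(bits) / sizeof(bits[0]); i++) {
		if (bits[i]) {
			if (i == end)
				end++;			/* extend current range */
			else {
				if (begin != end)
					printf("range [%llu, %llu)\n",
					       (unsigned long long)begin,
					       (unsigned long long)end);
				begin = i;
				end = i + 1;		/* start a new range */
			}
		} else {
			if (begin != end)
				printf("range [%llu, %llu)\n",
				       (unsigned long long)begin,
				       (unsigned long long)end);
			begin = end = 0;
		}
	}
	if (begin != end)
		printf("range [%llu, %llu)\n",
		       (unsigned long long)begin, (unsigned long long)end);
	return 0;
}

For the invented bitmap above this prints the ranges [0, 3), [5, 7) and [8, 9), matching the three discarded runs.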
...@@ -2730,7 +2942,7 @@ static int resize_cache_dev(struct cache *cache, dm_cblock_t new_size) ...@@ -2730,7 +2942,7 @@ static int resize_cache_dev(struct cache *cache, dm_cblock_t new_size)
return r; return r;
} }
cache->cache_size = new_size; set_cache_size(cache, new_size);
return 0; return 0;
} }
...@@ -2772,11 +2984,22 @@ static int cache_preresume(struct dm_target *ti) ...@@ -2772,11 +2984,22 @@ static int cache_preresume(struct dm_target *ti)
} }
if (!cache->loaded_discards) { if (!cache->loaded_discards) {
r = dm_cache_load_discards(cache->cmd, load_discard, cache); struct discard_load_info li;
/*
* The discard bitset could have been resized, or the
* discard block size changed. To be safe we start by
* setting every dblock to not discarded.
*/
clear_bitset(cache->discard_bitset, from_dblock(cache->discard_nr_blocks));
discard_load_info_init(cache, &li);
r = dm_cache_load_discards(cache->cmd, load_discard, &li);
if (r) { if (r) {
DMERR("could not load origin discards"); DMERR("could not load origin discards");
return r; return r;
} }
set_discard_range(&li);
cache->loaded_discards = true; cache->loaded_discards = true;
} }
...@@ -3079,8 +3302,9 @@ static void set_discard_limits(struct cache *cache, struct queue_limits *limits) ...@@ -3079,8 +3302,9 @@ static void set_discard_limits(struct cache *cache, struct queue_limits *limits)
/* /*
* FIXME: these limits may be incompatible with the cache device * FIXME: these limits may be incompatible with the cache device
*/ */
limits->max_discard_sectors = cache->sectors_per_block; limits->max_discard_sectors = min_t(sector_t, cache->discard_block_size * 1024,
limits->discard_granularity = cache->sectors_per_block << SECTOR_SHIFT; cache->origin_sectors);
limits->discard_granularity = cache->discard_block_size << SECTOR_SHIFT;
} }
static void cache_io_hints(struct dm_target *ti, struct queue_limits *limits) static void cache_io_hints(struct dm_target *ti, struct queue_limits *limits)
...@@ -3104,7 +3328,7 @@ static void cache_io_hints(struct dm_target *ti, struct queue_limits *limits) ...@@ -3104,7 +3328,7 @@ static void cache_io_hints(struct dm_target *ti, struct queue_limits *limits)
static struct target_type cache_target = { static struct target_type cache_target = {
.name = "cache", .name = "cache",
.version = {1, 5, 0}, .version = {1, 6, 0},
.module = THIS_MODULE, .module = THIS_MODULE,
.ctr = cache_ctr, .ctr = cache_ctr,
.dtr = cache_dtr, .dtr = cache_dtr,
......
...@@ -705,7 +705,7 @@ static int crypt_iv_tcw_whitening(struct crypt_config *cc, ...@@ -705,7 +705,7 @@ static int crypt_iv_tcw_whitening(struct crypt_config *cc,
for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++) for (i = 0; i < ((1 << SECTOR_SHIFT) / 8); i++)
crypto_xor(data + i * 8, buf, 8); crypto_xor(data + i * 8, buf, 8);
out: out:
memset(buf, 0, sizeof(buf)); memzero_explicit(buf, sizeof(buf));
return r; return r;
} }
......
...@@ -684,11 +684,14 @@ static void __dev_status(struct mapped_device *md, struct dm_ioctl *param) ...@@ -684,11 +684,14 @@ static void __dev_status(struct mapped_device *md, struct dm_ioctl *param)
int srcu_idx; int srcu_idx;
param->flags &= ~(DM_SUSPEND_FLAG | DM_READONLY_FLAG | param->flags &= ~(DM_SUSPEND_FLAG | DM_READONLY_FLAG |
DM_ACTIVE_PRESENT_FLAG); DM_ACTIVE_PRESENT_FLAG | DM_INTERNAL_SUSPEND_FLAG);
if (dm_suspended_md(md)) if (dm_suspended_md(md))
param->flags |= DM_SUSPEND_FLAG; param->flags |= DM_SUSPEND_FLAG;
if (dm_suspended_internally_md(md))
param->flags |= DM_INTERNAL_SUSPEND_FLAG;
if (dm_test_deferred_remove_flag(md)) if (dm_test_deferred_remove_flag(md))
param->flags |= DM_DEFERRED_REMOVE; param->flags |= DM_DEFERRED_REMOVE;
......
...@@ -824,7 +824,7 @@ static int message_stats_create(struct mapped_device *md, ...@@ -824,7 +824,7 @@ static int message_stats_create(struct mapped_device *md,
return 1; return 1;
id = dm_stats_create(dm_get_stats(md), start, end, step, program_id, aux_data, id = dm_stats_create(dm_get_stats(md), start, end, step, program_id, aux_data,
dm_internal_suspend, dm_internal_resume, md); dm_internal_suspend_fast, dm_internal_resume_fast, md);
if (id < 0) if (id < 0)
return id; return id;
......
...@@ -1521,18 +1521,32 @@ fmode_t dm_table_get_mode(struct dm_table *t) ...@@ -1521,18 +1521,32 @@ fmode_t dm_table_get_mode(struct dm_table *t)
} }
EXPORT_SYMBOL(dm_table_get_mode); EXPORT_SYMBOL(dm_table_get_mode);
static void suspend_targets(struct dm_table *t, unsigned postsuspend) enum suspend_mode {
PRESUSPEND,
PRESUSPEND_UNDO,
POSTSUSPEND,
};
static void suspend_targets(struct dm_table *t, enum suspend_mode mode)
{ {
int i = t->num_targets; int i = t->num_targets;
struct dm_target *ti = t->targets; struct dm_target *ti = t->targets;
while (i--) { while (i--) {
if (postsuspend) { switch (mode) {
case PRESUSPEND:
if (ti->type->presuspend)
ti->type->presuspend(ti);
break;
case PRESUSPEND_UNDO:
if (ti->type->presuspend_undo)
ti->type->presuspend_undo(ti);
break;
case POSTSUSPEND:
if (ti->type->postsuspend) if (ti->type->postsuspend)
ti->type->postsuspend(ti); ti->type->postsuspend(ti);
} else if (ti->type->presuspend) break;
ti->type->presuspend(ti); }
ti++; ti++;
} }
} }
...@@ -1542,7 +1556,15 @@ void dm_table_presuspend_targets(struct dm_table *t) ...@@ -1542,7 +1556,15 @@ void dm_table_presuspend_targets(struct dm_table *t)
if (!t) if (!t)
return; return;
suspend_targets(t, 0); suspend_targets(t, PRESUSPEND);
}
void dm_table_presuspend_undo_targets(struct dm_table *t)
{
if (!t)
return;
suspend_targets(t, PRESUSPEND_UNDO);
} }
void dm_table_postsuspend_targets(struct dm_table *t) void dm_table_postsuspend_targets(struct dm_table *t)
...@@ -1550,7 +1572,7 @@ void dm_table_postsuspend_targets(struct dm_table *t) ...@@ -1550,7 +1572,7 @@ void dm_table_postsuspend_targets(struct dm_table *t)
if (!t) if (!t)
return; return;
suspend_targets(t, 1); suspend_targets(t, POSTSUSPEND);
} }
int dm_table_resume_targets(struct dm_table *t) int dm_table_resume_targets(struct dm_table *t)
......
...@@ -1384,42 +1384,38 @@ static bool __snapshotted_since(struct dm_thin_device *td, uint32_t time) ...@@ -1384,42 +1384,38 @@ static bool __snapshotted_since(struct dm_thin_device *td, uint32_t time)
} }
int dm_thin_find_block(struct dm_thin_device *td, dm_block_t block, int dm_thin_find_block(struct dm_thin_device *td, dm_block_t block,
int can_block, struct dm_thin_lookup_result *result) int can_issue_io, struct dm_thin_lookup_result *result)
{ {
int r = -EINVAL; int r;
uint64_t block_time = 0;
__le64 value; __le64 value;
struct dm_pool_metadata *pmd = td->pmd; struct dm_pool_metadata *pmd = td->pmd;
dm_block_t keys[2] = { td->id, block }; dm_block_t keys[2] = { td->id, block };
struct dm_btree_info *info; struct dm_btree_info *info;
if (can_block) {
down_read(&pmd->root_lock);
info = &pmd->info;
} else if (down_read_trylock(&pmd->root_lock))
info = &pmd->nb_info;
else
return -EWOULDBLOCK;
if (pmd->fail_io) if (pmd->fail_io)
goto out; return -EINVAL;
r = dm_btree_lookup(info, pmd->root, keys, &value); down_read(&pmd->root_lock);
if (!r)
block_time = le64_to_cpu(value);
out: if (can_issue_io) {
up_read(&pmd->root_lock); info = &pmd->info;
} else
info = &pmd->nb_info;
r = dm_btree_lookup(info, pmd->root, keys, &value);
if (!r) { if (!r) {
uint64_t block_time = 0;
dm_block_t exception_block; dm_block_t exception_block;
uint32_t exception_time; uint32_t exception_time;
block_time = le64_to_cpu(value);
unpack_block_time(block_time, &exception_block, unpack_block_time(block_time, &exception_block,
&exception_time); &exception_time);
result->block = exception_block; result->block = exception_block;
result->shared = __snapshotted_since(td, exception_time); result->shared = __snapshotted_since(td, exception_time);
} }
up_read(&pmd->root_lock);
return r; return r;
} }
...@@ -1813,3 +1809,8 @@ bool dm_pool_metadata_needs_check(struct dm_pool_metadata *pmd) ...@@ -1813,3 +1809,8 @@ bool dm_pool_metadata_needs_check(struct dm_pool_metadata *pmd)
return needs_check; return needs_check;
} }
void dm_pool_issue_prefetches(struct dm_pool_metadata *pmd)
{
dm_tm_issue_prefetches(pmd->tm);
}
...@@ -139,12 +139,12 @@ struct dm_thin_lookup_result { ...@@ -139,12 +139,12 @@ struct dm_thin_lookup_result {
/* /*
* Returns: * Returns:
* -EWOULDBLOCK iff @can_block is set and would block. * -EWOULDBLOCK iff @can_issue_io is set and would issue IO
* -ENODATA iff that mapping is not present. * -ENODATA iff that mapping is not present.
* 0 success * 0 success
*/ */
int dm_thin_find_block(struct dm_thin_device *td, dm_block_t block, int dm_thin_find_block(struct dm_thin_device *td, dm_block_t block,
int can_block, struct dm_thin_lookup_result *result); int can_issue_io, struct dm_thin_lookup_result *result);
/* /*
* Obtain an unused block. * Obtain an unused block.
...@@ -213,6 +213,11 @@ int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd, ...@@ -213,6 +213,11 @@ int dm_pool_register_metadata_threshold(struct dm_pool_metadata *pmd,
int dm_pool_metadata_set_needs_check(struct dm_pool_metadata *pmd); int dm_pool_metadata_set_needs_check(struct dm_pool_metadata *pmd);
bool dm_pool_metadata_needs_check(struct dm_pool_metadata *pmd); bool dm_pool_metadata_needs_check(struct dm_pool_metadata *pmd);
/*
* Issue any prefetches that may be useful.
*/
void dm_pool_issue_prefetches(struct dm_pool_metadata *pmd);
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
#endif #endif
...@@ -11,11 +11,13 @@ ...@@ -11,11 +11,13 @@
#include <linux/device-mapper.h> #include <linux/device-mapper.h>
#include <linux/dm-io.h> #include <linux/dm-io.h>
#include <linux/dm-kcopyd.h> #include <linux/dm-kcopyd.h>
#include <linux/log2.h>
#include <linux/list.h> #include <linux/list.h>
#include <linux/rculist.h> #include <linux/rculist.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/sort.h>
#include <linux/rbtree.h> #include <linux/rbtree.h>
#define DM_MSG_PREFIX "thin" #define DM_MSG_PREFIX "thin"
...@@ -25,7 +27,6 @@ ...@@ -25,7 +27,6 @@
*/ */
#define ENDIO_HOOK_POOL_SIZE 1024 #define ENDIO_HOOK_POOL_SIZE 1024
#define MAPPING_POOL_SIZE 1024 #define MAPPING_POOL_SIZE 1024
#define PRISON_CELLS 1024
#define COMMIT_PERIOD HZ #define COMMIT_PERIOD HZ
#define NO_SPACE_TIMEOUT_SECS 60 #define NO_SPACE_TIMEOUT_SECS 60
...@@ -114,7 +115,8 @@ static void build_data_key(struct dm_thin_device *td, ...@@ -114,7 +115,8 @@ static void build_data_key(struct dm_thin_device *td,
{ {
key->virtual = 0; key->virtual = 0;
key->dev = dm_thin_dev_id(td); key->dev = dm_thin_dev_id(td);
key->block = b; key->block_begin = b;
key->block_end = b + 1ULL;
} }
static void build_virtual_key(struct dm_thin_device *td, dm_block_t b, static void build_virtual_key(struct dm_thin_device *td, dm_block_t b,
...@@ -122,7 +124,55 @@ static void build_virtual_key(struct dm_thin_device *td, dm_block_t b, ...@@ -122,7 +124,55 @@ static void build_virtual_key(struct dm_thin_device *td, dm_block_t b,
{ {
key->virtual = 1; key->virtual = 1;
key->dev = dm_thin_dev_id(td); key->dev = dm_thin_dev_id(td);
key->block = b; key->block_begin = b;
key->block_end = b + 1ULL;
}
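
Cell keys now describe a half-open block range, so a single-block lock is just [b, b+1) and a discard can lock many blocks with one cell. A simplified stand-in (struct and function names invented, not the real dm_cell_key / bio-prison API) showing how such range keys would overlap:

#include <stdio.h>
#include <stdint.h>

/* Simplified stand-in for the prison cell key; field names assumed. */
struct toy_key {
	int virtual_dev;
	uint64_t dev;
	uint64_t block_begin;
	uint64_t block_end;	/* half-open: [block_begin, block_end) */
};

static int keys_overlap(const struct toy_key *a, const struct toy_key *b)
{
	return a->virtual_dev == b->virtual_dev && a->dev == b->dev &&
	       a->block_begin < b->block_end && b->block_begin < a->block_end;
}

int main(void)
{
	struct toy_key discard = { 1, 7, 100, 200 };	/* discard spanning 100 blocks */
	struct toy_key write   = { 1, 7, 150, 151 };	/* single-block write */

	printf("overlap: %d\n", keys_overlap(&discard, &write));	/* 1 */
	return 0;
}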
/*----------------------------------------------------------------*/
#define THROTTLE_THRESHOLD (1 * HZ)
struct throttle {
struct rw_semaphore lock;
unsigned long threshold;
bool throttle_applied;
};
static void throttle_init(struct throttle *t)
{
init_rwsem(&t->lock);
t->throttle_applied = false;
}
static void throttle_work_start(struct throttle *t)
{
t->threshold = jiffies + THROTTLE_THRESHOLD;
}
static void throttle_work_update(struct throttle *t)
{
if (!t->throttle_applied && jiffies > t->threshold) {
down_write(&t->lock);
t->throttle_applied = true;
}
}
static void throttle_work_complete(struct throttle *t)
{
if (t->throttle_applied) {
t->throttle_applied = false;
up_write(&t->lock);
}
}
static void throttle_lock(struct throttle *t)
{
down_read(&t->lock);
}
static void throttle_unlock(struct throttle *t)
{
up_read(&t->lock);
} }
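
The throttle is a plain read/write semaphore driven by elapsed time: bio submitters take it for read, and once the worker has run past THROTTLE_THRESHOLD it takes it for write, stalling new submitters until the round of work completes. A purely illustrative pthreads analogue (timing and structure simplified):

/* Compile with: cc -pthread throttle_sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

struct throttle {
	pthread_rwlock_t lock;
	time_t threshold;
	bool applied;
};

static void throttle_work_start(struct throttle *t)
{
	t->threshold = time(NULL) + 1;	/* stands in for jiffies + THROTTLE_THRESHOLD */
}

static void throttle_work_update(struct throttle *t)
{
	if (!t->applied && time(NULL) > t->threshold) {
		pthread_rwlock_wrlock(&t->lock);	/* block new submitters */
		t->applied = true;
	}
}

static void throttle_work_complete(struct throttle *t)
{
	if (t->applied) {
		t->applied = false;
		pthread_rwlock_unlock(&t->lock);
	}
}

int main(void)
{
	struct throttle t = { .applied = false };

	pthread_rwlock_init(&t.lock, NULL);
	throttle_work_start(&t);
	throttle_work_update(&t);	/* worker would call this periodically */
	throttle_work_complete(&t);
	pthread_rwlock_destroy(&t.lock);
	return 0;
}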
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
...@@ -155,8 +205,11 @@ struct pool_features { ...@@ -155,8 +205,11 @@ struct pool_features {
struct thin_c; struct thin_c;
typedef void (*process_bio_fn)(struct thin_c *tc, struct bio *bio); typedef void (*process_bio_fn)(struct thin_c *tc, struct bio *bio);
typedef void (*process_cell_fn)(struct thin_c *tc, struct dm_bio_prison_cell *cell);
typedef void (*process_mapping_fn)(struct dm_thin_new_mapping *m); typedef void (*process_mapping_fn)(struct dm_thin_new_mapping *m);
#define CELL_SORT_ARRAY_SIZE 8192
struct pool { struct pool {
struct list_head list; struct list_head list;
struct dm_target *ti; /* Only set if a pool target is bound */ struct dm_target *ti; /* Only set if a pool target is bound */
...@@ -171,11 +224,13 @@ struct pool { ...@@ -171,11 +224,13 @@ struct pool {
struct pool_features pf; struct pool_features pf;
bool low_water_triggered:1; /* A dm event has been sent */ bool low_water_triggered:1; /* A dm event has been sent */
bool suspended:1;
struct dm_bio_prison *prison; struct dm_bio_prison *prison;
struct dm_kcopyd_client *copier; struct dm_kcopyd_client *copier;
struct workqueue_struct *wq; struct workqueue_struct *wq;
struct throttle throttle;
struct work_struct worker; struct work_struct worker;
struct delayed_work waker; struct delayed_work waker;
struct delayed_work no_space_timeout; struct delayed_work no_space_timeout;
...@@ -198,8 +253,13 @@ struct pool { ...@@ -198,8 +253,13 @@ struct pool {
process_bio_fn process_bio; process_bio_fn process_bio;
process_bio_fn process_discard; process_bio_fn process_discard;
process_cell_fn process_cell;
process_cell_fn process_discard_cell;
process_mapping_fn process_prepared_mapping; process_mapping_fn process_prepared_mapping;
process_mapping_fn process_prepared_discard; process_mapping_fn process_prepared_discard;
struct dm_bio_prison_cell *cell_sort_array[CELL_SORT_ARRAY_SIZE];
}; };
static enum pool_mode get_pool_mode(struct pool *pool); static enum pool_mode get_pool_mode(struct pool *pool);
...@@ -232,8 +292,11 @@ struct thin_c { ...@@ -232,8 +292,11 @@ struct thin_c {
struct pool *pool; struct pool *pool;
struct dm_thin_device *td; struct dm_thin_device *td;
struct mapped_device *thin_md;
bool requeue_mode:1; bool requeue_mode:1;
spinlock_t lock; spinlock_t lock;
struct list_head deferred_cells;
struct bio_list deferred_bio_list; struct bio_list deferred_bio_list;
struct bio_list retry_on_resume_list; struct bio_list retry_on_resume_list;
struct rb_root sort_bio_list; /* sorted list of deferred bios */ struct rb_root sort_bio_list; /* sorted list of deferred bios */
...@@ -290,6 +353,15 @@ static void cell_release(struct pool *pool, ...@@ -290,6 +353,15 @@ static void cell_release(struct pool *pool,
dm_bio_prison_free_cell(pool->prison, cell); dm_bio_prison_free_cell(pool->prison, cell);
} }
static void cell_visit_release(struct pool *pool,
void (*fn)(void *, struct dm_bio_prison_cell *),
void *context,
struct dm_bio_prison_cell *cell)
{
dm_cell_visit_release(pool->prison, fn, context, cell);
dm_bio_prison_free_cell(pool->prison, cell);
}
static void cell_release_no_holder(struct pool *pool, static void cell_release_no_holder(struct pool *pool,
struct dm_bio_prison_cell *cell, struct dm_bio_prison_cell *cell,
struct bio_list *bios) struct bio_list *bios)
...@@ -298,19 +370,6 @@ static void cell_release_no_holder(struct pool *pool, ...@@ -298,19 +370,6 @@ static void cell_release_no_holder(struct pool *pool,
dm_bio_prison_free_cell(pool->prison, cell); dm_bio_prison_free_cell(pool->prison, cell);
} }
static void cell_defer_no_holder_no_free(struct thin_c *tc,
struct dm_bio_prison_cell *cell)
{
struct pool *pool = tc->pool;
unsigned long flags;
spin_lock_irqsave(&tc->lock, flags);
dm_cell_release_no_holder(pool->prison, cell, &tc->deferred_bio_list);
spin_unlock_irqrestore(&tc->lock, flags);
wake_worker(pool);
}
static void cell_error_with_code(struct pool *pool, static void cell_error_with_code(struct pool *pool,
struct dm_bio_prison_cell *cell, int error_code) struct dm_bio_prison_cell *cell, int error_code)
{ {
...@@ -323,6 +382,16 @@ static void cell_error(struct pool *pool, struct dm_bio_prison_cell *cell) ...@@ -323,6 +382,16 @@ static void cell_error(struct pool *pool, struct dm_bio_prison_cell *cell)
cell_error_with_code(pool, cell, -EIO); cell_error_with_code(pool, cell, -EIO);
} }
static void cell_success(struct pool *pool, struct dm_bio_prison_cell *cell)
{
cell_error_with_code(pool, cell, 0);
}
static void cell_requeue(struct pool *pool, struct dm_bio_prison_cell *cell)
{
cell_error_with_code(pool, cell, DM_ENDIO_REQUEUE);
}
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
/* /*
...@@ -393,44 +462,65 @@ struct dm_thin_endio_hook { ...@@ -393,44 +462,65 @@ struct dm_thin_endio_hook {
struct rb_node rb_node; struct rb_node rb_node;
}; };
static void requeue_bio_list(struct thin_c *tc, struct bio_list *master) static void __merge_bio_list(struct bio_list *bios, struct bio_list *master)
{
bio_list_merge(bios, master);
bio_list_init(master);
}
static void error_bio_list(struct bio_list *bios, int error)
{ {
struct bio *bio; struct bio *bio;
while ((bio = bio_list_pop(bios)))
bio_endio(bio, error);
}
static void error_thin_bio_list(struct thin_c *tc, struct bio_list *master, int error)
{
struct bio_list bios; struct bio_list bios;
unsigned long flags; unsigned long flags;
bio_list_init(&bios); bio_list_init(&bios);
spin_lock_irqsave(&tc->lock, flags); spin_lock_irqsave(&tc->lock, flags);
bio_list_merge(&bios, master); __merge_bio_list(&bios, master);
bio_list_init(master);
spin_unlock_irqrestore(&tc->lock, flags); spin_unlock_irqrestore(&tc->lock, flags);
while ((bio = bio_list_pop(&bios))) error_bio_list(&bios, error);
bio_endio(bio, DM_ENDIO_REQUEUE);
} }
static void requeue_io(struct thin_c *tc) static void requeue_deferred_cells(struct thin_c *tc)
{ {
requeue_bio_list(tc, &tc->deferred_bio_list); struct pool *pool = tc->pool;
requeue_bio_list(tc, &tc->retry_on_resume_list); unsigned long flags;
struct list_head cells;
struct dm_bio_prison_cell *cell, *tmp;
INIT_LIST_HEAD(&cells);
spin_lock_irqsave(&tc->lock, flags);
list_splice_init(&tc->deferred_cells, &cells);
spin_unlock_irqrestore(&tc->lock, flags);
list_for_each_entry_safe(cell, tmp, &cells, user_list)
cell_requeue(pool, cell);
} }
static void error_thin_retry_list(struct thin_c *tc) static void requeue_io(struct thin_c *tc)
{ {
struct bio *bio;
unsigned long flags;
struct bio_list bios; struct bio_list bios;
unsigned long flags;
bio_list_init(&bios); bio_list_init(&bios);
spin_lock_irqsave(&tc->lock, flags); spin_lock_irqsave(&tc->lock, flags);
bio_list_merge(&bios, &tc->retry_on_resume_list); __merge_bio_list(&bios, &tc->deferred_bio_list);
bio_list_init(&tc->retry_on_resume_list); __merge_bio_list(&bios, &tc->retry_on_resume_list);
spin_unlock_irqrestore(&tc->lock, flags); spin_unlock_irqrestore(&tc->lock, flags);
while ((bio = bio_list_pop(&bios))) error_bio_list(&bios, DM_ENDIO_REQUEUE);
bio_io_error(bio); requeue_deferred_cells(tc);
} }
static void error_retry_list(struct pool *pool) static void error_retry_list(struct pool *pool)
...@@ -439,7 +529,7 @@ static void error_retry_list(struct pool *pool) ...@@ -439,7 +529,7 @@ static void error_retry_list(struct pool *pool)
rcu_read_lock(); rcu_read_lock();
list_for_each_entry_rcu(tc, &pool->active_thins, list) list_for_each_entry_rcu(tc, &pool->active_thins, list)
error_thin_retry_list(tc); error_thin_bio_list(tc, &tc->retry_on_resume_list, -EIO);
rcu_read_unlock(); rcu_read_unlock();
} }
...@@ -629,33 +719,75 @@ static void overwrite_endio(struct bio *bio, int err) ...@@ -629,33 +719,75 @@ static void overwrite_endio(struct bio *bio, int err)
*/ */
/* /*
* This sends the bios in the cell back to the deferred_bios list. * This sends the bios in the cell, except the original holder, back
* to the deferred_bios list.
*/ */
static void cell_defer(struct thin_c *tc, struct dm_bio_prison_cell *cell) static void cell_defer_no_holder(struct thin_c *tc, struct dm_bio_prison_cell *cell)
{ {
struct pool *pool = tc->pool; struct pool *pool = tc->pool;
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&tc->lock, flags); spin_lock_irqsave(&tc->lock, flags);
cell_release(pool, cell, &tc->deferred_bio_list); cell_release_no_holder(pool, cell, &tc->deferred_bio_list);
spin_unlock_irqrestore(&tc->lock, flags); spin_unlock_irqrestore(&tc->lock, flags);
wake_worker(pool); wake_worker(pool);
} }
/* static void thin_defer_bio(struct thin_c *tc, struct bio *bio);
* Same as cell_defer above, except it omits the original holder of the cell.
*/ struct remap_info {
static void cell_defer_no_holder(struct thin_c *tc, struct dm_bio_prison_cell *cell) struct thin_c *tc;
struct bio_list defer_bios;
struct bio_list issue_bios;
};
static void __inc_remap_and_issue_cell(void *context,
struct dm_bio_prison_cell *cell)
{ {
struct pool *pool = tc->pool; struct remap_info *info = context;
unsigned long flags; struct bio *bio;
spin_lock_irqsave(&tc->lock, flags); while ((bio = bio_list_pop(&cell->bios))) {
cell_release_no_holder(pool, cell, &tc->deferred_bio_list); if (bio->bi_rw & (REQ_DISCARD | REQ_FLUSH | REQ_FUA))
spin_unlock_irqrestore(&tc->lock, flags); bio_list_add(&info->defer_bios, bio);
else {
inc_all_io_entry(info->tc->pool, bio);
wake_worker(pool); /*
* We can't issue the bios with the bio prison lock
* held, so we add them to a list to issue on
* return from this function.
*/
bio_list_add(&info->issue_bios, bio);
}
}
}
static void inc_remap_and_issue_cell(struct thin_c *tc,
struct dm_bio_prison_cell *cell,
dm_block_t block)
{
struct bio *bio;
struct remap_info info;
info.tc = tc;
bio_list_init(&info.defer_bios);
bio_list_init(&info.issue_bios);
/*
* We have to be careful to inc any bios we're about to issue
* before the cell is released, and avoid a race with new bios
* being added to the cell.
*/
cell_visit_release(tc->pool, __inc_remap_and_issue_cell,
&info, cell);
while ((bio = bio_list_pop(&info.defer_bios)))
thin_defer_bio(tc, bio);
while ((bio = bio_list_pop(&info.issue_bios)))
remap_and_issue(info.tc, bio, block);
} }
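
The pattern here is to account for (inc) every bio while the prison lock is still held, but to do the actual submission only after the cell has been released. A much-simplified userspace illustration of that collect-under-lock, issue-after-unlock shape (names invented):

/* Userspace sketch of the "classify under the lock, issue after releasing
 * it" pattern used above (list handling simplified, names invented). */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cell_lock = PTHREAD_MUTEX_INITIALIZER;
static int cell_bios[4] = { 1, 2, 3, 4 };	/* pretend queued bios */

int main(void)
{
	int issue[4], n = 0;

	pthread_mutex_lock(&cell_lock);
	for (int i = 0; i < 4; i++)
		issue[n++] = cell_bios[i];	/* account/inc while still locked */
	pthread_mutex_unlock(&cell_lock);

	for (int i = 0; i < n; i++)		/* actual I/O happens unlocked */
		printf("issue bio %d\n", issue[i]);
	return 0;
}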
static void process_prepared_mapping_fail(struct dm_thin_new_mapping *m) static void process_prepared_mapping_fail(struct dm_thin_new_mapping *m)
...@@ -706,10 +838,13 @@ static void process_prepared_mapping(struct dm_thin_new_mapping *m) ...@@ -706,10 +838,13 @@ static void process_prepared_mapping(struct dm_thin_new_mapping *m)
* the bios in the cell. * the bios in the cell.
*/ */
if (bio) { if (bio) {
cell_defer_no_holder(tc, m->cell); inc_remap_and_issue_cell(tc, m->cell, m->data_block);
bio_endio(bio, 0); bio_endio(bio, 0);
} else } else {
cell_defer(tc, m->cell); inc_all_io_entry(tc->pool, m->cell->holder);
remap_and_issue(tc, m->cell->holder, m->data_block);
inc_remap_and_issue_cell(tc, m->cell, m->data_block);
}
out: out:
list_del(&m->list); list_del(&m->list);
...@@ -842,6 +977,20 @@ static void ll_zero(struct thin_c *tc, struct dm_thin_new_mapping *m, ...@@ -842,6 +977,20 @@ static void ll_zero(struct thin_c *tc, struct dm_thin_new_mapping *m,
} }
} }
static void remap_and_issue_overwrite(struct thin_c *tc, struct bio *bio,
dm_block_t data_block,
struct dm_thin_new_mapping *m)
{
struct pool *pool = tc->pool;
struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook));
h->overwrite_mapping = m;
m->bio = bio;
save_and_set_endio(bio, &m->saved_bi_end_io, overwrite_endio);
inc_all_io_entry(pool, bio);
remap_and_issue(tc, bio, data_block);
}
/* /*
* A partial copy also needs to zero the uncopied region. * A partial copy also needs to zero the uncopied region.
*/ */
...@@ -876,15 +1025,9 @@ static void schedule_copy(struct thin_c *tc, dm_block_t virt_block, ...@@ -876,15 +1025,9 @@ static void schedule_copy(struct thin_c *tc, dm_block_t virt_block,
* If the whole block of data is being overwritten, we can issue the * If the whole block of data is being overwritten, we can issue the
* bio immediately. Otherwise we use kcopyd to clone the data first. * bio immediately. Otherwise we use kcopyd to clone the data first.
*/ */
if (io_overwrites_block(pool, bio)) { if (io_overwrites_block(pool, bio))
struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook)); remap_and_issue_overwrite(tc, bio, data_dest, m);
else {
h->overwrite_mapping = m;
m->bio = bio;
save_and_set_endio(bio, &m->saved_bi_end_io, overwrite_endio);
inc_all_io_entry(pool, bio);
remap_and_issue(tc, bio, data_dest);
} else {
struct dm_io_region from, to; struct dm_io_region from, to;
from.bdev = origin->bdev; from.bdev = origin->bdev;
...@@ -953,16 +1096,10 @@ static void schedule_zero(struct thin_c *tc, dm_block_t virt_block, ...@@ -953,16 +1096,10 @@ static void schedule_zero(struct thin_c *tc, dm_block_t virt_block,
if (!pool->pf.zero_new_blocks) if (!pool->pf.zero_new_blocks)
process_prepared_mapping(m); process_prepared_mapping(m);
else if (io_overwrites_block(pool, bio)) { else if (io_overwrites_block(pool, bio))
struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook)); remap_and_issue_overwrite(tc, bio, data_block, m);
h->overwrite_mapping = m;
m->bio = bio;
save_and_set_endio(bio, &m->saved_bi_end_io, overwrite_endio);
inc_all_io_entry(pool, bio);
remap_and_issue(tc, bio, data_block);
} else else
ll_zero(tc, m, ll_zero(tc, m,
data_block * pool->sectors_per_block, data_block * pool->sectors_per_block,
(data_block + 1) * pool->sectors_per_block); (data_block + 1) * pool->sectors_per_block);
...@@ -1134,29 +1271,25 @@ static void retry_bios_on_resume(struct pool *pool, struct dm_bio_prison_cell *c ...@@ -1134,29 +1271,25 @@ static void retry_bios_on_resume(struct pool *pool, struct dm_bio_prison_cell *c
bio_list_init(&bios); bio_list_init(&bios);
cell_release(pool, cell, &bios); cell_release(pool, cell, &bios);
error = should_error_unserviceable_bio(pool); while ((bio = bio_list_pop(&bios)))
if (error) retry_on_resume(bio);
while ((bio = bio_list_pop(&bios)))
bio_endio(bio, error);
else
while ((bio = bio_list_pop(&bios)))
retry_on_resume(bio);
} }
static void process_discard(struct thin_c *tc, struct bio *bio) static void process_discard_cell(struct thin_c *tc, struct dm_bio_prison_cell *cell)
{ {
int r; int r;
unsigned long flags; struct bio *bio = cell->holder;
struct pool *pool = tc->pool; struct pool *pool = tc->pool;
struct dm_bio_prison_cell *cell, *cell2; struct dm_bio_prison_cell *cell2;
struct dm_cell_key key, key2; struct dm_cell_key key2;
dm_block_t block = get_bio_block(tc, bio); dm_block_t block = get_bio_block(tc, bio);
struct dm_thin_lookup_result lookup_result; struct dm_thin_lookup_result lookup_result;
struct dm_thin_new_mapping *m; struct dm_thin_new_mapping *m;
build_virtual_key(tc->td, block, &key); if (tc->requeue_mode) {
if (bio_detain(tc->pool, &key, bio, &cell)) cell_requeue(pool, cell);
return; return;
}
r = dm_thin_find_block(tc->td, block, 1, &lookup_result); r = dm_thin_find_block(tc->td, block, 1, &lookup_result);
switch (r) { switch (r) {
...@@ -1187,12 +1320,9 @@ static void process_discard(struct thin_c *tc, struct bio *bio) ...@@ -1187,12 +1320,9 @@ static void process_discard(struct thin_c *tc, struct bio *bio)
m->cell2 = cell2; m->cell2 = cell2;
m->bio = bio; m->bio = bio;
if (!dm_deferred_set_add_work(pool->all_io_ds, &m->list)) { if (!dm_deferred_set_add_work(pool->all_io_ds, &m->list))
spin_lock_irqsave(&pool->lock, flags); pool->process_prepared_discard(m);
list_add_tail(&m->list, &pool->prepared_discards);
spin_unlock_irqrestore(&pool->lock, flags);
wake_worker(pool);
}
} else { } else {
inc_all_io_entry(pool, bio); inc_all_io_entry(pool, bio);
cell_defer_no_holder(tc, cell); cell_defer_no_holder(tc, cell);
...@@ -1227,6 +1357,19 @@ static void process_discard(struct thin_c *tc, struct bio *bio) ...@@ -1227,6 +1357,19 @@ static void process_discard(struct thin_c *tc, struct bio *bio)
} }
} }
static void process_discard_bio(struct thin_c *tc, struct bio *bio)
{
struct dm_bio_prison_cell *cell;
struct dm_cell_key key;
dm_block_t block = get_bio_block(tc, bio);
build_virtual_key(tc->td, block, &key);
if (bio_detain(tc->pool, &key, bio, &cell))
return;
process_discard_cell(tc, cell);
}
static void break_sharing(struct thin_c *tc, struct bio *bio, dm_block_t block, static void break_sharing(struct thin_c *tc, struct bio *bio, dm_block_t block,
struct dm_cell_key *key, struct dm_cell_key *key,
struct dm_thin_lookup_result *lookup_result, struct dm_thin_lookup_result *lookup_result,
...@@ -1255,11 +1398,53 @@ static void break_sharing(struct thin_c *tc, struct bio *bio, dm_block_t block, ...@@ -1255,11 +1398,53 @@ static void break_sharing(struct thin_c *tc, struct bio *bio, dm_block_t block,
} }
} }
static void __remap_and_issue_shared_cell(void *context,
struct dm_bio_prison_cell *cell)
{
struct remap_info *info = context;
struct bio *bio;
while ((bio = bio_list_pop(&cell->bios))) {
if ((bio_data_dir(bio) == WRITE) ||
(bio->bi_rw & (REQ_DISCARD | REQ_FLUSH | REQ_FUA)))
bio_list_add(&info->defer_bios, bio);
else {
struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook));
h->shared_read_entry = dm_deferred_entry_inc(info->tc->pool->shared_read_ds);
inc_all_io_entry(info->tc->pool, bio);
bio_list_add(&info->issue_bios, bio);
}
}
}
static void remap_and_issue_shared_cell(struct thin_c *tc,
struct dm_bio_prison_cell *cell,
dm_block_t block)
{
struct bio *bio;
struct remap_info info;
info.tc = tc;
bio_list_init(&info.defer_bios);
bio_list_init(&info.issue_bios);
cell_visit_release(tc->pool, __remap_and_issue_shared_cell,
&info, cell);
while ((bio = bio_list_pop(&info.defer_bios)))
thin_defer_bio(tc, bio);
while ((bio = bio_list_pop(&info.issue_bios)))
remap_and_issue(tc, bio, block);
}
static void process_shared_bio(struct thin_c *tc, struct bio *bio, static void process_shared_bio(struct thin_c *tc, struct bio *bio,
dm_block_t block, dm_block_t block,
struct dm_thin_lookup_result *lookup_result) struct dm_thin_lookup_result *lookup_result,
struct dm_bio_prison_cell *virt_cell)
{ {
struct dm_bio_prison_cell *cell; struct dm_bio_prison_cell *data_cell;
struct pool *pool = tc->pool; struct pool *pool = tc->pool;
struct dm_cell_key key; struct dm_cell_key key;
...@@ -1268,19 +1453,23 @@ static void process_shared_bio(struct thin_c *tc, struct bio *bio, ...@@ -1268,19 +1453,23 @@ static void process_shared_bio(struct thin_c *tc, struct bio *bio,
* of being broken so we have nothing further to do here. * of being broken so we have nothing further to do here.
*/ */
build_data_key(tc->td, lookup_result->block, &key); build_data_key(tc->td, lookup_result->block, &key);
if (bio_detain(pool, &key, bio, &cell)) if (bio_detain(pool, &key, bio, &data_cell)) {
cell_defer_no_holder(tc, virt_cell);
return; return;
}
if (bio_data_dir(bio) == WRITE && bio->bi_iter.bi_size) if (bio_data_dir(bio) == WRITE && bio->bi_iter.bi_size) {
break_sharing(tc, bio, block, &key, lookup_result, cell); break_sharing(tc, bio, block, &key, lookup_result, data_cell);
else { cell_defer_no_holder(tc, virt_cell);
} else {
struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook)); struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook));
h->shared_read_entry = dm_deferred_entry_inc(pool->shared_read_ds); h->shared_read_entry = dm_deferred_entry_inc(pool->shared_read_ds);
inc_all_io_entry(pool, bio); inc_all_io_entry(pool, bio);
cell_defer_no_holder(tc, cell);
remap_and_issue(tc, bio, lookup_result->block); remap_and_issue(tc, bio, lookup_result->block);
remap_and_issue_shared_cell(tc, data_cell, lookup_result->block);
remap_and_issue_shared_cell(tc, virt_cell, lookup_result->block);
} }
} }
...@@ -1333,34 +1522,28 @@ static void provision_block(struct thin_c *tc, struct bio *bio, dm_block_t block ...@@ -1333,34 +1522,28 @@ static void provision_block(struct thin_c *tc, struct bio *bio, dm_block_t block
} }
} }
static void process_bio(struct thin_c *tc, struct bio *bio) static void process_cell(struct thin_c *tc, struct dm_bio_prison_cell *cell)
{ {
int r; int r;
struct pool *pool = tc->pool; struct pool *pool = tc->pool;
struct bio *bio = cell->holder;
dm_block_t block = get_bio_block(tc, bio); dm_block_t block = get_bio_block(tc, bio);
struct dm_bio_prison_cell *cell;
struct dm_cell_key key;
struct dm_thin_lookup_result lookup_result; struct dm_thin_lookup_result lookup_result;
/* if (tc->requeue_mode) {
* If cell is already occupied, then the block is already cell_requeue(pool, cell);
* being provisioned so we have nothing further to do here.
*/
build_virtual_key(tc->td, block, &key);
if (bio_detain(pool, &key, bio, &cell))
return; return;
}
r = dm_thin_find_block(tc->td, block, 1, &lookup_result); r = dm_thin_find_block(tc->td, block, 1, &lookup_result);
switch (r) { switch (r) {
case 0: case 0:
if (lookup_result.shared) { if (lookup_result.shared)
process_shared_bio(tc, bio, block, &lookup_result); process_shared_bio(tc, bio, block, &lookup_result, cell);
cell_defer_no_holder(tc, cell); /* FIXME: pass this cell into process_shared? */ else {
} else {
inc_all_io_entry(pool, bio); inc_all_io_entry(pool, bio);
cell_defer_no_holder(tc, cell);
remap_and_issue(tc, bio, lookup_result.block); remap_and_issue(tc, bio, lookup_result.block);
inc_remap_and_issue_cell(tc, cell, lookup_result.block);
} }
break; break;
...@@ -1394,7 +1577,26 @@ static void process_bio(struct thin_c *tc, struct bio *bio) ...@@ -1394,7 +1577,26 @@ static void process_bio(struct thin_c *tc, struct bio *bio)
} }
} }
static void process_bio_read_only(struct thin_c *tc, struct bio *bio) static void process_bio(struct thin_c *tc, struct bio *bio)
{
struct pool *pool = tc->pool;
dm_block_t block = get_bio_block(tc, bio);
struct dm_bio_prison_cell *cell;
struct dm_cell_key key;
/*
* If cell is already occupied, then the block is already
* being provisioned so we have nothing further to do here.
*/
build_virtual_key(tc->td, block, &key);
if (bio_detain(pool, &key, bio, &cell))
return;
process_cell(tc, cell);
}
static void __process_bio_read_only(struct thin_c *tc, struct bio *bio,
struct dm_bio_prison_cell *cell)
{ {
int r; int r;
int rw = bio_data_dir(bio); int rw = bio_data_dir(bio);
...@@ -1404,15 +1606,21 @@ static void process_bio_read_only(struct thin_c *tc, struct bio *bio) ...@@ -1404,15 +1606,21 @@ static void process_bio_read_only(struct thin_c *tc, struct bio *bio)
r = dm_thin_find_block(tc->td, block, 1, &lookup_result); r = dm_thin_find_block(tc->td, block, 1, &lookup_result);
switch (r) { switch (r) {
case 0: case 0:
if (lookup_result.shared && (rw == WRITE) && bio->bi_iter.bi_size) if (lookup_result.shared && (rw == WRITE) && bio->bi_iter.bi_size) {
handle_unserviceable_bio(tc->pool, bio); handle_unserviceable_bio(tc->pool, bio);
else { if (cell)
cell_defer_no_holder(tc, cell);
} else {
inc_all_io_entry(tc->pool, bio); inc_all_io_entry(tc->pool, bio);
remap_and_issue(tc, bio, lookup_result.block); remap_and_issue(tc, bio, lookup_result.block);
if (cell)
inc_remap_and_issue_cell(tc, cell, lookup_result.block);
} }
break; break;
case -ENODATA: case -ENODATA:
if (cell)
cell_defer_no_holder(tc, cell);
if (rw != READ) { if (rw != READ) {
handle_unserviceable_bio(tc->pool, bio); handle_unserviceable_bio(tc->pool, bio);
break; break;
...@@ -1431,11 +1639,23 @@ static void process_bio_read_only(struct thin_c *tc, struct bio *bio) ...@@ -1431,11 +1639,23 @@ static void process_bio_read_only(struct thin_c *tc, struct bio *bio)
default: default:
DMERR_LIMIT("%s: dm_thin_find_block() failed: error = %d", DMERR_LIMIT("%s: dm_thin_find_block() failed: error = %d",
__func__, r); __func__, r);
if (cell)
cell_defer_no_holder(tc, cell);
bio_io_error(bio); bio_io_error(bio);
break; break;
} }
} }
static void process_bio_read_only(struct thin_c *tc, struct bio *bio)
{
__process_bio_read_only(tc, bio, NULL);
}
static void process_cell_read_only(struct thin_c *tc, struct dm_bio_prison_cell *cell)
{
__process_bio_read_only(tc, cell->holder, cell);
}
static void process_bio_success(struct thin_c *tc, struct bio *bio) static void process_bio_success(struct thin_c *tc, struct bio *bio)
{ {
bio_endio(bio, 0); bio_endio(bio, 0);
...@@ -1446,6 +1666,16 @@ static void process_bio_fail(struct thin_c *tc, struct bio *bio) ...@@ -1446,6 +1666,16 @@ static void process_bio_fail(struct thin_c *tc, struct bio *bio)
bio_io_error(bio); bio_io_error(bio);
} }
static void process_cell_success(struct thin_c *tc, struct dm_bio_prison_cell *cell)
{
cell_success(tc->pool, cell);
}
static void process_cell_fail(struct thin_c *tc, struct dm_bio_prison_cell *cell)
{
cell_error(tc->pool, cell);
}
/* /*
* FIXME: should we also commit due to size of transaction, measured in * FIXME: should we also commit due to size of transaction, measured in
* metadata blocks? * metadata blocks?
...@@ -1527,9 +1757,10 @@ static void process_thin_deferred_bios(struct thin_c *tc) ...@@ -1527,9 +1757,10 @@ static void process_thin_deferred_bios(struct thin_c *tc)
struct bio *bio; struct bio *bio;
struct bio_list bios; struct bio_list bios;
struct blk_plug plug; struct blk_plug plug;
unsigned count = 0;
if (tc->requeue_mode) { if (tc->requeue_mode) {
requeue_bio_list(tc, &tc->deferred_bio_list); error_thin_bio_list(tc, &tc->deferred_bio_list, DM_ENDIO_REQUEUE);
return; return;
} }
...@@ -1568,10 +1799,97 @@ static void process_thin_deferred_bios(struct thin_c *tc) ...@@ -1568,10 +1799,97 @@ static void process_thin_deferred_bios(struct thin_c *tc)
pool->process_discard(tc, bio); pool->process_discard(tc, bio);
else else
pool->process_bio(tc, bio); pool->process_bio(tc, bio);
if ((count++ & 127) == 0) {
throttle_work_update(&pool->throttle);
dm_pool_issue_prefetches(pool->pmd);
}
} }
blk_finish_plug(&plug); blk_finish_plug(&plug);
} }
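
The deferred-bio loop above now kicks throttle_work_update() and dm_pool_issue_prefetches() once every 128 bios, using a power-of-two mask rather than a modulo. A minimal standalone illustration of that batching check (plain userspace C; do_expensive_work() is a hypothetical stand-in for the two kernel calls):

/* Sketch: run cheap per-item work every iteration, and expensive
 * "housekeeping" only once every 128 items, using (count & 127),
 * which for a power-of-two period is equivalent to (count % 128). */
#include <stdio.h>

static void do_expensive_work(unsigned item)   /* hypothetical stand-in */
{
	printf("housekeeping before item %u\n", item);
}

int main(void)
{
	unsigned count = 0;

	for (unsigned i = 0; i < 1000; i++) {
		/* ... cheap per-item work would go here ... */
		if ((count++ & 127) == 0)
			do_expensive_work(i);
	}
	return 0;
}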
static int cmp_cells(const void *lhs, const void *rhs)
{
struct dm_bio_prison_cell *lhs_cell = *((struct dm_bio_prison_cell **) lhs);
struct dm_bio_prison_cell *rhs_cell = *((struct dm_bio_prison_cell **) rhs);
BUG_ON(!lhs_cell->holder);
BUG_ON(!rhs_cell->holder);
if (lhs_cell->holder->bi_iter.bi_sector < rhs_cell->holder->bi_iter.bi_sector)
return -1;
if (lhs_cell->holder->bi_iter.bi_sector > rhs_cell->holder->bi_iter.bi_sector)
return 1;
return 0;
}
static unsigned sort_cells(struct pool *pool, struct list_head *cells)
{
unsigned count = 0;
struct dm_bio_prison_cell *cell, *tmp;
list_for_each_entry_safe(cell, tmp, cells, user_list) {
if (count >= CELL_SORT_ARRAY_SIZE)
break;
pool->cell_sort_array[count++] = cell;
list_del(&cell->user_list);
}
sort(pool->cell_sort_array, count, sizeof(cell), cmp_cells, NULL);
return count;
}
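
sort_cells() copies up to CELL_SORT_ARRAY_SIZE deferred cells into the per-pool array and sorts them by the holder bio's start sector, so each batch is issued in roughly ascending disk order. A minimal userspace analogue of the comparator pattern, assuming a hypothetical struct cell with just a sector field:

/* Sketch: sort an array of cell pointers by start sector, mirroring
 * cmp_cells()/sort_cells() (names and types are illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct cell { uint64_t sector; };

static int cmp_cells(const void *lhs, const void *rhs)
{
	struct cell *lhs_cell = *((struct cell **) lhs);
	struct cell *rhs_cell = *((struct cell **) rhs);

	if (lhs_cell->sector < rhs_cell->sector)
		return -1;
	if (lhs_cell->sector > rhs_cell->sector)
		return 1;
	return 0;
}

int main(void)
{
	struct cell a = {4096}, b = {8}, c = {1024};
	struct cell *batch[] = {&a, &b, &c};

	qsort(batch, 3, sizeof(batch[0]), cmp_cells);   /* kernel code uses sort() */

	for (int i = 0; i < 3; i++)
		printf("%llu\n", (unsigned long long)batch[i]->sector);
	return 0;
}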
static void process_thin_deferred_cells(struct thin_c *tc)
{
struct pool *pool = tc->pool;
unsigned long flags;
struct list_head cells;
struct dm_bio_prison_cell *cell;
unsigned i, j, count;
INIT_LIST_HEAD(&cells);
spin_lock_irqsave(&tc->lock, flags);
list_splice_init(&tc->deferred_cells, &cells);
spin_unlock_irqrestore(&tc->lock, flags);
if (list_empty(&cells))
return;
do {
count = sort_cells(tc->pool, &cells);
for (i = 0; i < count; i++) {
cell = pool->cell_sort_array[i];
BUG_ON(!cell->holder);
/*
* If we've got no free new_mapping structs, and processing
* this bio might require one, we pause until there are some
* prepared mappings to process.
*/
if (ensure_next_mapping(pool)) {
for (j = i; j < count; j++)
list_add(&pool->cell_sort_array[j]->user_list, &cells);
spin_lock_irqsave(&tc->lock, flags);
list_splice(&cells, &tc->deferred_cells);
spin_unlock_irqrestore(&tc->lock, flags);
return;
}
if (cell->holder->bi_rw & REQ_DISCARD)
pool->process_discard_cell(tc, cell);
else
pool->process_cell(tc, cell);
}
} while (!list_empty(&cells));
}
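
process_thin_deferred_cells() drains tc->deferred_cells in bounded, sorted batches; if ensure_next_mapping() cannot supply a free mapping struct mid-batch, the unprocessed remainder is pushed back onto the deferred list so the worker can retry once prepared mappings free up. A hedged userspace sketch of that drain/push-back shape, using a plain array in place of the list:

/* Sketch: process a work queue in bounded batches, and if a resource
 * runs out mid-batch, leave the remainder queued for a later pass
 * (mirrors the ensure_next_mapping() bail-out above). */
#include <stdio.h>
#include <stdbool.h>

#define BATCH 4

static int budget = 6;                 /* pretend resource pool */

static bool ensure_resource(void)      /* stand-in for ensure_next_mapping() */
{
	if (budget == 0)
		return false;
	budget--;
	return true;
}

int main(void)
{
	int pending[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
	int head = 0, tail = 10;

	while (head < tail) {
		int end = head + BATCH < tail ? head + BATCH : tail;

		for (int i = head; i < end; i++) {
			if (!ensure_resource()) {
				printf("out of resources, %d items left queued\n",
				       tail - i);
				return 0;   /* worker would be re-woken later */
			}
			printf("processed item %d\n", pending[i]);
		}
		head = end;
	}
	return 0;
}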
static void thin_get(struct thin_c *tc); static void thin_get(struct thin_c *tc);
static void thin_put(struct thin_c *tc); static void thin_put(struct thin_c *tc);
...@@ -1620,6 +1938,7 @@ static void process_deferred_bios(struct pool *pool) ...@@ -1620,6 +1938,7 @@ static void process_deferred_bios(struct pool *pool)
tc = get_first_thin(pool); tc = get_first_thin(pool);
while (tc) { while (tc) {
process_thin_deferred_cells(tc);
process_thin_deferred_bios(tc); process_thin_deferred_bios(tc);
tc = get_next_thin(pool, tc); tc = get_next_thin(pool, tc);
} }
...@@ -1653,9 +1972,15 @@ static void do_worker(struct work_struct *ws) ...@@ -1653,9 +1972,15 @@ static void do_worker(struct work_struct *ws)
{ {
struct pool *pool = container_of(ws, struct pool, worker); struct pool *pool = container_of(ws, struct pool, worker);
throttle_work_start(&pool->throttle);
dm_pool_issue_prefetches(pool->pmd);
throttle_work_update(&pool->throttle);
process_prepared(pool, &pool->prepared_mappings, &pool->process_prepared_mapping); process_prepared(pool, &pool->prepared_mappings, &pool->process_prepared_mapping);
throttle_work_update(&pool->throttle);
process_prepared(pool, &pool->prepared_discards, &pool->process_prepared_discard); process_prepared(pool, &pool->prepared_discards, &pool->process_prepared_discard);
throttle_work_update(&pool->throttle);
process_deferred_bios(pool); process_deferred_bios(pool);
throttle_work_complete(&pool->throttle);
} }
/* /*
...@@ -1792,6 +2117,8 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode) ...@@ -1792,6 +2117,8 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
dm_pool_metadata_read_only(pool->pmd); dm_pool_metadata_read_only(pool->pmd);
pool->process_bio = process_bio_fail; pool->process_bio = process_bio_fail;
pool->process_discard = process_bio_fail; pool->process_discard = process_bio_fail;
pool->process_cell = process_cell_fail;
pool->process_discard_cell = process_cell_fail;
pool->process_prepared_mapping = process_prepared_mapping_fail; pool->process_prepared_mapping = process_prepared_mapping_fail;
pool->process_prepared_discard = process_prepared_discard_fail; pool->process_prepared_discard = process_prepared_discard_fail;
...@@ -1804,6 +2131,8 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode) ...@@ -1804,6 +2131,8 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
dm_pool_metadata_read_only(pool->pmd); dm_pool_metadata_read_only(pool->pmd);
pool->process_bio = process_bio_read_only; pool->process_bio = process_bio_read_only;
pool->process_discard = process_bio_success; pool->process_discard = process_bio_success;
pool->process_cell = process_cell_read_only;
pool->process_discard_cell = process_cell_success;
pool->process_prepared_mapping = process_prepared_mapping_fail; pool->process_prepared_mapping = process_prepared_mapping_fail;
pool->process_prepared_discard = process_prepared_discard_passdown; pool->process_prepared_discard = process_prepared_discard_passdown;
...@@ -1822,7 +2151,9 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode) ...@@ -1822,7 +2151,9 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
if (old_mode != new_mode) if (old_mode != new_mode)
notify_of_pool_mode_change(pool, "out-of-data-space"); notify_of_pool_mode_change(pool, "out-of-data-space");
pool->process_bio = process_bio_read_only; pool->process_bio = process_bio_read_only;
pool->process_discard = process_discard; pool->process_discard = process_discard_bio;
pool->process_cell = process_cell_read_only;
pool->process_discard_cell = process_discard_cell;
pool->process_prepared_mapping = process_prepared_mapping; pool->process_prepared_mapping = process_prepared_mapping;
pool->process_prepared_discard = process_prepared_discard_passdown; pool->process_prepared_discard = process_prepared_discard_passdown;
...@@ -1835,7 +2166,9 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode) ...@@ -1835,7 +2166,9 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode)
notify_of_pool_mode_change(pool, "write"); notify_of_pool_mode_change(pool, "write");
dm_pool_metadata_read_write(pool->pmd); dm_pool_metadata_read_write(pool->pmd);
pool->process_bio = process_bio; pool->process_bio = process_bio;
pool->process_discard = process_discard; pool->process_discard = process_discard_bio;
pool->process_cell = process_cell;
pool->process_discard_cell = process_discard_cell;
pool->process_prepared_mapping = process_prepared_mapping; pool->process_prepared_mapping = process_prepared_mapping;
pool->process_prepared_discard = process_prepared_discard; pool->process_prepared_discard = process_prepared_discard;
break; break;
...@@ -1895,6 +2228,29 @@ static void thin_defer_bio(struct thin_c *tc, struct bio *bio) ...@@ -1895,6 +2228,29 @@ static void thin_defer_bio(struct thin_c *tc, struct bio *bio)
wake_worker(pool); wake_worker(pool);
} }
static void thin_defer_bio_with_throttle(struct thin_c *tc, struct bio *bio)
{
struct pool *pool = tc->pool;
throttle_lock(&pool->throttle);
thin_defer_bio(tc, bio);
throttle_unlock(&pool->throttle);
}
static void thin_defer_cell(struct thin_c *tc, struct dm_bio_prison_cell *cell)
{
unsigned long flags;
struct pool *pool = tc->pool;
throttle_lock(&pool->throttle);
spin_lock_irqsave(&tc->lock, flags);
list_add_tail(&cell->user_list, &tc->deferred_cells);
spin_unlock_irqrestore(&tc->lock, flags);
throttle_unlock(&pool->throttle);
wake_worker(pool);
}
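
thin_defer_cell() appends the cell to the per-thin deferred list under tc->lock (after taking the pool throttle) and then wakes the single pool worker. The basic defer-and-wake shape, sketched in userspace with pthreads; the throttle is omitted and all names are illustrative, not the dm API (compile with cc -pthread):

/* Sketch: defer work items onto a locked list and wake one worker
 * thread that splices and drains them. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct item { int id; struct item *next; };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wake = PTHREAD_COND_INITIALIZER;
static struct item *deferred;
static int done;

static void defer_item(int id)                /* ~ thin_defer_cell() */
{
	struct item *it = malloc(sizeof(*it));

	it->id = id;
	pthread_mutex_lock(&lock);
	it->next = deferred;
	deferred = it;
	pthread_mutex_unlock(&lock);
	pthread_cond_signal(&wake);           /* ~ wake_worker() */
}

static void *worker(void *arg)                /* ~ do_worker() */
{
	for (;;) {
		pthread_mutex_lock(&lock);
		while (!deferred && !done)
			pthread_cond_wait(&wake, &lock);
		struct item *batch = deferred;    /* splice the whole list */
		deferred = NULL;
		int stop = done;
		pthread_mutex_unlock(&lock);

		while (batch) {
			struct item *next = batch->next;
			printf("worker handling item %d\n", batch->id);
			free(batch);
			batch = next;
		}
		if (stop)
			return NULL;
	}
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	for (int i = 0; i < 5; i++)
		defer_item(i);

	pthread_mutex_lock(&lock);
	done = 1;
	pthread_mutex_unlock(&lock);
	pthread_cond_signal(&wake);
	pthread_join(t, NULL);
	return 0;
}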
static void thin_hook_bio(struct thin_c *tc, struct bio *bio) static void thin_hook_bio(struct thin_c *tc, struct bio *bio)
{ {
struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook)); struct dm_thin_endio_hook *h = dm_per_bio_data(bio, sizeof(struct dm_thin_endio_hook));
...@@ -1915,8 +2271,7 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio) ...@@ -1915,8 +2271,7 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio)
dm_block_t block = get_bio_block(tc, bio); dm_block_t block = get_bio_block(tc, bio);
struct dm_thin_device *td = tc->td; struct dm_thin_device *td = tc->td;
struct dm_thin_lookup_result result; struct dm_thin_lookup_result result;
struct dm_bio_prison_cell cell1, cell2; struct dm_bio_prison_cell *virt_cell, *data_cell;
struct dm_bio_prison_cell *cell_result;
struct dm_cell_key key; struct dm_cell_key key;
thin_hook_bio(tc, bio); thin_hook_bio(tc, bio);
...@@ -1932,7 +2287,7 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio) ...@@ -1932,7 +2287,7 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio)
} }
if (bio->bi_rw & (REQ_DISCARD | REQ_FLUSH | REQ_FUA)) { if (bio->bi_rw & (REQ_DISCARD | REQ_FLUSH | REQ_FUA)) {
thin_defer_bio(tc, bio); thin_defer_bio_with_throttle(tc, bio);
return DM_MAPIO_SUBMITTED; return DM_MAPIO_SUBMITTED;
} }
...@@ -1941,7 +2296,7 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio) ...@@ -1941,7 +2296,7 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio)
* there's a race with discard. * there's a race with discard.
*/ */
build_virtual_key(tc->td, block, &key); build_virtual_key(tc->td, block, &key);
if (dm_bio_detain(tc->pool->prison, &key, bio, &cell1, &cell_result)) if (bio_detain(tc->pool, &key, bio, &virt_cell))
return DM_MAPIO_SUBMITTED; return DM_MAPIO_SUBMITTED;
r = dm_thin_find_block(td, block, 0, &result); r = dm_thin_find_block(td, block, 0, &result);
...@@ -1966,20 +2321,19 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio) ...@@ -1966,20 +2321,19 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio)
* More distant ancestors are irrelevant. The * More distant ancestors are irrelevant. The
* shared flag will be set in their case. * shared flag will be set in their case.
*/ */
thin_defer_bio(tc, bio); thin_defer_cell(tc, virt_cell);
cell_defer_no_holder_no_free(tc, &cell1);
return DM_MAPIO_SUBMITTED; return DM_MAPIO_SUBMITTED;
} }
build_data_key(tc->td, result.block, &key); build_data_key(tc->td, result.block, &key);
if (dm_bio_detain(tc->pool->prison, &key, bio, &cell2, &cell_result)) { if (bio_detain(tc->pool, &key, bio, &data_cell)) {
cell_defer_no_holder_no_free(tc, &cell1); cell_defer_no_holder(tc, virt_cell);
return DM_MAPIO_SUBMITTED; return DM_MAPIO_SUBMITTED;
} }
inc_all_io_entry(tc->pool, bio); inc_all_io_entry(tc->pool, bio);
cell_defer_no_holder_no_free(tc, &cell2); cell_defer_no_holder(tc, data_cell);
cell_defer_no_holder_no_free(tc, &cell1); cell_defer_no_holder(tc, virt_cell);
remap(tc, bio, result.block); remap(tc, bio, result.block);
return DM_MAPIO_REMAPPED; return DM_MAPIO_REMAPPED;
...@@ -1991,18 +2345,13 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio) ...@@ -1991,18 +2345,13 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio)
* of doing so. * of doing so.
*/ */
handle_unserviceable_bio(tc->pool, bio); handle_unserviceable_bio(tc->pool, bio);
cell_defer_no_holder_no_free(tc, &cell1); cell_defer_no_holder(tc, virt_cell);
return DM_MAPIO_SUBMITTED; return DM_MAPIO_SUBMITTED;
} }
/* fall through */ /* fall through */
case -EWOULDBLOCK: case -EWOULDBLOCK:
/* thin_defer_cell(tc, virt_cell);
* In future, the failed dm_thin_find_block above could
* provide the hint to load the metadata into cache.
*/
thin_defer_bio(tc, bio);
cell_defer_no_holder_no_free(tc, &cell1);
return DM_MAPIO_SUBMITTED; return DM_MAPIO_SUBMITTED;
default: default:
...@@ -2012,7 +2361,7 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio) ...@@ -2012,7 +2361,7 @@ static int thin_bio_map(struct dm_target *ti, struct bio *bio)
* pool is switched to fail-io mode. * pool is switched to fail-io mode.
*/ */
bio_io_error(bio); bio_io_error(bio);
cell_defer_no_holder_no_free(tc, &cell1); cell_defer_no_holder(tc, virt_cell);
return DM_MAPIO_SUBMITTED; return DM_MAPIO_SUBMITTED;
} }
} }
...@@ -2193,7 +2542,7 @@ static struct pool *pool_create(struct mapped_device *pool_md, ...@@ -2193,7 +2542,7 @@ static struct pool *pool_create(struct mapped_device *pool_md,
pool->sectors_per_block_shift = __ffs(block_size); pool->sectors_per_block_shift = __ffs(block_size);
pool->low_water_blocks = 0; pool->low_water_blocks = 0;
pool_features_init(&pool->pf); pool_features_init(&pool->pf);
pool->prison = dm_bio_prison_create(PRISON_CELLS); pool->prison = dm_bio_prison_create();
if (!pool->prison) { if (!pool->prison) {
*error = "Error creating pool's bio prison"; *error = "Error creating pool's bio prison";
err_p = ERR_PTR(-ENOMEM); err_p = ERR_PTR(-ENOMEM);
...@@ -2219,6 +2568,7 @@ static struct pool *pool_create(struct mapped_device *pool_md, ...@@ -2219,6 +2568,7 @@ static struct pool *pool_create(struct mapped_device *pool_md,
goto bad_wq; goto bad_wq;
} }
throttle_init(&pool->throttle);
INIT_WORK(&pool->worker, do_worker); INIT_WORK(&pool->worker, do_worker);
INIT_DELAYED_WORK(&pool->waker, do_waker); INIT_DELAYED_WORK(&pool->waker, do_waker);
INIT_DELAYED_WORK(&pool->no_space_timeout, do_no_space_timeout); INIT_DELAYED_WORK(&pool->no_space_timeout, do_no_space_timeout);
...@@ -2228,6 +2578,7 @@ static struct pool *pool_create(struct mapped_device *pool_md, ...@@ -2228,6 +2578,7 @@ static struct pool *pool_create(struct mapped_device *pool_md,
INIT_LIST_HEAD(&pool->prepared_discards); INIT_LIST_HEAD(&pool->prepared_discards);
INIT_LIST_HEAD(&pool->active_thins); INIT_LIST_HEAD(&pool->active_thins);
pool->low_water_triggered = false; pool->low_water_triggered = false;
pool->suspended = true;
pool->shared_read_ds = dm_deferred_set_create(); pool->shared_read_ds = dm_deferred_set_create();
if (!pool->shared_read_ds) { if (!pool->shared_read_ds) {
...@@ -2764,20 +3115,77 @@ static int pool_preresume(struct dm_target *ti) ...@@ -2764,20 +3115,77 @@ static int pool_preresume(struct dm_target *ti)
return 0; return 0;
} }
static void pool_suspend_active_thins(struct pool *pool)
{
struct thin_c *tc;
/* Suspend all active thin devices */
tc = get_first_thin(pool);
while (tc) {
dm_internal_suspend_noflush(tc->thin_md);
tc = get_next_thin(pool, tc);
}
}
static void pool_resume_active_thins(struct pool *pool)
{
struct thin_c *tc;
/* Resume all active thin devices */
tc = get_first_thin(pool);
while (tc) {
dm_internal_resume(tc->thin_md);
tc = get_next_thin(pool, tc);
}
}
static void pool_resume(struct dm_target *ti) static void pool_resume(struct dm_target *ti)
{ {
struct pool_c *pt = ti->private; struct pool_c *pt = ti->private;
struct pool *pool = pt->pool; struct pool *pool = pt->pool;
unsigned long flags; unsigned long flags;
/*
* Must requeue active_thins' bios and then resume
* active_thins _before_ clearing 'suspend' flag.
*/
requeue_bios(pool);
pool_resume_active_thins(pool);
spin_lock_irqsave(&pool->lock, flags); spin_lock_irqsave(&pool->lock, flags);
pool->low_water_triggered = false; pool->low_water_triggered = false;
pool->suspended = false;
spin_unlock_irqrestore(&pool->lock, flags); spin_unlock_irqrestore(&pool->lock, flags);
requeue_bios(pool);
do_waker(&pool->waker.work); do_waker(&pool->waker.work);
} }
static void pool_presuspend(struct dm_target *ti)
{
struct pool_c *pt = ti->private;
struct pool *pool = pt->pool;
unsigned long flags;
spin_lock_irqsave(&pool->lock, flags);
pool->suspended = true;
spin_unlock_irqrestore(&pool->lock, flags);
pool_suspend_active_thins(pool);
}
static void pool_presuspend_undo(struct dm_target *ti)
{
struct pool_c *pt = ti->private;
struct pool *pool = pt->pool;
unsigned long flags;
pool_resume_active_thins(pool);
spin_lock_irqsave(&pool->lock, flags);
pool->suspended = false;
spin_unlock_irqrestore(&pool->lock, flags);
}
static void pool_postsuspend(struct dm_target *ti) static void pool_postsuspend(struct dm_target *ti)
{ {
struct pool_c *pt = ti->private; struct pool_c *pt = ti->private;
...@@ -2949,7 +3357,6 @@ static int process_release_metadata_snap_mesg(unsigned argc, char **argv, struct ...@@ -2949,7 +3357,6 @@ static int process_release_metadata_snap_mesg(unsigned argc, char **argv, struct
* create_thin <dev_id> * create_thin <dev_id>
* create_snap <dev_id> <origin_id> * create_snap <dev_id> <origin_id>
* delete <dev_id> * delete <dev_id>
* trim <dev_id> <new_size_in_sectors>
* set_transaction_id <current_trans_id> <new_trans_id> * set_transaction_id <current_trans_id> <new_trans_id>
* reserve_metadata_snap * reserve_metadata_snap
* release_metadata_snap * release_metadata_snap
...@@ -3177,15 +3584,35 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits) ...@@ -3177,15 +3584,35 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
{ {
struct pool_c *pt = ti->private; struct pool_c *pt = ti->private;
struct pool *pool = pt->pool; struct pool *pool = pt->pool;
uint64_t io_opt_sectors = limits->io_opt >> SECTOR_SHIFT; sector_t io_opt_sectors = limits->io_opt >> SECTOR_SHIFT;
/*
* If max_sectors is smaller than pool->sectors_per_block adjust it
* to the highest possible power-of-2 factor of pool->sectors_per_block.
* This is especially beneficial when the pool's data device is a RAID
* device that has a full stripe width that matches pool->sectors_per_block
* -- because even though partial RAID stripe-sized IOs will be issued to a
* single RAID stripe, when aggregated they will end on a full RAID stripe
* boundary, which avoids additional partial RAID stripe writes cascading.
*/
if (limits->max_sectors < pool->sectors_per_block) {
while (!is_factor(pool->sectors_per_block, limits->max_sectors)) {
if ((limits->max_sectors & (limits->max_sectors - 1)) == 0)
limits->max_sectors--;
limits->max_sectors = rounddown_pow_of_two(limits->max_sectors);
}
}
/* /*
* If the system-determined stacked limits are compatible with the * If the system-determined stacked limits are compatible with the
* pool's blocksize (io_opt is a factor) do not override them. * pool's blocksize (io_opt is a factor) do not override them.
*/ */
if (io_opt_sectors < pool->sectors_per_block || if (io_opt_sectors < pool->sectors_per_block ||
do_div(io_opt_sectors, pool->sectors_per_block)) { !is_factor(io_opt_sectors, pool->sectors_per_block)) {
blk_limits_io_min(limits, pool->sectors_per_block << SECTOR_SHIFT); if (is_factor(pool->sectors_per_block, limits->max_sectors))
blk_limits_io_min(limits, limits->max_sectors << SECTOR_SHIFT);
else
blk_limits_io_min(limits, pool->sectors_per_block << SECTOR_SHIFT);
blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT); blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
} }
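
Worked example of the max_sectors adjustment above: when the stacked limit is smaller than the thinp block size, it is shrunk until it divides the block size evenly, preferring powers of two. The helpers below are simplified userspace stand-ins for the kernel's is_factor() and rounddown_pow_of_two(), and the sample sizes (384-sector block, 256-sector limit) are purely illustrative:

/* Sketch: adjust a max_sectors limit so it becomes a factor of the
 * thinp block size, mirroring the loop in pool_io_hints(). */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static bool is_factor(uint64_t block_size, uint32_t n)
{
	return n && (block_size % n) == 0;
}

static uint32_t rounddown_pow_of_two(uint32_t n)
{
	uint32_t p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

int main(void)
{
	uint64_t sectors_per_block = 384;   /* e.g. a 192 KiB thinp block */
	uint32_t max_sectors = 256;         /* stacked limit, 128 KiB     */

	if (max_sectors < sectors_per_block) {
		while (!is_factor(sectors_per_block, max_sectors)) {
			if ((max_sectors & (max_sectors - 1)) == 0)
				max_sectors--;
			max_sectors = rounddown_pow_of_two(max_sectors);
		}
	}
	printf("adjusted max_sectors = %u\n", max_sectors);   /* prints 128 */
	return 0;
}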
...@@ -3214,11 +3641,13 @@ static struct target_type pool_target = { ...@@ -3214,11 +3641,13 @@ static struct target_type pool_target = {
.name = "thin-pool", .name = "thin-pool",
.features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE | .features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE |
DM_TARGET_IMMUTABLE, DM_TARGET_IMMUTABLE,
.version = {1, 13, 0}, .version = {1, 14, 0},
.module = THIS_MODULE, .module = THIS_MODULE,
.ctr = pool_ctr, .ctr = pool_ctr,
.dtr = pool_dtr, .dtr = pool_dtr,
.map = pool_map, .map = pool_map,
.presuspend = pool_presuspend,
.presuspend_undo = pool_presuspend_undo,
.postsuspend = pool_postsuspend, .postsuspend = pool_postsuspend,
.preresume = pool_preresume, .preresume = pool_preresume,
.resume = pool_resume, .resume = pool_resume,
...@@ -3248,14 +3677,14 @@ static void thin_dtr(struct dm_target *ti) ...@@ -3248,14 +3677,14 @@ static void thin_dtr(struct dm_target *ti)
struct thin_c *tc = ti->private; struct thin_c *tc = ti->private;
unsigned long flags; unsigned long flags;
thin_put(tc);
wait_for_completion(&tc->can_destroy);
spin_lock_irqsave(&tc->pool->lock, flags); spin_lock_irqsave(&tc->pool->lock, flags);
list_del_rcu(&tc->list); list_del_rcu(&tc->list);
spin_unlock_irqrestore(&tc->pool->lock, flags); spin_unlock_irqrestore(&tc->pool->lock, flags);
synchronize_rcu(); synchronize_rcu();
thin_put(tc);
wait_for_completion(&tc->can_destroy);
mutex_lock(&dm_thin_pool_table.mutex); mutex_lock(&dm_thin_pool_table.mutex);
__pool_dec(tc->pool); __pool_dec(tc->pool);
...@@ -3302,7 +3731,9 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv) ...@@ -3302,7 +3731,9 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
r = -ENOMEM; r = -ENOMEM;
goto out_unlock; goto out_unlock;
} }
tc->thin_md = dm_table_get_md(ti->table);
spin_lock_init(&tc->lock); spin_lock_init(&tc->lock);
INIT_LIST_HEAD(&tc->deferred_cells);
bio_list_init(&tc->deferred_bio_list); bio_list_init(&tc->deferred_bio_list);
bio_list_init(&tc->retry_on_resume_list); bio_list_init(&tc->retry_on_resume_list);
tc->sort_bio_list = RB_ROOT; tc->sort_bio_list = RB_ROOT;
...@@ -3347,18 +3778,18 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv) ...@@ -3347,18 +3778,18 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
if (get_pool_mode(tc->pool) == PM_FAIL) { if (get_pool_mode(tc->pool) == PM_FAIL) {
ti->error = "Couldn't open thin device, Pool is in fail mode"; ti->error = "Couldn't open thin device, Pool is in fail mode";
r = -EINVAL; r = -EINVAL;
goto bad_thin_open; goto bad_pool;
} }
r = dm_pool_open_thin_device(tc->pool->pmd, tc->dev_id, &tc->td); r = dm_pool_open_thin_device(tc->pool->pmd, tc->dev_id, &tc->td);
if (r) { if (r) {
ti->error = "Couldn't open thin internal device"; ti->error = "Couldn't open thin internal device";
goto bad_thin_open; goto bad_pool;
} }
r = dm_set_target_max_io_len(ti, tc->pool->sectors_per_block); r = dm_set_target_max_io_len(ti, tc->pool->sectors_per_block);
if (r) if (r)
goto bad_target_max_io_len; goto bad;
ti->num_flush_bios = 1; ti->num_flush_bios = 1;
ti->flush_supported = true; ti->flush_supported = true;
...@@ -3373,14 +3804,16 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv) ...@@ -3373,14 +3804,16 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
ti->split_discard_bios = true; ti->split_discard_bios = true;
} }
dm_put(pool_md);
mutex_unlock(&dm_thin_pool_table.mutex); mutex_unlock(&dm_thin_pool_table.mutex);
atomic_set(&tc->refcount, 1);
init_completion(&tc->can_destroy);
spin_lock_irqsave(&tc->pool->lock, flags); spin_lock_irqsave(&tc->pool->lock, flags);
if (tc->pool->suspended) {
spin_unlock_irqrestore(&tc->pool->lock, flags);
mutex_lock(&dm_thin_pool_table.mutex); /* reacquire for __pool_dec */
ti->error = "Unable to activate thin device while pool is suspended";
r = -EINVAL;
goto bad;
}
list_add_tail_rcu(&tc->list, &tc->pool->active_thins); list_add_tail_rcu(&tc->list, &tc->pool->active_thins);
spin_unlock_irqrestore(&tc->pool->lock, flags); spin_unlock_irqrestore(&tc->pool->lock, flags);
/* /*
...@@ -3391,11 +3824,16 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv) ...@@ -3391,11 +3824,16 @@ static int thin_ctr(struct dm_target *ti, unsigned argc, char **argv)
*/ */
synchronize_rcu(); synchronize_rcu();
dm_put(pool_md);
atomic_set(&tc->refcount, 1);
init_completion(&tc->can_destroy);
return 0; return 0;
bad_target_max_io_len: bad:
dm_pool_close_thin_device(tc->td); dm_pool_close_thin_device(tc->td);
bad_thin_open: bad_pool:
__pool_dec(tc->pool); __pool_dec(tc->pool);
bad_pool_lookup: bad_pool_lookup:
dm_put(pool_md); dm_put(pool_md);
...@@ -3541,6 +3979,21 @@ static void thin_status(struct dm_target *ti, status_type_t type, ...@@ -3541,6 +3979,21 @@ static void thin_status(struct dm_target *ti, status_type_t type,
DMEMIT("Error"); DMEMIT("Error");
} }
static int thin_merge(struct dm_target *ti, struct bvec_merge_data *bvm,
struct bio_vec *biovec, int max_size)
{
struct thin_c *tc = ti->private;
struct request_queue *q = bdev_get_queue(tc->pool_dev->bdev);
if (!q->merge_bvec_fn)
return max_size;
bvm->bi_bdev = tc->pool_dev->bdev;
bvm->bi_sector = dm_target_offset(ti, bvm->bi_sector);
return min(max_size, q->merge_bvec_fn(q, bvm, biovec));
}
static int thin_iterate_devices(struct dm_target *ti, static int thin_iterate_devices(struct dm_target *ti,
iterate_devices_callout_fn fn, void *data) iterate_devices_callout_fn fn, void *data)
{ {
...@@ -3565,7 +4018,7 @@ static int thin_iterate_devices(struct dm_target *ti, ...@@ -3565,7 +4018,7 @@ static int thin_iterate_devices(struct dm_target *ti,
static struct target_type thin_target = { static struct target_type thin_target = {
.name = "thin", .name = "thin",
.version = {1, 13, 0}, .version = {1, 14, 0},
.module = THIS_MODULE, .module = THIS_MODULE,
.ctr = thin_ctr, .ctr = thin_ctr,
.dtr = thin_dtr, .dtr = thin_dtr,
...@@ -3575,6 +4028,7 @@ static struct target_type thin_target = { ...@@ -3575,6 +4028,7 @@ static struct target_type thin_target = {
.presuspend = thin_presuspend, .presuspend = thin_presuspend,
.postsuspend = thin_postsuspend, .postsuspend = thin_postsuspend,
.status = thin_status, .status = thin_status,
.merge = thin_merge,
.iterate_devices = thin_iterate_devices, .iterate_devices = thin_iterate_devices,
}; };
......
...@@ -19,6 +19,7 @@ ...@@ -19,6 +19,7 @@
#include <linux/idr.h> #include <linux/idr.h>
#include <linux/hdreg.h> #include <linux/hdreg.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/wait.h>
#include <trace/events/block.h> #include <trace/events/block.h>
...@@ -117,6 +118,7 @@ EXPORT_SYMBOL_GPL(dm_get_rq_mapinfo); ...@@ -117,6 +118,7 @@ EXPORT_SYMBOL_GPL(dm_get_rq_mapinfo);
#define DMF_NOFLUSH_SUSPENDING 5 #define DMF_NOFLUSH_SUSPENDING 5
#define DMF_MERGE_IS_OPTIONAL 6 #define DMF_MERGE_IS_OPTIONAL 6
#define DMF_DEFERRED_REMOVE 7 #define DMF_DEFERRED_REMOVE 7
#define DMF_SUSPENDED_INTERNALLY 8
/* /*
* A dummy definition to make RCU happy. * A dummy definition to make RCU happy.
...@@ -140,7 +142,7 @@ struct mapped_device { ...@@ -140,7 +142,7 @@ struct mapped_device {
* Use dm_get_live_table{_fast} or take suspend_lock for * Use dm_get_live_table{_fast} or take suspend_lock for
* dereference. * dereference.
*/ */
struct dm_table *map; struct dm_table __rcu *map;
struct list_head table_devices; struct list_head table_devices;
struct mutex table_devices_lock; struct mutex table_devices_lock;
...@@ -525,14 +527,15 @@ static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode, ...@@ -525,14 +527,15 @@ static int dm_blk_ioctl(struct block_device *bdev, fmode_t mode,
goto out; goto out;
tgt = dm_table_get_target(map, 0); tgt = dm_table_get_target(map, 0);
if (!tgt->type->ioctl)
goto out;
if (dm_suspended_md(md)) { if (dm_suspended_md(md)) {
r = -EAGAIN; r = -EAGAIN;
goto out; goto out;
} }
if (tgt->type->ioctl) r = tgt->type->ioctl(tgt, cmd, arg);
r = tgt->type->ioctl(tgt, cmd, arg);
out: out:
dm_put_live_table(md, srcu_idx); dm_put_live_table(md, srcu_idx);
...@@ -1607,9 +1610,9 @@ static int dm_merge_bvec(struct request_queue *q, ...@@ -1607,9 +1610,9 @@ static int dm_merge_bvec(struct request_queue *q,
* Find maximum amount of I/O that won't need splitting * Find maximum amount of I/O that won't need splitting
*/ */
max_sectors = min(max_io_len(bvm->bi_sector, ti), max_sectors = min(max_io_len(bvm->bi_sector, ti),
(sector_t) BIO_MAX_SECTORS); (sector_t) queue_max_sectors(q));
max_size = (max_sectors << SECTOR_SHIFT) - bvm->bi_size; max_size = (max_sectors << SECTOR_SHIFT) - bvm->bi_size;
if (max_size < 0) if (unlikely(max_size < 0)) /* this shouldn't _ever_ happen */
max_size = 0; max_size = 0;
/* /*
...@@ -1621,10 +1624,10 @@ static int dm_merge_bvec(struct request_queue *q, ...@@ -1621,10 +1624,10 @@ static int dm_merge_bvec(struct request_queue *q,
max_size = ti->type->merge(ti, bvm, biovec, max_size); max_size = ti->type->merge(ti, bvm, biovec, max_size);
/* /*
* If the target doesn't support merge method and some of the devices * If the target doesn't support merge method and some of the devices
* provided their merge_bvec method (we know this by looking at * provided their merge_bvec method (we know this by looking for the
* queue_max_hw_sectors), then we can't allow bios with multiple vector * max_hw_sectors that dm_set_device_limits may set), then we can't
* entries. So always set max_size to 0, and the code below allows * allow bios with multiple vector entries. So always set max_size
* just one page. * to 0, and the code below allows just one page.
*/ */
else if (queue_max_hw_sectors(q) <= PAGE_SIZE >> 9) else if (queue_max_hw_sectors(q) <= PAGE_SIZE >> 9)
max_size = 0; max_size = 0;
...@@ -2332,7 +2335,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t, ...@@ -2332,7 +2335,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
merge_is_optional = dm_table_merge_is_optional(t); merge_is_optional = dm_table_merge_is_optional(t);
old_map = md->map; old_map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
rcu_assign_pointer(md->map, t); rcu_assign_pointer(md->map, t);
md->immutable_target_type = dm_table_get_immutable_target_type(t); md->immutable_target_type = dm_table_get_immutable_target_type(t);
...@@ -2341,7 +2344,8 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t, ...@@ -2341,7 +2344,8 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
set_bit(DMF_MERGE_IS_OPTIONAL, &md->flags); set_bit(DMF_MERGE_IS_OPTIONAL, &md->flags);
else else
clear_bit(DMF_MERGE_IS_OPTIONAL, &md->flags); clear_bit(DMF_MERGE_IS_OPTIONAL, &md->flags);
dm_sync_table(md); if (old_map)
dm_sync_table(md);
return old_map; return old_map;
} }
...@@ -2351,7 +2355,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t, ...@@ -2351,7 +2355,7 @@ static struct dm_table *__bind(struct mapped_device *md, struct dm_table *t,
*/ */
static struct dm_table *__unbind(struct mapped_device *md) static struct dm_table *__unbind(struct mapped_device *md)
{ {
struct dm_table *map = md->map; struct dm_table *map = rcu_dereference_protected(md->map, 1);
if (!map) if (!map)
return NULL; return NULL;
...@@ -2716,36 +2720,18 @@ static void unlock_fs(struct mapped_device *md) ...@@ -2716,36 +2720,18 @@ static void unlock_fs(struct mapped_device *md)
} }
/* /*
* We need to be able to change a mapping table under a mounted * If __dm_suspend returns 0, the device is completely quiescent
* filesystem. For example we might want to move some data in * now. There is no request-processing activity. All new requests
* the background. Before the table can be swapped with * are being added to md->deferred list.
* dm_bind_table, dm_suspend must be called to flush any in
* flight bios and ensure that any further io gets deferred.
*/
/*
* Suspend mechanism in request-based dm.
* *
* 1. Flush all I/Os by lock_fs() if needed. * Caller must hold md->suspend_lock
* 2. Stop dispatching any I/O by stopping the request_queue.
* 3. Wait for all in-flight I/Os to be completed or requeued.
*
* To abort suspend, start the request_queue.
*/ */
int dm_suspend(struct mapped_device *md, unsigned suspend_flags) static int __dm_suspend(struct mapped_device *md, struct dm_table *map,
unsigned suspend_flags, int interruptible)
{ {
struct dm_table *map = NULL; bool do_lockfs = suspend_flags & DM_SUSPEND_LOCKFS_FLAG;
int r = 0; bool noflush = suspend_flags & DM_SUSPEND_NOFLUSH_FLAG;
int do_lockfs = suspend_flags & DM_SUSPEND_LOCKFS_FLAG ? 1 : 0; int r;
int noflush = suspend_flags & DM_SUSPEND_NOFLUSH_FLAG ? 1 : 0;
mutex_lock(&md->suspend_lock);
if (dm_suspended_md(md)) {
r = -EINVAL;
goto out_unlock;
}
map = md->map;
/* /*
* DMF_NOFLUSH_SUSPENDING must be set before presuspend. * DMF_NOFLUSH_SUSPENDING must be set before presuspend.
...@@ -2754,7 +2740,10 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags) ...@@ -2754,7 +2740,10 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
if (noflush) if (noflush)
set_bit(DMF_NOFLUSH_SUSPENDING, &md->flags); set_bit(DMF_NOFLUSH_SUSPENDING, &md->flags);
/* This does not get reverted if there's an error later. */ /*
* This gets reverted if there's an error later and the targets
* provide the .presuspend_undo hook.
*/
dm_table_presuspend_targets(map); dm_table_presuspend_targets(map);
/* /*
...@@ -2765,8 +2754,10 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags) ...@@ -2765,8 +2754,10 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
*/ */
if (!noflush && do_lockfs) { if (!noflush && do_lockfs) {
r = lock_fs(md); r = lock_fs(md);
if (r) if (r) {
goto out_unlock; dm_table_presuspend_undo_targets(map);
return r;
}
} }
/* /*
...@@ -2782,7 +2773,8 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags) ...@@ -2782,7 +2773,8 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
* flush_workqueue(md->wq). * flush_workqueue(md->wq).
*/ */
set_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags); set_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags);
synchronize_srcu(&md->io_barrier); if (map)
synchronize_srcu(&md->io_barrier);
/* /*
* Stop md->queue before flushing md->wq in case request-based * Stop md->queue before flushing md->wq in case request-based
...@@ -2798,11 +2790,12 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags) ...@@ -2798,11 +2790,12 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
* We call dm_wait_for_completion to wait for all existing requests * We call dm_wait_for_completion to wait for all existing requests
* to finish. * to finish.
*/ */
r = dm_wait_for_completion(md, TASK_INTERRUPTIBLE); r = dm_wait_for_completion(md, interruptible);
if (noflush) if (noflush)
clear_bit(DMF_NOFLUSH_SUSPENDING, &md->flags); clear_bit(DMF_NOFLUSH_SUSPENDING, &md->flags);
synchronize_srcu(&md->io_barrier); if (map)
synchronize_srcu(&md->io_barrier);
/* were we interrupted ? */ /* were we interrupted ? */
if (r < 0) { if (r < 0) {
...@@ -2812,14 +2805,56 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags) ...@@ -2812,14 +2805,56 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
start_queue(md->queue); start_queue(md->queue);
unlock_fs(md); unlock_fs(md);
goto out_unlock; /* pushback list is already flushed, so skip flush */ dm_table_presuspend_undo_targets(map);
/* pushback list is already flushed, so skip flush */
} }
/* return r;
* If dm_wait_for_completion returned 0, the device is completely }
* quiescent now. There is no request-processing activity. All new
* requests are being added to md->deferred list. /*
*/ * We need to be able to change a mapping table under a mounted
* filesystem. For example we might want to move some data in
* the background. Before the table can be swapped with
* dm_bind_table, dm_suspend must be called to flush any in
* flight bios and ensure that any further io gets deferred.
*/
/*
* Suspend mechanism in request-based dm.
*
* 1. Flush all I/Os by lock_fs() if needed.
* 2. Stop dispatching any I/O by stopping the request_queue.
* 3. Wait for all in-flight I/Os to be completed or requeued.
*
* To abort suspend, start the request_queue.
*/
int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
{
struct dm_table *map = NULL;
int r = 0;
retry:
mutex_lock_nested(&md->suspend_lock, SINGLE_DEPTH_NESTING);
if (dm_suspended_md(md)) {
r = -EINVAL;
goto out_unlock;
}
if (dm_suspended_internally_md(md)) {
/* already internally suspended, wait for internal resume */
mutex_unlock(&md->suspend_lock);
r = wait_on_bit(&md->flags, DMF_SUSPENDED_INTERNALLY, TASK_INTERRUPTIBLE);
if (r)
return r;
goto retry;
}
map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
r = __dm_suspend(md, map, suspend_flags, TASK_INTERRUPTIBLE);
if (r)
goto out_unlock;
set_bit(DMF_SUSPENDED, &md->flags); set_bit(DMF_SUSPENDED, &md->flags);
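
dm_suspend() (and dm_resume() below) now cope with a concurrent internal suspend by dropping md->suspend_lock, sleeping until the DMF_SUSPENDED_INTERNALLY bit clears, and retrying from the top. The same drop-lock/wait/retry shape, sketched in userspace with a pthread mutex and condition variable standing in for wait_on_bit(); all names are illustrative (compile with cc -pthread):

/* Sketch: take a lock, and if a conflicting "internal" state is set,
 * drop the lock, wait for it to clear, then retry from the top. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t suspend_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t internal_cleared = PTHREAD_COND_INITIALIZER;
static int suspended_internally = 1;   /* pretend an internal suspend is active */

static int do_suspend(void)
{
retry:
	pthread_mutex_lock(&suspend_lock);
	if (suspended_internally) {
		/* cannot proceed while internally suspended: wait (the
		 * condvar drops the lock while sleeping), then start over */
		while (suspended_internally)
			pthread_cond_wait(&internal_cleared, &suspend_lock);
		pthread_mutex_unlock(&suspend_lock);
		goto retry;
	}
	printf("performing user-driven suspend\n");
	pthread_mutex_unlock(&suspend_lock);
	return 0;
}

static void *internal_resume(void *arg)
{
	sleep(1);
	pthread_mutex_lock(&suspend_lock);
	suspended_internally = 0;
	pthread_mutex_unlock(&suspend_lock);
	pthread_cond_broadcast(&internal_cleared);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, internal_resume, NULL);
	do_suspend();
	pthread_join(t, NULL);
	return 0;
}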
...@@ -2830,22 +2865,13 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags) ...@@ -2830,22 +2865,13 @@ int dm_suspend(struct mapped_device *md, unsigned suspend_flags)
return r; return r;
} }
int dm_resume(struct mapped_device *md) static int __dm_resume(struct mapped_device *md, struct dm_table *map)
{ {
int r = -EINVAL; if (map) {
struct dm_table *map = NULL; int r = dm_table_resume_targets(map);
if (r)
mutex_lock(&md->suspend_lock); return r;
if (!dm_suspended_md(md)) }
goto out;
map = md->map;
if (!map || !dm_table_get_size(map))
goto out;
r = dm_table_resume_targets(map);
if (r)
goto out;
dm_queue_flush(md); dm_queue_flush(md);
...@@ -2859,6 +2885,37 @@ int dm_resume(struct mapped_device *md) ...@@ -2859,6 +2885,37 @@ int dm_resume(struct mapped_device *md)
unlock_fs(md); unlock_fs(md);
return 0;
}
int dm_resume(struct mapped_device *md)
{
int r = -EINVAL;
struct dm_table *map = NULL;
retry:
mutex_lock_nested(&md->suspend_lock, SINGLE_DEPTH_NESTING);
if (!dm_suspended_md(md))
goto out;
if (dm_suspended_internally_md(md)) {
/* already internally suspended, wait for internal resume */
mutex_unlock(&md->suspend_lock);
r = wait_on_bit(&md->flags, DMF_SUSPENDED_INTERNALLY, TASK_INTERRUPTIBLE);
if (r)
return r;
goto retry;
}
map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
if (!map || !dm_table_get_size(map))
goto out;
r = __dm_resume(md, map);
if (r)
goto out;
clear_bit(DMF_SUSPENDED, &md->flags); clear_bit(DMF_SUSPENDED, &md->flags);
r = 0; r = 0;
...@@ -2872,15 +2929,80 @@ int dm_resume(struct mapped_device *md) ...@@ -2872,15 +2929,80 @@ int dm_resume(struct mapped_device *md)
* Internal suspend/resume works like userspace-driven suspend. It waits * Internal suspend/resume works like userspace-driven suspend. It waits
* until all bios finish and prevents issuing new bios to the target drivers. * until all bios finish and prevents issuing new bios to the target drivers.
* It may be used only from the kernel. * It may be used only from the kernel.
*
* Internal suspend holds md->suspend_lock, which prevents interaction with
* userspace-driven suspend.
*/ */
void dm_internal_suspend(struct mapped_device *md) static void __dm_internal_suspend(struct mapped_device *md, unsigned suspend_flags)
{ {
mutex_lock(&md->suspend_lock); struct dm_table *map = NULL;
if (dm_suspended_internally_md(md))
return; /* nested internal suspend */
if (dm_suspended_md(md)) {
set_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
return; /* nest suspend */
}
map = rcu_dereference_protected(md->map, lockdep_is_held(&md->suspend_lock));
/*
* Using TASK_UNINTERRUPTIBLE because only NOFLUSH internal suspend is
* supported. Properly supporting a TASK_INTERRUPTIBLE internal suspend
* would require changing .presuspend to return an error -- avoid this
* until there is a need for more elaborate variants of internal suspend.
*/
(void) __dm_suspend(md, map, suspend_flags, TASK_UNINTERRUPTIBLE);
set_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
dm_table_postsuspend_targets(map);
}
static void __dm_internal_resume(struct mapped_device *md)
{
if (!dm_suspended_internally_md(md))
return; /* resume from nested internal suspend */
if (dm_suspended_md(md)) if (dm_suspended_md(md))
goto done; /* resume from nested suspend */
/*
* NOTE: existing callers don't need to call dm_table_resume_targets
* (which may fail -- so best to avoid it for now by passing NULL map)
*/
(void) __dm_resume(md, NULL);
done:
clear_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
smp_mb__after_atomic();
wake_up_bit(&md->flags, DMF_SUSPENDED_INTERNALLY);
}
void dm_internal_suspend_noflush(struct mapped_device *md)
{
mutex_lock(&md->suspend_lock);
__dm_internal_suspend(md, DM_SUSPEND_NOFLUSH_FLAG);
mutex_unlock(&md->suspend_lock);
}
EXPORT_SYMBOL_GPL(dm_internal_suspend_noflush);
void dm_internal_resume(struct mapped_device *md)
{
mutex_lock(&md->suspend_lock);
__dm_internal_resume(md);
mutex_unlock(&md->suspend_lock);
}
EXPORT_SYMBOL_GPL(dm_internal_resume);
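
__dm_internal_suspend()/__dm_internal_resume() nest in two ways: an internal suspend on top of another internal suspend is a no-op, and an internal suspend on top of a user-driven suspend only records DMF_SUSPENDED_INTERNALLY (the device is already quiescent), so the matching internal resume must not restart IO in that case. A compact state sketch of that flag logic, in plain C with flag names borrowed for readability and no kernel APIs:

/* Sketch of the two-flag nesting rules used above. */
#include <stdio.h>
#include <stdbool.h>

static bool suspended;              /* ~ DMF_SUSPENDED            */
static bool suspended_internally;   /* ~ DMF_SUSPENDED_INTERNALLY */

static void quiesce(void)    { printf("  quiescing device\n"); }
static void restart_io(void) { printf("  restarting io\n"); }

static void internal_suspend(void)
{
	if (suspended_internally)
		return;                      /* nested internal suspend: no-op */
	if (suspended) {
		suspended_internally = true; /* already quiet: just note it */
		return;
	}
	quiesce();
	suspended_internally = true;
}

static void internal_resume(void)
{
	if (!suspended_internally)
		return;                      /* resume from nested suspend */
	if (!suspended)
		restart_io();                /* only if userspace did not suspend */
	suspended_internally = false;
}

int main(void)
{
	printf("internal suspend while running:\n");
	internal_suspend();
	internal_resume();

	printf("internal suspend while user-suspended:\n");
	suspended = true;
	internal_suspend();
	internal_resume();       /* no restart_io(): still user-suspended */
	return 0;
}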
/*
* Fast variants of internal suspend/resume hold md->suspend_lock,
* which prevents interaction with userspace-driven suspend.
*/
void dm_internal_suspend_fast(struct mapped_device *md)
{
mutex_lock(&md->suspend_lock);
if (dm_suspended_md(md) || dm_suspended_internally_md(md))
return; return;
set_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags); set_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags);
...@@ -2889,9 +3011,9 @@ void dm_internal_suspend(struct mapped_device *md) ...@@ -2889,9 +3011,9 @@ void dm_internal_suspend(struct mapped_device *md)
dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE); dm_wait_for_completion(md, TASK_UNINTERRUPTIBLE);
} }
void dm_internal_resume(struct mapped_device *md) void dm_internal_resume_fast(struct mapped_device *md)
{ {
if (dm_suspended_md(md)) if (dm_suspended_md(md) || dm_suspended_internally_md(md))
goto done; goto done;
dm_queue_flush(md); dm_queue_flush(md);
...@@ -2977,6 +3099,11 @@ int dm_suspended_md(struct mapped_device *md) ...@@ -2977,6 +3099,11 @@ int dm_suspended_md(struct mapped_device *md)
return test_bit(DMF_SUSPENDED, &md->flags); return test_bit(DMF_SUSPENDED, &md->flags);
} }
int dm_suspended_internally_md(struct mapped_device *md)
{
return test_bit(DMF_SUSPENDED_INTERNALLY, &md->flags);
}
int dm_test_deferred_remove_flag(struct mapped_device *md) int dm_test_deferred_remove_flag(struct mapped_device *md)
{ {
return test_bit(DMF_DEFERRED_REMOVE, &md->flags); return test_bit(DMF_DEFERRED_REMOVE, &md->flags);
......
...@@ -65,6 +65,7 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, ...@@ -65,6 +65,7 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
struct queue_limits *limits); struct queue_limits *limits);
struct list_head *dm_table_get_devices(struct dm_table *t); struct list_head *dm_table_get_devices(struct dm_table *t);
void dm_table_presuspend_targets(struct dm_table *t); void dm_table_presuspend_targets(struct dm_table *t);
void dm_table_presuspend_undo_targets(struct dm_table *t);
void dm_table_postsuspend_targets(struct dm_table *t); void dm_table_postsuspend_targets(struct dm_table *t);
int dm_table_resume_targets(struct dm_table *t); int dm_table_resume_targets(struct dm_table *t);
int dm_table_any_congested(struct dm_table *t, int bdi_bits); int dm_table_any_congested(struct dm_table *t, int bdi_bits);
...@@ -128,6 +129,15 @@ int dm_deleting_md(struct mapped_device *md); ...@@ -128,6 +129,15 @@ int dm_deleting_md(struct mapped_device *md);
*/ */
int dm_suspended_md(struct mapped_device *md); int dm_suspended_md(struct mapped_device *md);
/*
* Internal suspend and resume methods.
*/
int dm_suspended_internally_md(struct mapped_device *md);
void dm_internal_suspend_fast(struct mapped_device *md);
void dm_internal_resume_fast(struct mapped_device *md);
void dm_internal_suspend_noflush(struct mapped_device *md);
void dm_internal_resume(struct mapped_device *md);
/* /*
* Test if the device is scheduled for deferred remove. * Test if the device is scheduled for deferred remove.
*/ */
......
...@@ -645,8 +645,10 @@ static int array_resize(struct dm_array_info *info, dm_block_t root, ...@@ -645,8 +645,10 @@ static int array_resize(struct dm_array_info *info, dm_block_t root,
int r; int r;
struct resize resize; struct resize resize;
if (old_size == new_size) if (old_size == new_size) {
*new_root = root;
return 0; return 0;
}
resize.info = info; resize.info = info;
resize.root = root; resize.root = root;
......
...@@ -564,7 +564,9 @@ static int sm_bootstrap_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count ...@@ -564,7 +564,9 @@ static int sm_bootstrap_get_nr_blocks(struct dm_space_map *sm, dm_block_t *count
{ {
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm); struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
return smm->ll.nr_blocks; *count = smm->ll.nr_blocks;
return 0;
} }
static int sm_bootstrap_get_nr_free(struct dm_space_map *sm, dm_block_t *count) static int sm_bootstrap_get_nr_free(struct dm_space_map *sm, dm_block_t *count)
...@@ -581,7 +583,9 @@ static int sm_bootstrap_get_count(struct dm_space_map *sm, dm_block_t b, ...@@ -581,7 +583,9 @@ static int sm_bootstrap_get_count(struct dm_space_map *sm, dm_block_t b,
{ {
struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm); struct sm_metadata *smm = container_of(sm, struct sm_metadata, sm);
return b < smm->begin ? 1 : 0; *result = (b < smm->begin) ? 1 : 0;
return 0;
} }
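
Both space-map bootstrap fixes above correct the same bug: the functions returned the looked-up value (a block count or reference count) where callers expect 0/-errno, while the real answer belongs in the out-parameter. A minimal illustration of why that matters, using a hypothetical get_nr_blocks() in userspace C:

/* Sketch: return 0 on success and write the answer through an
 * out-parameter, instead of returning the answer itself, which the
 * caller would misread as an error and which may not fit in an int. */
#include <stdio.h>
#include <stdint.h>

/* buggy shape: a 64-bit count truncated into the int return value,
 * and any non-zero count looks like an error to "if (r)" callers */
static int get_nr_blocks_buggy(uint64_t nr_blocks)
{
	return nr_blocks;
}

/* fixed shape: status in the return value, data via the out-parameter */
static int get_nr_blocks(uint64_t nr_blocks, uint64_t *count)
{
	*count = nr_blocks;
	return 0;
}

int main(void)
{
	uint64_t nr = 1ULL << 33;   /* 8G blocks: does not fit in an int */
	uint64_t count;

	printf("buggy return: %d\n", get_nr_blocks_buggy(nr));   /* 0, truncated */

	if (!get_nr_blocks(nr, &count))
		printf("fixed: count = %llu\n", (unsigned long long)count);
	return 0;
}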
static int sm_bootstrap_count_is_more_than_one(struct dm_space_map *sm, static int sm_bootstrap_count_is_more_than_one(struct dm_space_map *sm,
......
...@@ -10,6 +10,8 @@ ...@@ -10,6 +10,8 @@
#include "dm-persistent-data-internal.h" #include "dm-persistent-data-internal.h"
#include <linux/export.h> #include <linux/export.h>
#include <linux/mutex.h>
#include <linux/hash.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/device-mapper.h> #include <linux/device-mapper.h>
...@@ -17,6 +19,61 @@ ...@@ -17,6 +19,61 @@
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
#define PREFETCH_SIZE 128
#define PREFETCH_BITS 7
#define PREFETCH_SENTINEL ((dm_block_t) -1ULL)
struct prefetch_set {
struct mutex lock;
dm_block_t blocks[PREFETCH_SIZE];
};
static unsigned prefetch_hash(dm_block_t b)
{
return hash_64(b, PREFETCH_BITS);
}
static void prefetch_wipe(struct prefetch_set *p)
{
unsigned i;
for (i = 0; i < PREFETCH_SIZE; i++)
p->blocks[i] = PREFETCH_SENTINEL;
}
static void prefetch_init(struct prefetch_set *p)
{
mutex_init(&p->lock);
prefetch_wipe(p);
}
static void prefetch_add(struct prefetch_set *p, dm_block_t b)
{
unsigned h = prefetch_hash(b);
mutex_lock(&p->lock);
if (p->blocks[h] == PREFETCH_SENTINEL)
p->blocks[h] = b;
mutex_unlock(&p->lock);
}
static void prefetch_issue(struct prefetch_set *p, struct dm_block_manager *bm)
{
unsigned i;
mutex_lock(&p->lock);
for (i = 0; i < PREFETCH_SIZE; i++)
if (p->blocks[i] != PREFETCH_SENTINEL) {
dm_bm_prefetch(bm, p->blocks[i]);
p->blocks[i] = PREFETCH_SENTINEL;
}
mutex_unlock(&p->lock);
}
/*----------------------------------------------------------------*/
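
The prefetch_set above is deliberately simple: a fixed 128-slot array indexed by hash_64() of the block number, collisions silently dropped (losing a prefetch hint is harmless), and the whole table wiped as it is drained. A userspace sketch of the same structure, with locking omitted and a trivial multiplicative hash standing in for hash_64():

/* Sketch: a lossy, fixed-size prefetch set -- record block numbers in a
 * small hash-indexed array, drop collisions, drain on issue. */
#include <stdio.h>
#include <stdint.h>

#define PREFETCH_SIZE 128
#define PREFETCH_SENTINEL ((uint64_t) -1)

static uint64_t blocks[PREFETCH_SIZE];

static unsigned prefetch_hash(uint64_t b)
{
	/* stand-in for the kernel's hash_64(b, PREFETCH_BITS) */
	return (unsigned)((b * 0x9E3779B97F4A7C15ULL) >> 57) & (PREFETCH_SIZE - 1);
}

static void prefetch_wipe(void)
{
	for (unsigned i = 0; i < PREFETCH_SIZE; i++)
		blocks[i] = PREFETCH_SENTINEL;
}

static void prefetch_add(uint64_t b)
{
	unsigned h = prefetch_hash(b);

	if (blocks[h] == PREFETCH_SENTINEL)
		blocks[h] = b;          /* a colliding hint is simply dropped */
}

static void prefetch_issue(void)
{
	for (unsigned i = 0; i < PREFETCH_SIZE; i++)
		if (blocks[i] != PREFETCH_SENTINEL) {
			printf("prefetch block %llu\n", (unsigned long long)blocks[i]);
			blocks[i] = PREFETCH_SENTINEL;
		}
}

int main(void)
{
	prefetch_wipe();
	prefetch_add(17);
	prefetch_add(42);
	prefetch_add(17);   /* duplicate: slot already holds 17, no change */
	prefetch_issue();
	return 0;
}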
struct shadow_info { struct shadow_info {
struct hlist_node hlist; struct hlist_node hlist;
dm_block_t where; dm_block_t where;
...@@ -37,6 +94,8 @@ struct dm_transaction_manager { ...@@ -37,6 +94,8 @@ struct dm_transaction_manager {
spinlock_t lock; spinlock_t lock;
struct hlist_head buckets[DM_HASH_SIZE]; struct hlist_head buckets[DM_HASH_SIZE];
struct prefetch_set prefetches;
}; };
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
...@@ -117,6 +176,8 @@ static struct dm_transaction_manager *dm_tm_create(struct dm_block_manager *bm, ...@@ -117,6 +176,8 @@ static struct dm_transaction_manager *dm_tm_create(struct dm_block_manager *bm,
for (i = 0; i < DM_HASH_SIZE; i++) for (i = 0; i < DM_HASH_SIZE; i++)
INIT_HLIST_HEAD(tm->buckets + i); INIT_HLIST_HEAD(tm->buckets + i);
prefetch_init(&tm->prefetches);
return tm; return tm;
} }
...@@ -268,8 +329,14 @@ int dm_tm_read_lock(struct dm_transaction_manager *tm, dm_block_t b, ...@@ -268,8 +329,14 @@ int dm_tm_read_lock(struct dm_transaction_manager *tm, dm_block_t b,
struct dm_block_validator *v, struct dm_block_validator *v,
struct dm_block **blk) struct dm_block **blk)
{ {
if (tm->is_clone) if (tm->is_clone) {
return dm_bm_read_try_lock(tm->real->bm, b, v, blk); int r = dm_bm_read_try_lock(tm->real->bm, b, v, blk);
if (r == -EWOULDBLOCK)
prefetch_add(&tm->real->prefetches, b);
return r;
}
return dm_bm_read_lock(tm->bm, b, v, blk); return dm_bm_read_lock(tm->bm, b, v, blk);
} }
...@@ -317,6 +384,12 @@ struct dm_block_manager *dm_tm_get_bm(struct dm_transaction_manager *tm) ...@@ -317,6 +384,12 @@ struct dm_block_manager *dm_tm_get_bm(struct dm_transaction_manager *tm)
return tm->bm; return tm->bm;
} }
void dm_tm_issue_prefetches(struct dm_transaction_manager *tm)
{
prefetch_issue(&tm->prefetches, tm->bm);
}
EXPORT_SYMBOL_GPL(dm_tm_issue_prefetches);
/*----------------------------------------------------------------*/ /*----------------------------------------------------------------*/
static int dm_tm_create_internal(struct dm_block_manager *bm, static int dm_tm_create_internal(struct dm_block_manager *bm,
......
...@@ -108,6 +108,13 @@ int dm_tm_ref(struct dm_transaction_manager *tm, dm_block_t b, ...@@ -108,6 +108,13 @@ int dm_tm_ref(struct dm_transaction_manager *tm, dm_block_t b,
struct dm_block_manager *dm_tm_get_bm(struct dm_transaction_manager *tm); struct dm_block_manager *dm_tm_get_bm(struct dm_transaction_manager *tm);
/*
* If you're using a non-blocking clone the tm will build up a list of
* requested blocks that weren't in core. This call will request those
* blocks to be prefetched.
*/
void dm_tm_issue_prefetches(struct dm_transaction_manager *tm);
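
The header comment above describes the intended flow: a non-blocking clone's read lock may fail with -EWOULDBLOCK, in which case the block number is remembered, and the caller later asks for all remembered blocks to be prefetched in bulk (as the thin-pool worker does via dm_pool_issue_prefetches() at the top of do_worker()). Sketched in userspace with stand-in functions; try_read_block(), remember_for_prefetch() and issue_prefetches() are illustrative, not the dm API:

/* Sketch of the pattern: try a non-blocking read, and when it would
 * block, record the block for a later bulk prefetch pass. */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>
#include <errno.h>

static bool in_core(uint64_t b)            /* pretend cache lookup */
{
	return (b % 3) == 0;                   /* every third block is cached */
}

static int try_read_block(uint64_t b)      /* ~ dm_bm_read_try_lock() */
{
	if (!in_core(b))
		return -EWOULDBLOCK;
	printf("read block %llu from cache\n", (unsigned long long)b);
	return 0;
}

static uint64_t wanted[16];
static unsigned nr_wanted;

static void remember_for_prefetch(uint64_t b)   /* ~ prefetch_add() */
{
	if (nr_wanted < 16)
		wanted[nr_wanted++] = b;
}

static void issue_prefetches(void)              /* ~ dm_tm_issue_prefetches() */
{
	for (unsigned i = 0; i < nr_wanted; i++)
		printf("prefetching block %llu\n", (unsigned long long)wanted[i]);
	nr_wanted = 0;
}

int main(void)
{
	for (uint64_t b = 0; b < 8; b++)
		if (try_read_block(b) == -EWOULDBLOCK)
			remember_for_prefetch(b);

	/* later, e.g. at the top of the worker loop */
	issue_prefetches();
	return 0;
}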
/* /*
* A little utility that ties the knot by producing a transaction manager * A little utility that ties the knot by producing a transaction manager
* that has a space map managed by the transaction manager... * that has a space map managed by the transaction manager...
......
...@@ -64,6 +64,7 @@ typedef int (*dm_request_endio_fn) (struct dm_target *ti, ...@@ -64,6 +64,7 @@ typedef int (*dm_request_endio_fn) (struct dm_target *ti,
union map_info *map_context); union map_info *map_context);
typedef void (*dm_presuspend_fn) (struct dm_target *ti); typedef void (*dm_presuspend_fn) (struct dm_target *ti);
typedef void (*dm_presuspend_undo_fn) (struct dm_target *ti);
typedef void (*dm_postsuspend_fn) (struct dm_target *ti); typedef void (*dm_postsuspend_fn) (struct dm_target *ti);
typedef int (*dm_preresume_fn) (struct dm_target *ti); typedef int (*dm_preresume_fn) (struct dm_target *ti);
typedef void (*dm_resume_fn) (struct dm_target *ti); typedef void (*dm_resume_fn) (struct dm_target *ti);
...@@ -145,6 +146,7 @@ struct target_type { ...@@ -145,6 +146,7 @@ struct target_type {
dm_endio_fn end_io; dm_endio_fn end_io;
dm_request_endio_fn rq_end_io; dm_request_endio_fn rq_end_io;
dm_presuspend_fn presuspend; dm_presuspend_fn presuspend;
dm_presuspend_undo_fn presuspend_undo;
dm_postsuspend_fn postsuspend; dm_postsuspend_fn postsuspend;
dm_preresume_fn preresume; dm_preresume_fn preresume;
dm_resume_fn resume; dm_resume_fn resume;
......
...@@ -267,9 +267,9 @@ enum { ...@@ -267,9 +267,9 @@ enum {
#define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl) #define DM_DEV_SET_GEOMETRY _IOWR(DM_IOCTL, DM_DEV_SET_GEOMETRY_CMD, struct dm_ioctl)
#define DM_VERSION_MAJOR 4 #define DM_VERSION_MAJOR 4
#define DM_VERSION_MINOR 28 #define DM_VERSION_MINOR 29
#define DM_VERSION_PATCHLEVEL 0 #define DM_VERSION_PATCHLEVEL 0
#define DM_VERSION_EXTRA "-ioctl (2014-09-17)" #define DM_VERSION_EXTRA "-ioctl (2014-10-28)"
/* Status bits */ /* Status bits */
#define DM_READONLY_FLAG (1 << 0) /* In/Out */ #define DM_READONLY_FLAG (1 << 0) /* In/Out */
...@@ -352,4 +352,9 @@ enum { ...@@ -352,4 +352,9 @@ enum {
*/ */
#define DM_DEFERRED_REMOVE (1 << 17) /* In/Out */ #define DM_DEFERRED_REMOVE (1 << 17) /* In/Out */
/*
* If set, the device is suspended internally.
*/
#define DM_INTERNAL_SUSPEND_FLAG (1 << 18) /* Out */
#endif /* _LINUX_DM_IOCTL_H */ #endif /* _LINUX_DM_IOCTL_H */