Commit a969324f authored by Linus Torvalds

Merge tag 'for-5.9/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper fixes from Mike Snitzer:

 - DM core fix for incorrect double bio splitting. Keep "fixing" this
   because past attempts didn't fully appreciate the liability relative
   to recursive bio splitting. This fix limits DM's bio splitting to a
   single method and does _not_ use blk_queue_split() for normal IO.

 - DM crypt Documentation updates for features added during the 5.9 merge window.

* tag 'for-5.9/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
  dm crypt: document encrypted keyring key option
  dm crypt: document new no_workqueue flags
  dm: fix comment in dm_process_bio()
  dm: fix bio splitting and its bio completion order for regular IO
parents bffac4b5 4c07ae0a
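
For background on the first fix: the risk with recursive splitting is that a bio split once with bio_split()/bio_chain() and resubmitted via submit_bio_noacct() can be split again by a different method on the recursive submission, and the chained remainder may complete out of order relative to the front piece. Below is a minimal sketch of that chained-split pattern, condensed from the dm_queue_split() removed in this merge; the helper name and stripped-down context are ours, not kernel code:

        /*
         * Split off a prefix of 'len' sectors, chain the remainder to it,
         * and resubmit the remainder for recursive processing.  If the
         * recursive path then splits the remainder by another method
         * (e.g. blk_queue_split()), the bio has been "double split".
         */
        static void chained_split_sketch(struct request_queue *q,
                                         struct bio **bio, unsigned len)
        {
                if (bio_sectors(*bio) > len) {
                        struct bio *split = bio_split(*bio, len, GFP_NOIO,
                                                      &q->bio_split);

                        bio_chain(split, *bio);  /* remainder completes after split */
                        submit_bio_noacct(*bio); /* recurse on the remainder */
                        *bio = split;            /* caller now handles the prefix */
                }
        }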
@@ -67,7 +67,7 @@ Parameters::
 the value passed in <key_size>.
 
 <key_type>
-        Either 'logon' or 'user' kernel key type.
+        Either 'logon', 'user' or 'encrypted' kernel key type.
 
 <key_description>
         The kernel keyring key description crypt target should look for
@@ -121,6 +121,14 @@ submit_from_crypt_cpus
         thread because it benefits CFQ to have writes submitted using the
         same context.
 
+no_read_workqueue
+        Bypass dm-crypt internal workqueue and process read requests synchronously.
+
+no_write_workqueue
+        Bypass dm-crypt internal workqueue and process write requests synchronously.
+        This option is automatically enabled for host-managed zoned block devices
+        (e.g. host-managed SMR hard-disks).
+
 integrity:<bytes>:<type>
         The device requires additional <bytes> metadata per-sector stored
         in per-bio integrity structure. This metadata must be provided
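
A usage note, not part of the patch: the two new flags are passed as dm-crypt optional parameters (preceded by their count), and the new 'encrypted' key type uses the existing :<key_size>:<key_type>:<key_description> key format. A hypothetical table line, with the device, <size> and key description as placeholders:

        0 <size> crypt aes-xts-plain64 :64:encrypted:cryptkey 0 /dev/sdb 0 2 no_read_workqueue no_write_workqueue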
@@ -1724,23 +1724,6 @@ static blk_qc_t __process_bio(struct mapped_device *md, struct dm_table *map,
         return ret;
 }
 
-static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struct bio **bio)
-{
-        unsigned len, sector_count;
-
-        sector_count = bio_sectors(*bio);
-        len = min_t(sector_t, max_io_len((*bio)->bi_iter.bi_sector, ti), sector_count);
-
-        if (sector_count > len) {
-                struct bio *split = bio_split(*bio, len, GFP_NOIO, &md->queue->bio_split);
-
-                bio_chain(split, *bio);
-                trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector);
-                submit_bio_noacct(*bio);
-                *bio = split;
-        }
-}
-
 static blk_qc_t dm_process_bio(struct mapped_device *md,
                                struct dm_table *map, struct bio *bio)
 {
@@ -1761,21 +1744,21 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
         }
 
         /*
-         * If in ->queue_bio we need to use blk_queue_split(), otherwise
+         * If in ->submit_bio we need to use blk_queue_split(), otherwise
          * queue_limits for abnormal requests (e.g. discard, writesame, etc)
          * won't be imposed.
+         * If called from dm_wq_work() for deferred bio processing, bio
+         * was already handled by following code with previous ->submit_bio.
          */
         if (current->bio_list) {
                 if (is_abnormal_io(bio))
                         blk_queue_split(&bio);
-                else
-                        dm_queue_split(md, ti, &bio);
+                /* regular IO is split by __split_and_process_bio */
         }
 
         if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED)
                 return __process_bio(md, map, bio, ti);
-        else
-                return __split_and_process_bio(md, map, bio);
+        return __split_and_process_bio(md, map, bio);
 }
 
 static blk_qc_t dm_submit_bio(struct bio *bio)
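
Taken together, the two dm.c hunks reduce dm_process_bio() to a single splitting method for regular IO. Paraphrased as a sketch (not the literal post-merge source; the DM_TYPE_NVME_BIO_BASED branch is elided):

        /*
         * Submission-time splitting is reserved for abnormal IO
         * (discard, writesame, etc.), where blk_queue_split() imposes
         * the queue_limits.  Regular IO is split in exactly one place,
         * __split_and_process_bio(), so completion order follows
         * submission order.
         */
        if (current->bio_list && is_abnormal_io(bio))
                blk_queue_split(&bio);

        return __split_and_process_bio(md, map, bio);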