Commit e893fba9 authored by Heinz Mauelshagen, committed by Mike Snitzer

dm cache: fix access beyond end of origin device

In order to avoid wasting cache space a partial block at the end of the
origin device is not cached.  Unfortunately, the check for such a
partial block at the end of the origin device was flawed.

Fix accesses beyond the end of the origin device that occurred due to
attempted promotion of an undetected partial block by:

- initializing the per bio data struct to allow cache_end_io to work properly
- recognizing access to the partial block at the end of the origin device
- avoiding out of bounds access to the discard bitset

Otherwise, users can experience errors like the following:

 attempt to access beyond end of device
 dm-5: rw=0, want=20971520, limit=20971456
 ...
 device-mapper: cache: promotion failed; couldn't copy block
Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
parent 8b9d9666
@@ -2465,20 +2465,18 @@ static int cache_map(struct dm_target *ti, struct bio *bio)
 	bool discarded_block;
 	struct dm_bio_prison_cell *cell;
 	struct policy_result lookup_result;
-	struct per_bio_data *pb;
+	struct per_bio_data *pb = init_per_bio_data(bio, pb_data_size);
 
-	if (from_oblock(block) > from_oblock(cache->origin_blocks)) {
+	if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) {
 		/*
 		 * This can only occur if the io goes to a partial block at
 		 * the end of the origin device.  We don't cache these.
 		 * Just remap to the origin and carry on.
 		 */
-		remap_to_origin_clear_discard(cache, bio, block);
+		remap_to_origin(cache, bio);
 		return DM_MAPIO_REMAPPED;
 	}
 
-	pb = init_per_bio_data(bio, pb_data_size);
-
 	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA | REQ_DISCARD)) {
 		defer_bio(cache, bio);
 		return DM_MAPIO_SUBMITTED;
...