Commit 429120f3 authored by Ming Lei, committed by Jens Axboe

block: fix splitting segments on boundary masks

We ran into a problem with an mpt3sas based controller, where we would
see random (and hard to reproduce) file corruption. The issue seemed
specific to this controller, but wasn't specific to the file system.
After a lot of debugging, we found out that it was caused by segments
spanning a 4G memory boundary. This shouldn't happen, as the default
setting for the segment boundary mask is 4G.

Turns out there are two issues in get_max_segment_size():

1) The default segment boundary mask is bypassed

2) The segment start address isn't taken into account when checking
   segment boundary limit

Fix these two issues by removing the bypass of the segment boundary
check even if the mask is set to the default value, and taking into
account the actual start address of the request when checking if a
segment needs splitting.

Cc: stable@vger.kernel.org # v5.1+
Reviewed-by: Chris Mason <clm@fb.com>
Tested-by: Chris Mason <clm@fb.com>
Fixes: dcebd755 ("block: use bio_for_each_bvec() to compute multi-page bvec count")
Signed-off-by: Ming Lei <ming.lei@redhat.com>

Dropped const on the page pointer, ppc page_to_phys() doesn't mark the
page as const...
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent 85a8ce62
@@ -157,16 +157,14 @@ static inline unsigned get_max_io_size(struct request_queue *q,
 	return sectors & (lbs - 1);
 }
 
-static unsigned get_max_segment_size(const struct request_queue *q,
-				     unsigned offset)
+static inline unsigned get_max_segment_size(const struct request_queue *q,
+					    struct page *start_page,
+					    unsigned long offset)
 {
 	unsigned long mask = queue_segment_boundary(q);
 
-	/* default segment boundary mask means no boundary limit */
-	if (mask == BLK_SEG_BOUNDARY_MASK)
-		return queue_max_segment_size(q);
-
-	return min_t(unsigned long, mask - (mask & offset) + 1,
+	offset = mask & (page_to_phys(start_page) + offset);
+	return min_t(unsigned long, mask - offset + 1,
 		     queue_max_segment_size(q));
 }
 
@@ -201,7 +199,8 @@ static bool bvec_split_segs(const struct request_queue *q,
 	unsigned seg_size = 0;
 
 	while (len && *nsegs < max_segs) {
-		seg_size = get_max_segment_size(q, bv->bv_offset + total_len);
+		seg_size = get_max_segment_size(q, bv->bv_page,
+						bv->bv_offset + total_len);
 		seg_size = min(seg_size, len);
 
 		(*nsegs)++;
 
@@ -419,7 +418,8 @@ static unsigned blk_bvec_map_sg(struct request_queue *q,
 	while (nbytes > 0) {
 		unsigned offset = bvec->bv_offset + total;
-		unsigned len = min(get_max_segment_size(q, offset), nbytes);
+		unsigned len = min(get_max_segment_size(q, bvec->bv_page,
+							offset), nbytes);
 		struct page *page = bvec->bv_page;
 
 		/*