Commit b3f17fff authored by Christoph Hellwig, committed by Kamal Mostafa

nvme: fix max_segments integer truncation

BugLink: http://bugs.launchpad.net/bugs/1588449

The block layer uses an unsigned short for max_segments.  The way we
calculate the value for NVMe tends to generate very large 32-bit values,
which after integer truncation may lead to a zero value instead of
the desired outcome.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Jeff Lien <Jeff.Lien@hgst.com>
Tested-by: Jeff Lien <Jeff.Lien@hgst.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 45686b61)
Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
Acked-by: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
parent 9801fba1
@@ -839,9 +839,11 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
 		struct request_queue *q)
 {
 	if (ctrl->max_hw_sectors) {
+		u32 max_segments =
+			(ctrl->max_hw_sectors / (ctrl->page_size >> 9)) + 1;
+
 		blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors);
-		blk_queue_max_segments(q,
-			(ctrl->max_hw_sectors / (ctrl->page_size >> 9)) + 1);
+		blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
 	}
 	if (ctrl->stripe_size)
 		blk_queue_chunk_sectors(q, ctrl->stripe_size >> 9);
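
For illustration only (not part of the patch): a minimal standalone sketch of the truncation the commit message describes. The values below are hypothetical, chosen so the 32-bit segment count lands exactly on a multiple of 65536 and therefore collapses to zero when stored in the block layer's unsigned short.

#include <stdio.h>

int main(void)
{
	/* Hypothetical controller values, not taken from the patch. */
	unsigned int max_hw_sectors = 524280;	/* very large MDTS-derived limit */
	unsigned int page_size = 4096;		/* controller page size in bytes */

	/* Same arithmetic as the NVMe driver: sectors per page is page_size >> 9. */
	unsigned int max_segments =
		(max_hw_sectors / (page_size >> 9)) + 1;	/* 65536 */

	/* The block layer stores max_segments in an unsigned short. */
	unsigned short truncated = (unsigned short)max_segments;	/* 0 */

	printf("max_segments=%u truncated=%hu\n", max_segments, truncated);
	return 0;
}

Clamping with min_t(u32, max_segments, USHRT_MAX), as the patch does, keeps the value within the unsigned short range before it reaches blk_queue_max_segments().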