Commit 36869cb9 authored by Linus Torvalds

Merge branch 'for-4.10/block' of git://git.kernel.dk/linux-block

Pull block layer updates from Jens Axboe:
 "This is the main block pull request this series. Contrary to previous
  release, I've kept the core and driver changes in the same branch. We
  always ended up having dependencies between the two for obvious
  reasons, so makes more sense to keep them together. That said, I'll
  probably try and keep more topical branches going forward, especially
  for cycles that end up being as busy as this one.

  The major parts of this pull request is:

   - Improved support for O_DIRECT on block devices, with a small
     private implementation instead of using the pig that is
     fs/direct-io.c. From Christoph.

   - Request completion tracking in a scalable fashion. This is utilized
     by two components in this pull, the new hybrid polling and the
     writeback queue throttling code.

   - Improved support for polling with O_DIRECT, adding a hybrid mode
     that combines pure polling with an initial sleep. From me.

   - Support for automatic throttling of writeback queues on the block
     side. This uses feedback from the device completion latencies to
     scale the queue on the block side up or down. From me.

   - Support for SMR drives in the block layer and for SD. From Hannes
     and Shaun.

   - Multi-connection support for nbd. From Josef.

   - Cleanup of request and bio flags, so we have a clear split between
     which are bio (or rq) private, and which ones are shared. From
     Christoph.

   - A set of patches from Bart, that improve how we handle queue
     stopping and starting in blk-mq.

   - Support for WRITE_ZEROES from Chaitanya.

   - Lightnvm updates from Javier/Matias.

   - Support for FC for the nvme-over-fabrics code. From James Smart.

   - A bunch of fixes from a whole slew of people, too many to name
     here"

* 'for-4.10/block' of git://git.kernel.dk/linux-block: (182 commits)
  blk-stat: fix a few cases of missing batch flushing
  blk-flush: run the queue when inserting blk-mq flush
  elevator: make the rqhash helpers exported
  blk-mq: abstract out blk_mq_dispatch_rq_list() helper
  blk-mq: add blk_mq_start_stopped_hw_queue()
  block: improve handling of the magic discard payload
  blk-wbt: don't throttle discard or write zeroes
  nbd: use dev_err_ratelimited in io path
  nbd: reset the setup task for NBD_CLEAR_SOCK
  nvme-fabrics: Add FC LLDD loopback driver to test FC-NVME
  nvme-fabrics: Add target support for FC transport
  nvme-fabrics: Add host support for FC transport
  nvme-fabrics: Add FC transport LLDD api definitions
  nvme-fabrics: Add FC transport FC-NVME definitions
  nvme-fabrics: Add FC transport error codes to nvme.h
  Add type 0x28 NVME type code to scsi fc headers
  nvme-fabrics: patch target code in prep for FC transport support
  nvme-fabrics: set sqe.command_id in core not transports
  parser: add u64 number parser
  nvme-rdma: align to generic ib_event logging helper
  ...
parents 9439b371 7cd54aa8
...@@ -235,3 +235,45 @@ Description:
write_same_max_bytes is 0, write same is not supported
by the device.
What: /sys/block/<disk>/queue/write_zeroes_max_bytes
Date: November 2016
Contact: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Description:
Devices that support the write zeroes operation can zero out a
range of contiguous blocks on storage with a single request that
carries no data payload. This can be used to optimize writing
zeroes to such devices. write_zeroes_max_bytes indicates how many
bytes can be written in a single write zeroes command. If
write_zeroes_max_bytes is 0, write zeroes is not supported
by the device.
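
For illustration, a minimal userspace sketch (not from this series) of how this limit is typically exercised: the BLKZEROOUT ioctl is serviced by blkdev_issue_zeroout(), which can use WRITE ZEROES when write_zeroes_max_bytes is non-zero and otherwise falls back to WRITE SAME or plain zero-filled writes. The device node and byte range below are placeholders.

/* Zero a byte range on a block device via the BLKZEROOUT ioctl. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

int main(void)
{
	/* Placeholder device; the range must be logical-block aligned. */
	int fd = open("/dev/sdX", O_WRONLY);
	uint64_t range[2] = { 0, 1 << 20 };	/* start and length, in bytes */

	if (fd < 0 || ioctl(fd, BLKZEROOUT, range) < 0) {
		perror("BLKZEROOUT");
		return 1;
	}
	close(fd);
	return 0;
}
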
What: /sys/block/<disk>/queue/zoned
Date: September 2016
Contact: Damien Le Moal <damien.lemoal@hgst.com>
Description:
zoned indicates if the device is a zoned block device
and the zone model of the device if it is indeed zoned.
The possible values indicated by zoned are "none" for
regular block devices and "host-aware" or "host-managed"
for zoned block devices. The characteristics of
host-aware and host-managed zoned block devices are
described in the ZBC (Zoned Block Commands) and ZAC
(Zoned Device ATA Command Set) standards. These standards
also define the "drive-managed" zone model. However,
since drive-managed zoned block devices do not support
zone commands, they will be treated as regular block
devices and zoned will report "none".
What: /sys/block/<disk>/queue/chunk_sectors
Date: September 2016
Contact: Hannes Reinecke <hare@suse.com>
Description:
chunk_sectors has different meaning depending on the type
of the disk. For a RAID device (dm-raid), chunk_sectors
indicates the size in 512B sectors of the RAID volume
stripe segment. For a zoned block device, either
host-aware or host-managed, chunk_sectors indicates the
size in 512B sectors of the zones of the device, with
the possible exception of the last zone of the device,
which may be smaller.
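
For illustration, a small sketch (not from this series) of how a userspace tool might consume the zoned and chunk_sectors attributes; the device name is a placeholder, and chunk_sectors is converted from 512B sectors to bytes.

#include <stdio.h>
#include <stdlib.h>

/* Read a single-line sysfs attribute into buf. */
static int read_attr(const char *path, char *buf, size_t len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	return 0;
}

int main(void)
{
	char model[32], chunk[32];

	if (read_attr("/sys/block/sdX/queue/zoned", model, sizeof(model)) ||
	    read_attr("/sys/block/sdX/queue/chunk_sectors", chunk, sizeof(chunk)))
		return 1;

	printf("zone model: %s", model);	/* "none", "host-aware" or "host-managed" */
	printf("zone size : %llu bytes\n", strtoull(chunk, NULL, 10) * 512ULL);
	return 0;
}
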
...@@ -348,7 +348,7 @@ Drivers can now specify a request prepare function (q->prep_rq_fn) that the
block layer would invoke to pre-build device commands for a given request,
or perform other preparatory processing for the request. This routine is
called by elv_next_request(), i.e. typically just before servicing a request.
(The prepare function would not be called for requests that have REQ_DONTPREP (The prepare function would not be called for requests that have RQF_DONTPREP
enabled)
Aside:
...@@ -553,8 +553,8 @@ struct request {
struct request_list *rl;
}
See the rq_flag_bits definitions for an explanation of the various flags See the req_ops and req_flag_bits definitions for an explanation of the various
available. Some bits are used by the block layer or i/o scheduler. flags available. Some bits are used by the block layer or i/o scheduler.
The behaviour of the various sector counts are almost the same as before,
except that since we have multi-segment bios, current_nr_sectors refers
......
...@@ -240,11 +240,11 @@ All cfq queues doing synchronous sequential IO go on to sync-idle tree.
On this tree we idle on each queue individually.
All synchronous non-sequential queues go on sync-noidle tree. Also any
request which are marked with REQ_NOIDLE go on this service tree. On this synchronous write request which is not marked with REQ_IDLE goes on this
tree we do not idle on individual queues instead idle on the whole group service tree. On this tree we do not idle on individual queues instead idle
of queues or the tree. So if there are 4 queues waiting for IO to dispatch on the whole group of queues or the tree. So if there are 4 queues waiting
we will idle only once last queue has dispatched the IO and there is for IO to dispatch we will idle only once last queue has dispatched the IO
no more IO on this service tree. and there is no more IO on this service tree.
All async writes go on async service tree. There is no idling on async
queues.
...@@ -257,17 +257,17 @@ tree idling provides isolation with buffered write queues on async tree.
FAQ
===
Q1. Why to idle at all on queues marked with REQ_NOIDLE. Q1. Why to idle at all on queues not marked with REQ_IDLE.
A1. We only do tree idle (all queues on sync-noidle tree) on queues marked A1. We only do tree idle (all queues on sync-noidle tree) on queues not marked
with REQ_NOIDLE. This helps in providing isolation with all the sync-idle with REQ_IDLE. This helps in providing isolation with all the sync-idle
queues. Otherwise in presence of many sequential readers, other
synchronous IO might not get fair share of disk.
For example, if there are 10 sequential readers doing IO and they get
100ms each. If a REQ_NOIDLE request comes in, it will be scheduled 100ms each. If a !REQ_IDLE request comes in, it will be scheduled
roughly after 1 second. If after completion of REQ_NOIDLE request we roughly after 1 second. If after completion of !REQ_IDLE request we
do not idle, and after a couple of milli seconds a another REQ_NOIDLE do not idle, and after a couple of milli seconds a another !REQ_IDLE
request comes in, again it will be scheduled after 1second. Repeat it
and notice how a workload can lose its disk share and suffer due to
multiple sequential readers.
...@@ -276,16 +276,16 @@ A1. We only do tree idle (all queues on sync-noidle tree) on queues marked
context of fsync, and later some journaling data is written. Journaling
data comes in only after fsync has finished its IO (atleast for ext4
that seemed to be the case). Now if one decides not to idle on fsync
thread due to REQ_NOIDLE, then next journaling write will not get thread due to !REQ_IDLE, then next journaling write will not get
scheduled for another second. A process doing small fsync, will suffer
badly in presence of multiple sequential readers.
Hence doing tree idling on threads using REQ_NOIDLE flag on requests Hence doing tree idling on threads using !REQ_IDLE flag on requests
provides isolation from multiple sequential readers and at the same
time we do not idle on individual threads.
Q2. When to specify REQ_NOIDLE Q2. When to specify REQ_IDLE
A2. I would think whenever one is doing synchronous write and not expecting A2. I would think whenever one is doing synchronous write and expecting
more writes to be dispatched from same context soon, should be able
to specify REQ_NOIDLE on writes and that probably should work well for to specify REQ_IDLE on writes and that probably should work well for
most of the cases.
...@@ -72,4 +72,4 @@ use_per_node_hctx=[0/1]: Default: 0
queue for each CPU node in the system.
use_lightnvm=[0/1]: Default: 0
Register device with LightNVM. Requires blk-mq to be used. Register device with LightNVM. Requires blk-mq and CONFIG_NVM to be enabled.
...@@ -58,6 +58,20 @@ When read, this file shows the total number of block IO polls and how
many returned success. Writing '0' to this file will disable polling
for this device. Writing any non-zero value will enable this feature.
io_poll_delay (RW)
------------------
If polling is enabled, this controls what kind of polling will be
performed. It defaults to -1, which is classic polling. In this mode,
the CPU will repeatedly ask for completions without giving up any time.
If set to 0, a hybrid polling mode is used, where the kernel will attempt
to make an educated guess at when the IO will complete. Based on this
guess, the kernel will put the process issuing IO to sleep for an amount
of time, before entering a classic poll loop. This mode might be a
little slower than pure classic polling, but it will be more efficient.
If set to a value larger than 0, the kernel will put the process issuing
IO to sleep for this many microseconds before entering classic
polling.
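
For illustration, a sketch (not from this series) of a polled O_DIRECT read; it assumes a libc that exposes preadv2(), and the device node and sizes are placeholders. RWF_HIPRI asks the kernel to poll for the completion, and io_poll_delay then selects the polling mode described above.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

#ifndef RWF_HIPRI
#define RWF_HIPRI 0x00000001	/* value from the uapi headers, for older libcs */
#endif

int main(void)
{
	int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	void *buf;

	if (fd < 0 || posix_memalign(&buf, 4096, 4096))
		return 1;

	struct iovec iov = { .iov_base = buf, .iov_len = 4096 };
	/* Polled completion; requires io_poll to be enabled on the queue. */
	ssize_t ret = preadv2(fd, &iov, 1, 0, RWF_HIPRI);

	printf("polled read returned %zd\n", ret);
	close(fd);
	return ret < 0;
}
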
iostats (RW)
-------------
This file is used to control (on/off) the iostats accounting of the
...@@ -169,5 +183,14 @@ This is the number of bytes the device can write in a single write-same
command. A value of '0' means write-same is not supported by this
device.
wb_lat_usec (RW)
----------------
If the device is registered for writeback throttling, then this file shows
the target minimum read latency. If this latency is exceeded in a given
window of time (see wb_window_usec), then the writeback throttling will start
scaling back writes. Writing a value of '0' to this file disables the
feature. Writing a value of '-1' to this file resets the value to the
default setting.
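
For illustration, a sketch (not from this series) that sets a writeback latency target from userspace; the device name and the 75000 usec value are arbitrary examples.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a string to a sysfs attribute. */
static int write_attr(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);
	ssize_t ret;

	if (fd < 0)
		return -1;
	ret = write(fd, val, strlen(val));
	close(fd);
	return ret < 0 ? -1 : 0;
}

int main(void)
{
	/* Target a 75ms minimum read latency before writeback is scaled back. */
	if (write_attr("/sys/block/nvme0n1/queue/wb_lat_usec", "75000"))
		perror("wb_lat_usec");

	/* Writing "0" would disable the feature; "-1" restores the default. */
	return 0;
}
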
Jens Axboe <jens.axboe@oracle.com>, February 2009
...@@ -8766,6 +8766,16 @@ L: linux-nvme@lists.infradead.org
S: Supported
F: drivers/nvme/target/
NVM EXPRESS FC TRANSPORT DRIVERS
M: James Smart <james.smart@broadcom.com>
L: linux-nvme@lists.infradead.org
S: Supported
F: include/linux/nvme-fc.h
F: include/linux/nvme-fc-driver.h
F: drivers/nvme/host/fc.c
F: drivers/nvme/target/fc.c
F: drivers/nvme/target/fcloop.c
NVMEM FRAMEWORK
M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
M: Maxime Ripard <maxime.ripard@free-electrons.com>
...@@ -9656,8 +9666,8 @@ F: arch/mips/boot/dts/pistachio/
F: arch/mips/configs/pistachio*_defconfig
PKTCDVD DRIVER
M: Jiri Kosina <jikos@kernel.org> S: Orphan
S: Maintained M: linux-block@vger.kernel.org
F: drivers/block/pktcdvd.c
F: include/linux/pktcdvd.h
F: include/uapi/linux/pktcdvd.h
......
...@@ -25,7 +25,6 @@ ...@@ -25,7 +25,6 @@
#include <linux/string.h> #include <linux/string.h>
#include <linux/types.h> #include <linux/types.h>
#include <linux/blk_types.h>
#include <asm/byteorder.h> #include <asm/byteorder.h>
#include <asm/memory.h> #include <asm/memory.h>
#include <asm-generic/pci_iomap.h> #include <asm-generic/pci_iomap.h>
......
...@@ -22,7 +22,6 @@ ...@@ -22,7 +22,6 @@
#ifdef __KERNEL__ #ifdef __KERNEL__
#include <linux/types.h> #include <linux/types.h>
#include <linux/blk_types.h>
#include <asm/byteorder.h> #include <asm/byteorder.h>
#include <asm/barrier.h> #include <asm/barrier.h>
......
...@@ -5,6 +5,7 @@ menuconfig BLOCK ...@@ -5,6 +5,7 @@ menuconfig BLOCK
bool "Enable the block layer" if EXPERT bool "Enable the block layer" if EXPERT
default y default y
select SBITMAP select SBITMAP
select SRCU
help help
Provide block layer support for the kernel. Provide block layer support for the kernel.
...@@ -89,6 +90,14 @@ config BLK_DEV_INTEGRITY ...@@ -89,6 +90,14 @@ config BLK_DEV_INTEGRITY
T10/SCSI Data Integrity Field or the T13/ATA External Path T10/SCSI Data Integrity Field or the T13/ATA External Path
Protection. If in doubt, say N. Protection. If in doubt, say N.
config BLK_DEV_ZONED
bool "Zoned block device support"
---help---
Block layer zoned block device support. This option enables
support for ZAC/ZBC host-managed and host-aware zoned block devices.
Say yes here if you have a ZAC or ZBC storage device.
config BLK_DEV_THROTTLING config BLK_DEV_THROTTLING
bool "Block layer bio throttling support" bool "Block layer bio throttling support"
depends on BLK_CGROUP=y depends on BLK_CGROUP=y
...@@ -112,6 +121,32 @@ config BLK_CMDLINE_PARSER ...@@ -112,6 +121,32 @@ config BLK_CMDLINE_PARSER
See Documentation/block/cmdline-partition.txt for more information. See Documentation/block/cmdline-partition.txt for more information.
config BLK_WBT
bool "Enable support for block device writeback throttling"
default n
---help---
Enabling this option allows the block layer to throttle buffered
background writeback from the VM, making it smoother and less
disruptive to foreground operations. The throttling is done
dynamically, using an algorithm loosely based on CoDel that factors
in the realtime performance of the disk.
config BLK_WBT_SQ
bool "Single queue writeback throttling"
default n
depends on BLK_WBT
---help---
Enable writeback throttling by default on legacy single queue devices
config BLK_WBT_MQ
bool "Multiqueue writeback throttling"
default y
depends on BLK_WBT
---help---
Enable writeback throttling by default on multiqueue devices.
Multiqueue currently doesn't have support for IO scheduling, so
enabling this option is recommended.
menu "Partition Types" menu "Partition Types"
source "block/partitions/Kconfig" source "block/partitions/Kconfig"
......
...@@ -5,7 +5,7 @@ ...@@ -5,7 +5,7 @@
obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-tag.o blk-sysfs.o \ obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-tag.o blk-sysfs.o \
blk-flush.o blk-settings.o blk-ioc.o blk-map.o \ blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \ blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \
blk-lib.o blk-mq.o blk-mq-tag.o \ blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \
blk-mq-sysfs.o blk-mq-cpumap.o ioctl.o \ blk-mq-sysfs.o blk-mq-cpumap.o ioctl.o \
genhd.o scsi_ioctl.o partition-generic.o ioprio.o \ genhd.o scsi_ioctl.o partition-generic.o ioprio.o \
badblocks.o partitions/ badblocks.o partitions/
...@@ -23,3 +23,5 @@ obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o ...@@ -23,3 +23,5 @@ obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o
obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o t10-pi.o obj-$(CONFIG_BLK_DEV_INTEGRITY) += bio-integrity.o blk-integrity.o t10-pi.o
obj-$(CONFIG_BLK_MQ_PCI) += blk-mq-pci.o obj-$(CONFIG_BLK_MQ_PCI) += blk-mq-pci.o
obj-$(CONFIG_BLK_DEV_ZONED) += blk-zoned.o
obj-$(CONFIG_BLK_WBT) += blk-wbt.o
...@@ -172,7 +172,7 @@ bool bio_integrity_enabled(struct bio *bio) ...@@ -172,7 +172,7 @@ bool bio_integrity_enabled(struct bio *bio)
{ {
struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev); struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev);
if (!bio_is_rw(bio)) if (bio_op(bio) != REQ_OP_READ && bio_op(bio) != REQ_OP_WRITE)
return false; return false;
/* Already protected? */ /* Already protected? */
......
...@@ -270,11 +270,15 @@ static void bio_free(struct bio *bio) ...@@ -270,11 +270,15 @@ static void bio_free(struct bio *bio)
} }
} }
void bio_init(struct bio *bio) void bio_init(struct bio *bio, struct bio_vec *table,
unsigned short max_vecs)
{ {
memset(bio, 0, sizeof(*bio)); memset(bio, 0, sizeof(*bio));
atomic_set(&bio->__bi_remaining, 1); atomic_set(&bio->__bi_remaining, 1);
atomic_set(&bio->__bi_cnt, 1); atomic_set(&bio->__bi_cnt, 1);
bio->bi_io_vec = table;
bio->bi_max_vecs = max_vecs;
} }
EXPORT_SYMBOL(bio_init); EXPORT_SYMBOL(bio_init);
...@@ -480,7 +484,7 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs) ...@@ -480,7 +484,7 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)
return NULL; return NULL;
bio = p + front_pad; bio = p + front_pad;
bio_init(bio); bio_init(bio, NULL, 0);
if (nr_iovecs > inline_vecs) { if (nr_iovecs > inline_vecs) {
unsigned long idx = 0; unsigned long idx = 0;
...@@ -670,6 +674,7 @@ struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask, ...@@ -670,6 +674,7 @@ struct bio *bio_clone_bioset(struct bio *bio_src, gfp_t gfp_mask,
switch (bio_op(bio)) { switch (bio_op(bio)) {
case REQ_OP_DISCARD: case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE: case REQ_OP_SECURE_ERASE:
case REQ_OP_WRITE_ZEROES:
break; break;
case REQ_OP_WRITE_SAME: case REQ_OP_WRITE_SAME:
bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0]; bio->bi_io_vec[bio->bi_vcnt++] = bio_src->bi_io_vec[0];
...@@ -847,6 +852,55 @@ int bio_add_page(struct bio *bio, struct page *page, ...@@ -847,6 +852,55 @@ int bio_add_page(struct bio *bio, struct page *page,
} }
EXPORT_SYMBOL(bio_add_page); EXPORT_SYMBOL(bio_add_page);
/**
* bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio
* @bio: bio to add pages to
* @iter: iov iterator describing the region to be mapped
*
* Pins as many pages from *iter as fit and appends them to @bio's bvec array. The
* pages will have to be released using put_page() when done.
*/
int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
{
unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;
struct bio_vec *bv = bio->bi_io_vec + bio->bi_vcnt;
struct page **pages = (struct page **)bv;
size_t offset, diff;
ssize_t size;
size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
if (unlikely(size <= 0))
return size ? size : -EFAULT;
nr_pages = (size + offset + PAGE_SIZE - 1) / PAGE_SIZE;
/*
* Deep magic below: We need to walk the pinned pages backwards
* because we are abusing the space allocated for the bio_vecs
* for the page array. Because the bio_vecs are larger than the
* page pointers by definition this will always work. But it also
* means we can't use bio_add_page, so any changes to its semantics
* need to be reflected here as well.
*/
bio->bi_iter.bi_size += size;
bio->bi_vcnt += nr_pages;
diff = (nr_pages * PAGE_SIZE - offset) - size;
while (nr_pages--) {
bv[nr_pages].bv_page = pages[nr_pages];
bv[nr_pages].bv_len = PAGE_SIZE;
bv[nr_pages].bv_offset = 0;
}
bv[0].bv_offset += offset;
bv[0].bv_len -= offset;
if (diff)
bv[bio->bi_vcnt - 1].bv_len -= diff;
iov_iter_advance(iter, size);
return 0;
}
EXPORT_SYMBOL_GPL(bio_iov_iter_get_pages);
struct submit_bio_ret { struct submit_bio_ret {
struct completion event; struct completion event;
int error; int error;
...@@ -1786,15 +1840,7 @@ struct bio *bio_split(struct bio *bio, int sectors, ...@@ -1786,15 +1840,7 @@ struct bio *bio_split(struct bio *bio, int sectors,
BUG_ON(sectors <= 0); BUG_ON(sectors <= 0);
BUG_ON(sectors >= bio_sectors(bio)); BUG_ON(sectors >= bio_sectors(bio));
/*
* Discards need a mutable bio_vec to accommodate the payload
* required by the DSM TRIM and UNMAP commands.
*/
if (bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_SECURE_ERASE)
split = bio_clone_bioset(bio, gfp, bs);
else
split = bio_clone_fast(bio, gfp, bs); split = bio_clone_fast(bio, gfp, bs);
if (!split) if (!split)
return NULL; return NULL;
......
...@@ -185,7 +185,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, ...@@ -185,7 +185,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
} }
wb_congested = wb_congested_get_create(&q->backing_dev_info, wb_congested = wb_congested_get_create(&q->backing_dev_info,
blkcg->css.id, GFP_NOWAIT); blkcg->css.id,
GFP_NOWAIT | __GFP_NOWARN);
if (!wb_congested) { if (!wb_congested) {
ret = -ENOMEM; ret = -ENOMEM;
goto err_put_css; goto err_put_css;
...@@ -193,7 +194,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, ...@@ -193,7 +194,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
/* allocate */ /* allocate */
if (!new_blkg) { if (!new_blkg) {
new_blkg = blkg_alloc(blkcg, q, GFP_NOWAIT); new_blkg = blkg_alloc(blkcg, q, GFP_NOWAIT | __GFP_NOWARN);
if (unlikely(!new_blkg)) { if (unlikely(!new_blkg)) {
ret = -ENOMEM; ret = -ENOMEM;
goto err_put_congested; goto err_put_congested;
...@@ -1022,7 +1023,7 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css) ...@@ -1022,7 +1023,7 @@ blkcg_css_alloc(struct cgroup_subsys_state *parent_css)
} }
spin_lock_init(&blkcg->lock); spin_lock_init(&blkcg->lock);
INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT); INIT_RADIX_TREE(&blkcg->blkg_tree, GFP_NOWAIT | __GFP_NOWARN);
INIT_HLIST_HEAD(&blkcg->blkg_list); INIT_HLIST_HEAD(&blkcg->blkg_list);
#ifdef CONFIG_CGROUP_WRITEBACK #ifdef CONFIG_CGROUP_WRITEBACK
INIT_LIST_HEAD(&blkcg->cgwb_list); INIT_LIST_HEAD(&blkcg->cgwb_list);
...@@ -1240,7 +1241,7 @@ int blkcg_activate_policy(struct request_queue *q, ...@@ -1240,7 +1241,7 @@ int blkcg_activate_policy(struct request_queue *q,
if (blkg->pd[pol->plid]) if (blkg->pd[pol->plid])
continue; continue;
pd = pol->pd_alloc_fn(GFP_NOWAIT, q->node); pd = pol->pd_alloc_fn(GFP_NOWAIT | __GFP_NOWARN, q->node);
if (!pd) if (!pd)
swap(pd, pd_prealloc); swap(pd, pd_prealloc);
if (!pd) { if (!pd) {
......
...@@ -72,7 +72,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk, ...@@ -72,7 +72,7 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
spin_lock_irq(q->queue_lock); spin_lock_irq(q->queue_lock);
if (unlikely(blk_queue_dying(q))) { if (unlikely(blk_queue_dying(q))) {
rq->cmd_flags |= REQ_QUIET; rq->rq_flags |= RQF_QUIET;
rq->errors = -ENXIO; rq->errors = -ENXIO;
__blk_end_request_all(rq, rq->errors); __blk_end_request_all(rq, rq->errors);
spin_unlock_irq(q->queue_lock); spin_unlock_irq(q->queue_lock);
......
...@@ -56,7 +56,7 @@ ...@@ -56,7 +56,7 @@
* Once while executing DATA and again after the whole sequence is * Once while executing DATA and again after the whole sequence is
* complete. The first completion updates the contained bio but doesn't * complete. The first completion updates the contained bio but doesn't
* finish it so that the bio submitter is notified only after the whole * finish it so that the bio submitter is notified only after the whole
* sequence is complete. This is implemented by testing REQ_FLUSH_SEQ in * sequence is complete. This is implemented by testing RQF_FLUSH_SEQ in
* req_bio_endio(). * req_bio_endio().
* *
* The above peculiarity requires that each FLUSH/FUA request has only one * The above peculiarity requires that each FLUSH/FUA request has only one
...@@ -127,17 +127,14 @@ static void blk_flush_restore_request(struct request *rq) ...@@ -127,17 +127,14 @@ static void blk_flush_restore_request(struct request *rq)
rq->bio = rq->biotail; rq->bio = rq->biotail;
/* make @rq a normal request */ /* make @rq a normal request */
rq->cmd_flags &= ~REQ_FLUSH_SEQ; rq->rq_flags &= ~RQF_FLUSH_SEQ;
rq->end_io = rq->flush.saved_end_io; rq->end_io = rq->flush.saved_end_io;
} }
static bool blk_flush_queue_rq(struct request *rq, bool add_front) static bool blk_flush_queue_rq(struct request *rq, bool add_front)
{ {
if (rq->q->mq_ops) { if (rq->q->mq_ops) {
struct request_queue *q = rq->q; blk_mq_add_to_requeue_list(rq, add_front, true);
blk_mq_add_to_requeue_list(rq, add_front);
blk_mq_kick_requeue_list(q);
return false; return false;
} else { } else {
if (add_front) if (add_front)
...@@ -330,7 +327,8 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq) ...@@ -330,7 +327,8 @@ static bool blk_kick_flush(struct request_queue *q, struct blk_flush_queue *fq)
} }
flush_rq->cmd_type = REQ_TYPE_FS; flush_rq->cmd_type = REQ_TYPE_FS;
req_set_op_attrs(flush_rq, REQ_OP_FLUSH, WRITE_FLUSH | REQ_FLUSH_SEQ); flush_rq->cmd_flags = REQ_OP_FLUSH | REQ_PREFLUSH;
flush_rq->rq_flags |= RQF_FLUSH_SEQ;
flush_rq->rq_disk = first_rq->rq_disk; flush_rq->rq_disk = first_rq->rq_disk;
flush_rq->end_io = flush_end_io; flush_rq->end_io = flush_end_io;
...@@ -368,7 +366,7 @@ static void flush_data_end_io(struct request *rq, int error) ...@@ -368,7 +366,7 @@ static void flush_data_end_io(struct request *rq, int error)
elv_completed_request(q, rq); elv_completed_request(q, rq);
/* for avoiding double accounting */ /* for avoiding double accounting */
rq->cmd_flags &= ~REQ_STARTED; rq->rq_flags &= ~RQF_STARTED;
/* /*
* After populating an empty queue, kick it to avoid stall. Read * After populating an empty queue, kick it to avoid stall. Read
...@@ -425,6 +423,13 @@ void blk_insert_flush(struct request *rq) ...@@ -425,6 +423,13 @@ void blk_insert_flush(struct request *rq)
if (!(fflags & (1UL << QUEUE_FLAG_FUA))) if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
rq->cmd_flags &= ~REQ_FUA; rq->cmd_flags &= ~REQ_FUA;
/*
* REQ_PREFLUSH|REQ_FUA implies REQ_SYNC, so if we clear any
* of those flags, we have to set REQ_SYNC to avoid skewing
* the request accounting.
*/
rq->cmd_flags |= REQ_SYNC;
/* /*
* An empty flush handed down from a stacking driver may * An empty flush handed down from a stacking driver may
* translate into nothing if the underlying device does not * translate into nothing if the underlying device does not
...@@ -449,7 +454,7 @@ void blk_insert_flush(struct request *rq) ...@@ -449,7 +454,7 @@ void blk_insert_flush(struct request *rq)
if ((policy & REQ_FSEQ_DATA) && if ((policy & REQ_FSEQ_DATA) &&
!(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) { !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
if (q->mq_ops) { if (q->mq_ops) {
blk_mq_insert_request(rq, false, false, true); blk_mq_insert_request(rq, false, true, false);
} else } else
list_add_tail(&rq->queuelist, &q->queue_head); list_add_tail(&rq->queuelist, &q->queue_head);
return; return;
...@@ -461,7 +466,7 @@ void blk_insert_flush(struct request *rq) ...@@ -461,7 +466,7 @@ void blk_insert_flush(struct request *rq)
*/ */
memset(&rq->flush, 0, sizeof(rq->flush)); memset(&rq->flush, 0, sizeof(rq->flush));
INIT_LIST_HEAD(&rq->flush.list); INIT_LIST_HEAD(&rq->flush.list);
rq->cmd_flags |= REQ_FLUSH_SEQ; rq->rq_flags |= RQF_FLUSH_SEQ;
rq->flush.saved_end_io = rq->end_io; /* Usually NULL */ rq->flush.saved_end_io = rq->end_io; /* Usually NULL */
if (q->mq_ops) { if (q->mq_ops) {
rq->end_io = mq_flush_data_end_io; rq->end_io = mq_flush_data_end_io;
...@@ -513,7 +518,7 @@ int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask, ...@@ -513,7 +518,7 @@ int blkdev_issue_flush(struct block_device *bdev, gfp_t gfp_mask,
bio = bio_alloc(gfp_mask, 0); bio = bio_alloc(gfp_mask, 0);
bio->bi_bdev = bdev; bio->bi_bdev = bdev;
bio_set_op_attrs(bio, REQ_OP_WRITE, WRITE_FLUSH); bio->bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
ret = submit_bio_wait(bio); ret = submit_bio_wait(bio);
......
...@@ -29,7 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, ...@@ -29,7 +29,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
struct request_queue *q = bdev_get_queue(bdev); struct request_queue *q = bdev_get_queue(bdev);
struct bio *bio = *biop; struct bio *bio = *biop;
unsigned int granularity; unsigned int granularity;
enum req_op op; unsigned int op;
int alignment; int alignment;
sector_t bs_mask; sector_t bs_mask;
...@@ -80,7 +80,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector, ...@@ -80,7 +80,7 @@ int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
req_sects = end_sect - sector; req_sects = end_sect - sector;
} }
bio = next_bio(bio, 1, gfp_mask); bio = next_bio(bio, 0, gfp_mask);
bio->bi_iter.bi_sector = sector; bio->bi_iter.bi_sector = sector;
bio->bi_bdev = bdev; bio->bi_bdev = bdev;
bio_set_op_attrs(bio, op, 0); bio_set_op_attrs(bio, op, 0);
...@@ -137,24 +137,24 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector, ...@@ -137,24 +137,24 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
EXPORT_SYMBOL(blkdev_issue_discard); EXPORT_SYMBOL(blkdev_issue_discard);
/** /**
* blkdev_issue_write_same - queue a write same operation * __blkdev_issue_write_same - generate number of bios with same page
* @bdev: target blockdev * @bdev: target blockdev
* @sector: start sector * @sector: start sector
* @nr_sects: number of sectors to write * @nr_sects: number of sectors to write
* @gfp_mask: memory allocation flags (for bio_alloc) * @gfp_mask: memory allocation flags (for bio_alloc)
* @page: page containing data to write * @page: page containing data to write
* @biop: pointer to anchor bio
* *
* Description: * Description:
* Issue a write same request for the sectors in question. * Generate and issue number of bios(REQ_OP_WRITE_SAME) with same page.
*/ */
int blkdev_issue_write_same(struct block_device *bdev, sector_t sector, static int __blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, sector_t nr_sects, gfp_t gfp_mask, struct page *page,
struct page *page) struct bio **biop)
{ {
struct request_queue *q = bdev_get_queue(bdev); struct request_queue *q = bdev_get_queue(bdev);
unsigned int max_write_same_sectors; unsigned int max_write_same_sectors;
struct bio *bio = NULL; struct bio *bio = *biop;
int ret = 0;
sector_t bs_mask; sector_t bs_mask;
if (!q) if (!q)
...@@ -164,6 +164,9 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector, ...@@ -164,6 +164,9 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
if ((sector | nr_sects) & bs_mask) if ((sector | nr_sects) & bs_mask)
return -EINVAL; return -EINVAL;
if (!bdev_write_same(bdev))
return -EOPNOTSUPP;
/* Ensure that max_write_same_sectors doesn't overflow bi_size */ /* Ensure that max_write_same_sectors doesn't overflow bi_size */
max_write_same_sectors = UINT_MAX >> 9; max_write_same_sectors = UINT_MAX >> 9;
...@@ -185,32 +188,112 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector, ...@@ -185,32 +188,112 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
bio->bi_iter.bi_size = nr_sects << 9; bio->bi_iter.bi_size = nr_sects << 9;
nr_sects = 0; nr_sects = 0;
} }
cond_resched();
} }
if (bio) { *biop = bio;
return 0;
}
/**
* blkdev_issue_write_same - queue a write same operation
* @bdev: target blockdev
* @sector: start sector
* @nr_sects: number of sectors to write
* @gfp_mask: memory allocation flags (for bio_alloc)
* @page: page containing data
*
* Description:
* Issue a write same request for the sectors in question.
*/
int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask,
struct page *page)
{
struct bio *bio = NULL;
struct blk_plug plug;
int ret;
blk_start_plug(&plug);
ret = __blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask, page,
&bio);
if (ret == 0 && bio) {
ret = submit_bio_wait(bio); ret = submit_bio_wait(bio);
bio_put(bio); bio_put(bio);
} }
blk_finish_plug(&plug);
return ret; return ret;
} }
EXPORT_SYMBOL(blkdev_issue_write_same); EXPORT_SYMBOL(blkdev_issue_write_same);
/** /**
* blkdev_issue_zeroout - generate number of zero filed write bios * __blkdev_issue_write_zeroes - generate number of bios with WRITE ZEROES
* @bdev: blockdev to issue * @bdev: blockdev to issue
* @sector: start sector * @sector: start sector
* @nr_sects: number of sectors to write * @nr_sects: number of sectors to write
* @gfp_mask: memory allocation flags (for bio_alloc) * @gfp_mask: memory allocation flags (for bio_alloc)
* @biop: pointer to anchor bio
* *
* Description: * Description:
* Generate and issue number of bios with zerofiled pages. * Generate and issue number of bios(REQ_OP_WRITE_ZEROES) with zerofiled pages.
*/ */
static int __blkdev_issue_write_zeroes(struct block_device *bdev,
sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
struct bio **biop)
{
struct bio *bio = *biop;
unsigned int max_write_zeroes_sectors;
struct request_queue *q = bdev_get_queue(bdev);
if (!q)
return -ENXIO;
/* Ensure that max_write_zeroes_sectors doesn't overflow bi_size */
max_write_zeroes_sectors = bdev_write_zeroes_sectors(bdev);
if (max_write_zeroes_sectors == 0)
return -EOPNOTSUPP;
while (nr_sects) {
bio = next_bio(bio, 0, gfp_mask);
bio->bi_iter.bi_sector = sector;
bio->bi_bdev = bdev;
bio_set_op_attrs(bio, REQ_OP_WRITE_ZEROES, 0);
if (nr_sects > max_write_zeroes_sectors) {
bio->bi_iter.bi_size = max_write_zeroes_sectors << 9;
nr_sects -= max_write_zeroes_sectors;
sector += max_write_zeroes_sectors;
} else {
bio->bi_iter.bi_size = nr_sects << 9;
nr_sects = 0;
}
cond_resched();
}
*biop = bio;
return 0;
}
static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector, /**
sector_t nr_sects, gfp_t gfp_mask) * __blkdev_issue_zeroout - generate number of zero filed write bios
* @bdev: blockdev to issue
* @sector: start sector
* @nr_sects: number of sectors to write
* @gfp_mask: memory allocation flags (for bio_alloc)
* @biop: pointer to anchor bio
* @discard: discard flag
*
* Description:
* Generate and issue a number of bios with zero-filled pages.
*/
int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, struct bio **biop,
bool discard)
{ {
int ret; int ret;
struct bio *bio = NULL; int bi_size = 0;
struct bio *bio = *biop;
unsigned int sz; unsigned int sz;
sector_t bs_mask; sector_t bs_mask;
...@@ -218,6 +301,24 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector, ...@@ -218,6 +301,24 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
if ((sector | nr_sects) & bs_mask) if ((sector | nr_sects) & bs_mask)
return -EINVAL; return -EINVAL;
if (discard) {
ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask,
BLKDEV_DISCARD_ZERO, biop);
if (ret == 0 || (ret && ret != -EOPNOTSUPP))
goto out;
}
ret = __blkdev_issue_write_zeroes(bdev, sector, nr_sects, gfp_mask,
biop);
if (ret == 0 || (ret && ret != -EOPNOTSUPP))
goto out;
ret = __blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask,
ZERO_PAGE(0), biop);
if (ret == 0 || (ret && ret != -EOPNOTSUPP))
goto out;
ret = 0;
while (nr_sects != 0) { while (nr_sects != 0) {
bio = next_bio(bio, min(nr_sects, (sector_t)BIO_MAX_PAGES), bio = next_bio(bio, min(nr_sects, (sector_t)BIO_MAX_PAGES),
gfp_mask); gfp_mask);
...@@ -227,21 +328,20 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector, ...@@ -227,21 +328,20 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
while (nr_sects != 0) { while (nr_sects != 0) {
sz = min((sector_t) PAGE_SIZE >> 9 , nr_sects); sz = min((sector_t) PAGE_SIZE >> 9 , nr_sects);
ret = bio_add_page(bio, ZERO_PAGE(0), sz << 9, 0); bi_size = bio_add_page(bio, ZERO_PAGE(0), sz << 9, 0);
nr_sects -= ret >> 9; nr_sects -= bi_size >> 9;
sector += ret >> 9; sector += bi_size >> 9;
if (ret < (sz << 9)) if (bi_size < (sz << 9))
break; break;
} }
cond_resched();
} }
if (bio) { *biop = bio;
ret = submit_bio_wait(bio); out:
bio_put(bio);
return ret; return ret;
}
return 0;
} }
EXPORT_SYMBOL(__blkdev_issue_zeroout);
/** /**
* blkdev_issue_zeroout - zero-fill a block range * blkdev_issue_zeroout - zero-fill a block range
...@@ -258,26 +358,27 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector, ...@@ -258,26 +358,27 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
* the discard request fail, if the discard flag is not set, or if * the discard request fail, if the discard flag is not set, or if
* discard_zeroes_data is not supported, this function will resort to * discard_zeroes_data is not supported, this function will resort to
* zeroing the blocks manually, thus provisioning (allocating, * zeroing the blocks manually, thus provisioning (allocating,
* anchoring) them. If the block device supports the WRITE SAME command * anchoring) them. If the block device supports WRITE ZEROES or WRITE SAME
* blkdev_issue_zeroout() will use it to optimize the process of * command(s), blkdev_issue_zeroout() will use it to optimize the process of
* clearing the block range. Otherwise the zeroing will be performed * clearing the block range. Otherwise the zeroing will be performed
* using regular WRITE calls. * using regular WRITE calls.
*/ */
int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector, int blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask, bool discard) sector_t nr_sects, gfp_t gfp_mask, bool discard)
{ {
if (discard) { int ret;
if (!blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask, struct bio *bio = NULL;
BLKDEV_DISCARD_ZERO)) struct blk_plug plug;
return 0;
}
if (bdev_write_same(bdev) && blk_start_plug(&plug);
blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask, ret = __blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask,
ZERO_PAGE(0)) == 0) &bio, discard);
return 0; if (ret == 0 && bio) {
ret = submit_bio_wait(bio);
bio_put(bio);
}
blk_finish_plug(&plug);
return __blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask); return ret;
} }
EXPORT_SYMBOL(blkdev_issue_zeroout); EXPORT_SYMBOL(blkdev_issue_zeroout);
...@@ -16,6 +16,8 @@ ...@@ -16,6 +16,8 @@
int blk_rq_append_bio(struct request *rq, struct bio *bio) int blk_rq_append_bio(struct request *rq, struct bio *bio)
{ {
if (!rq->bio) { if (!rq->bio) {
rq->cmd_flags &= REQ_OP_MASK;
rq->cmd_flags |= (bio->bi_opf & REQ_OP_MASK);
blk_rq_bio_prep(rq->q, rq, bio); blk_rq_bio_prep(rq->q, rq, bio);
} else { } else {
if (!ll_back_merge_fn(rq->q, rq, bio)) if (!ll_back_merge_fn(rq->q, rq, bio))
...@@ -138,7 +140,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq, ...@@ -138,7 +140,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
} while (iov_iter_count(&i)); } while (iov_iter_count(&i));
if (!bio_flagged(bio, BIO_USER_MAPPED)) if (!bio_flagged(bio, BIO_USER_MAPPED))
rq->cmd_flags |= REQ_COPY_USER; rq->rq_flags |= RQF_COPY_USER;
return 0; return 0;
unmap_rq: unmap_rq:
...@@ -236,7 +238,7 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf, ...@@ -236,7 +238,7 @@ int blk_rq_map_kern(struct request_queue *q, struct request *rq, void *kbuf,
bio_set_op_attrs(bio, REQ_OP_WRITE, 0); bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
if (do_copy) if (do_copy)
rq->cmd_flags |= REQ_COPY_USER; rq->rq_flags |= RQF_COPY_USER;
ret = blk_rq_append_bio(rq, bio); ret = blk_rq_append_bio(rq, bio);
if (unlikely(ret)) { if (unlikely(ret)) {
......
...@@ -199,6 +199,10 @@ void blk_queue_split(struct request_queue *q, struct bio **bio, ...@@ -199,6 +199,10 @@ void blk_queue_split(struct request_queue *q, struct bio **bio,
case REQ_OP_SECURE_ERASE: case REQ_OP_SECURE_ERASE:
split = blk_bio_discard_split(q, *bio, bs, &nsegs); split = blk_bio_discard_split(q, *bio, bs, &nsegs);
break; break;
case REQ_OP_WRITE_ZEROES:
split = NULL;
nsegs = (*bio)->bi_phys_segments;
break;
case REQ_OP_WRITE_SAME: case REQ_OP_WRITE_SAME:
split = blk_bio_write_same_split(q, *bio, bs, &nsegs); split = blk_bio_write_same_split(q, *bio, bs, &nsegs);
break; break;
...@@ -237,15 +241,14 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q, ...@@ -237,15 +241,14 @@ static unsigned int __blk_recalc_rq_segments(struct request_queue *q,
if (!bio) if (!bio)
return 0; return 0;
/* switch (bio_op(bio)) {
* This should probably be returning 0, but blk_add_request_payload() case REQ_OP_DISCARD:
* (Christoph!!!!) case REQ_OP_SECURE_ERASE:
*/ case REQ_OP_WRITE_ZEROES:
if (bio_op(bio) == REQ_OP_DISCARD || bio_op(bio) == REQ_OP_SECURE_ERASE) return 0;
return 1; case REQ_OP_WRITE_SAME:
if (bio_op(bio) == REQ_OP_WRITE_SAME)
return 1; return 1;
}
fbio = bio; fbio = bio;
cluster = blk_queue_cluster(q); cluster = blk_queue_cluster(q);
...@@ -402,38 +405,21 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec, ...@@ -402,38 +405,21 @@ __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec,
*bvprv = *bvec; *bvprv = *bvec;
} }
static inline int __blk_bvec_map_sg(struct request_queue *q, struct bio_vec bv,
struct scatterlist *sglist, struct scatterlist **sg)
{
*sg = sglist;
sg_set_page(*sg, bv.bv_page, bv.bv_len, bv.bv_offset);
return 1;
}
static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio, static int __blk_bios_map_sg(struct request_queue *q, struct bio *bio,
struct scatterlist *sglist, struct scatterlist *sglist,
struct scatterlist **sg) struct scatterlist **sg)
{ {
struct bio_vec bvec, bvprv = { NULL }; struct bio_vec bvec, bvprv = { NULL };
struct bvec_iter iter; struct bvec_iter iter;
int nsegs, cluster; int cluster = blk_queue_cluster(q), nsegs = 0;
nsegs = 0;
cluster = blk_queue_cluster(q);
switch (bio_op(bio)) {
case REQ_OP_DISCARD:
case REQ_OP_SECURE_ERASE:
/*
* This is a hack - drivers should be neither modifying the
* biovec, nor relying on bi_vcnt - but because of
* blk_add_request_payload(), a discard bio may or may not have
* a payload we need to set up here (thank you Christoph) and
* bi_vcnt is really the only way of telling if we need to.
*/
if (!bio->bi_vcnt)
return 0;
/* Fall through */
case REQ_OP_WRITE_SAME:
*sg = sglist;
bvec = bio_iovec(bio);
sg_set_page(*sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
return 1;
default:
break;
}
for_each_bio(bio) for_each_bio(bio)
bio_for_each_segment(bvec, bio, iter) bio_for_each_segment(bvec, bio, iter)
...@@ -453,10 +439,14 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq, ...@@ -453,10 +439,14 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
struct scatterlist *sg = NULL; struct scatterlist *sg = NULL;
int nsegs = 0; int nsegs = 0;
if (rq->bio) if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
nsegs = __blk_bvec_map_sg(q, rq->special_vec, sglist, &sg);
else if (rq->bio && bio_op(rq->bio) == REQ_OP_WRITE_SAME)
nsegs = __blk_bvec_map_sg(q, bio_iovec(rq->bio), sglist, &sg);
else if (rq->bio)
nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg); nsegs = __blk_bios_map_sg(q, rq->bio, sglist, &sg);
if (unlikely(rq->cmd_flags & REQ_COPY_USER) && if (unlikely(rq->rq_flags & RQF_COPY_USER) &&
(blk_rq_bytes(rq) & q->dma_pad_mask)) { (blk_rq_bytes(rq) & q->dma_pad_mask)) {
unsigned int pad_len = unsigned int pad_len =
(q->dma_pad_mask & ~blk_rq_bytes(rq)) + 1; (q->dma_pad_mask & ~blk_rq_bytes(rq)) + 1;
...@@ -486,12 +476,19 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq, ...@@ -486,12 +476,19 @@ int blk_rq_map_sg(struct request_queue *q, struct request *rq,
* Something must have been wrong if the figured number of * Something must have been wrong if the figured number of
* segment is bigger than number of req's physical segments * segment is bigger than number of req's physical segments
*/ */
WARN_ON(nsegs > rq->nr_phys_segments); WARN_ON(nsegs > blk_rq_nr_phys_segments(rq));
return nsegs; return nsegs;
} }
EXPORT_SYMBOL(blk_rq_map_sg); EXPORT_SYMBOL(blk_rq_map_sg);
static void req_set_nomerge(struct request_queue *q, struct request *req)
{
req->cmd_flags |= REQ_NOMERGE;
if (req == q->last_merge)
q->last_merge = NULL;
}
static inline int ll_new_hw_segment(struct request_queue *q, static inline int ll_new_hw_segment(struct request_queue *q,
struct request *req, struct request *req,
struct bio *bio) struct bio *bio)
...@@ -512,9 +509,7 @@ static inline int ll_new_hw_segment(struct request_queue *q, ...@@ -512,9 +509,7 @@ static inline int ll_new_hw_segment(struct request_queue *q,
return 1; return 1;
no_merge: no_merge:
req->cmd_flags |= REQ_NOMERGE; req_set_nomerge(q, req);
if (req == q->last_merge)
q->last_merge = NULL;
return 0; return 0;
} }
...@@ -528,9 +523,7 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req, ...@@ -528,9 +523,7 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req,
return 0; return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) > if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, blk_rq_pos(req))) { blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
req->cmd_flags |= REQ_NOMERGE; req_set_nomerge(q, req);
if (req == q->last_merge)
q->last_merge = NULL;
return 0; return 0;
} }
if (!bio_flagged(req->biotail, BIO_SEG_VALID)) if (!bio_flagged(req->biotail, BIO_SEG_VALID))
...@@ -552,9 +545,7 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req, ...@@ -552,9 +545,7 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req,
return 0; return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) > if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) { blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
req->cmd_flags |= REQ_NOMERGE; req_set_nomerge(q, req);
if (req == q->last_merge)
q->last_merge = NULL;
return 0; return 0;
} }
if (!bio_flagged(bio, BIO_SEG_VALID)) if (!bio_flagged(bio, BIO_SEG_VALID))
...@@ -634,7 +625,7 @@ void blk_rq_set_mixed_merge(struct request *rq) ...@@ -634,7 +625,7 @@ void blk_rq_set_mixed_merge(struct request *rq)
unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK; unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
struct bio *bio; struct bio *bio;
if (rq->cmd_flags & REQ_MIXED_MERGE) if (rq->rq_flags & RQF_MIXED_MERGE)
return; return;
/* /*
...@@ -647,7 +638,7 @@ void blk_rq_set_mixed_merge(struct request *rq) ...@@ -647,7 +638,7 @@ void blk_rq_set_mixed_merge(struct request *rq)
(bio->bi_opf & REQ_FAILFAST_MASK) != ff); (bio->bi_opf & REQ_FAILFAST_MASK) != ff);
bio->bi_opf |= ff; bio->bi_opf |= ff;
} }
rq->cmd_flags |= REQ_MIXED_MERGE; rq->rq_flags |= RQF_MIXED_MERGE;
} }
static void blk_account_io_merge(struct request *req) static void blk_account_io_merge(struct request *req)
...@@ -709,7 +700,7 @@ static int attempt_merge(struct request_queue *q, struct request *req, ...@@ -709,7 +700,7 @@ static int attempt_merge(struct request_queue *q, struct request *req,
* makes sure that all involved bios have mixable attributes * makes sure that all involved bios have mixable attributes
* set properly. * set properly.
*/ */
if ((req->cmd_flags | next->cmd_flags) & REQ_MIXED_MERGE || if (((req->rq_flags | next->rq_flags) & RQF_MIXED_MERGE) ||
(req->cmd_flags & REQ_FAILFAST_MASK) != (req->cmd_flags & REQ_FAILFAST_MASK) !=
(next->cmd_flags & REQ_FAILFAST_MASK)) { (next->cmd_flags & REQ_FAILFAST_MASK)) {
blk_rq_set_mixed_merge(req); blk_rq_set_mixed_merge(req);
......
...@@ -259,6 +259,47 @@ static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page) ...@@ -259,6 +259,47 @@ static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page)
return ret; return ret;
} }
static void blk_mq_stat_clear(struct blk_mq_hw_ctx *hctx)
{
struct blk_mq_ctx *ctx;
unsigned int i;
hctx_for_each_ctx(hctx, ctx, i) {
blk_stat_init(&ctx->stat[BLK_STAT_READ]);
blk_stat_init(&ctx->stat[BLK_STAT_WRITE]);
}
}
static ssize_t blk_mq_hw_sysfs_stat_store(struct blk_mq_hw_ctx *hctx,
const char *page, size_t count)
{
blk_mq_stat_clear(hctx);
return count;
}
static ssize_t print_stat(char *page, struct blk_rq_stat *stat, const char *pre)
{
return sprintf(page, "%s samples=%llu, mean=%lld, min=%lld, max=%lld\n",
pre, (long long) stat->nr_samples,
(long long) stat->mean, (long long) stat->min,
(long long) stat->max);
}
static ssize_t blk_mq_hw_sysfs_stat_show(struct blk_mq_hw_ctx *hctx, char *page)
{
struct blk_rq_stat stat[2];
ssize_t ret;
blk_stat_init(&stat[BLK_STAT_READ]);
blk_stat_init(&stat[BLK_STAT_WRITE]);
blk_hctx_stat_get(hctx, stat);
ret = print_stat(page, &stat[BLK_STAT_READ], "read :");
ret += print_stat(page + ret, &stat[BLK_STAT_WRITE], "write:");
return ret;
}
static struct blk_mq_ctx_sysfs_entry blk_mq_sysfs_dispatched = { static struct blk_mq_ctx_sysfs_entry blk_mq_sysfs_dispatched = {
.attr = {.name = "dispatched", .mode = S_IRUGO }, .attr = {.name = "dispatched", .mode = S_IRUGO },
.show = blk_mq_sysfs_dispatched_show, .show = blk_mq_sysfs_dispatched_show,
...@@ -317,6 +358,11 @@ static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_poll = { ...@@ -317,6 +358,11 @@ static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_poll = {
.show = blk_mq_hw_sysfs_poll_show, .show = blk_mq_hw_sysfs_poll_show,
.store = blk_mq_hw_sysfs_poll_store, .store = blk_mq_hw_sysfs_poll_store,
}; };
static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_stat = {
.attr = {.name = "stats", .mode = S_IRUGO | S_IWUSR },
.show = blk_mq_hw_sysfs_stat_show,
.store = blk_mq_hw_sysfs_stat_store,
};
static struct attribute *default_hw_ctx_attrs[] = { static struct attribute *default_hw_ctx_attrs[] = {
&blk_mq_hw_sysfs_queued.attr, &blk_mq_hw_sysfs_queued.attr,
...@@ -327,6 +373,7 @@ static struct attribute *default_hw_ctx_attrs[] = { ...@@ -327,6 +373,7 @@ static struct attribute *default_hw_ctx_attrs[] = {
&blk_mq_hw_sysfs_cpus.attr, &blk_mq_hw_sysfs_cpus.attr,
&blk_mq_hw_sysfs_active.attr, &blk_mq_hw_sysfs_active.attr,
&blk_mq_hw_sysfs_poll.attr, &blk_mq_hw_sysfs_poll.attr,
&blk_mq_hw_sysfs_stat.attr,
NULL, NULL,
}; };
......
#ifndef INT_BLK_MQ_H #ifndef INT_BLK_MQ_H
#define INT_BLK_MQ_H #define INT_BLK_MQ_H
#include "blk-stat.h"
struct blk_mq_tag_set; struct blk_mq_tag_set;
struct blk_mq_ctx { struct blk_mq_ctx {
...@@ -18,6 +20,7 @@ struct blk_mq_ctx { ...@@ -18,6 +20,7 @@ struct blk_mq_ctx {
/* incremented at completion time */ /* incremented at completion time */
unsigned long ____cacheline_aligned_in_smp rq_completed[2]; unsigned long ____cacheline_aligned_in_smp rq_completed[2];
struct blk_rq_stat stat[2];
struct request_queue *queue; struct request_queue *queue;
struct kobject kobj; struct kobject kobj;
...@@ -28,6 +31,7 @@ void blk_mq_freeze_queue(struct request_queue *q); ...@@ -28,6 +31,7 @@ void blk_mq_freeze_queue(struct request_queue *q);
void blk_mq_free_queue(struct request_queue *q); void blk_mq_free_queue(struct request_queue *q);
int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr); int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
void blk_mq_wake_waiters(struct request_queue *q); void blk_mq_wake_waiters(struct request_queue *q);
bool blk_mq_dispatch_rq_list(struct blk_mq_hw_ctx *, struct list_head *);
/* /*
* CPU hotplug helpers * CPU hotplug helpers
...@@ -100,6 +104,11 @@ static inline void blk_mq_set_alloc_data(struct blk_mq_alloc_data *data, ...@@ -100,6 +104,11 @@ static inline void blk_mq_set_alloc_data(struct blk_mq_alloc_data *data,
data->hctx = hctx; data->hctx = hctx;
} }
static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
{
return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
}
static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx) static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
{ {
return hctx->nr_ctx && hctx->tags; return hctx->nr_ctx && hctx->tags;
......
@@ -13,6 +13,7 @@
#include <linux/gfp.h>
#include "blk.h"
#include "blk-wbt.h"
unsigned long blk_max_low_pfn;
EXPORT_SYMBOL(blk_max_low_pfn);
@@ -95,6 +96,7 @@ void blk_set_default_limits(struct queue_limits *lim)
lim->max_dev_sectors = 0;
lim->chunk_sectors = 0;
lim->max_write_same_sectors = 0;
lim->max_write_zeroes_sectors = 0;
lim->max_discard_sectors = 0;
lim->max_hw_discard_sectors = 0;
lim->discard_granularity = 0;
@@ -107,6 +109,7 @@ void blk_set_default_limits(struct queue_limits *lim)
lim->io_opt = 0;
lim->misaligned = 0;
lim->cluster = 1;
lim->zoned = BLK_ZONED_NONE;
}
EXPORT_SYMBOL(blk_set_default_limits);
@@ -130,6 +133,7 @@ void blk_set_stacking_limits(struct queue_limits *lim)
lim->max_sectors = UINT_MAX;
lim->max_dev_sectors = UINT_MAX;
lim->max_write_same_sectors = UINT_MAX;
lim->max_write_zeroes_sectors = UINT_MAX;
}
EXPORT_SYMBOL(blk_set_stacking_limits);
@@ -298,6 +302,19 @@ void blk_queue_max_write_same_sectors(struct request_queue *q,
}
EXPORT_SYMBOL(blk_queue_max_write_same_sectors);
/**
* blk_queue_max_write_zeroes_sectors - set max sectors for a single
* write zeroes
* @q: the request queue for the device
* @max_write_zeroes_sectors: maximum number of sectors to write per command
**/
void blk_queue_max_write_zeroes_sectors(struct request_queue *q,
unsigned int max_write_zeroes_sectors)
{
q->limits.max_write_zeroes_sectors = max_write_zeroes_sectors;
}
EXPORT_SYMBOL(blk_queue_max_write_zeroes_sectors);
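As a reviewer's aside: a driver that can zero a whole range with one command would advertise the limit through the new helper above while configuring its queue. The snippet below is only an illustrative sketch (the function name and the 1 MiB figure are made up), not code from this series.

/* Hypothetical driver setup: advertise 1 MiB (2048 x 512 B sectors)
 * per WRITE ZEROES command so the block layer can split larger
 * requests accordingly. */
static void example_setup_write_zeroes(struct request_queue *q)
{
	blk_queue_max_write_zeroes_sectors(q, 2048);
}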
/**
 * blk_queue_max_segments - set max hw segments for a request for this queue
 * @q: the request queue for the device
@@ -526,6 +543,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
t->max_dev_sectors = min_not_zero(t->max_dev_sectors, b->max_dev_sectors);
t->max_write_same_sectors = min(t->max_write_same_sectors,
b->max_write_same_sectors);
t->max_write_zeroes_sectors = min(t->max_write_zeroes_sectors,
b->max_write_zeroes_sectors);
t->bounce_pfn = min_not_zero(t->bounce_pfn, b->bounce_pfn);
t->seg_boundary_mask = min_not_zero(t->seg_boundary_mask,
@@ -631,6 +650,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
t->discard_granularity;
}
if (b->chunk_sectors)
t->chunk_sectors = min_not_zero(t->chunk_sectors,
b->chunk_sectors);
return ret;
}
EXPORT_SYMBOL(blk_stack_limits);
@@ -832,6 +855,19 @@ void blk_queue_flush_queueable(struct request_queue *q, bool queueable)
}
EXPORT_SYMBOL_GPL(blk_queue_flush_queueable);
/**
* blk_set_queue_depth - tell the block layer about the device queue depth
* @q: the request queue for the device
* @depth: queue depth
*
*/
void blk_set_queue_depth(struct request_queue *q, unsigned int depth)
{
q->queue_depth = depth;
wbt_set_queue_depth(q->rq_wb, depth);
}
EXPORT_SYMBOL(blk_set_queue_depth);
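A short, hypothetical usage sketch for the helper above (the depth of 64 is an arbitrary example): a driver that knows how many commands its device can queue reports that number here, so that wbt can factor the device queue depth into its scaling decisions.

/* Illustrative only: report a device queue depth of 64 once the
 * request queue exists, e.g. from a driver's probe path. */
static void example_report_depth(struct request_queue *q)
{
	blk_set_queue_depth(q, 64);
}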
/**
 * blk_queue_write_cache - configure queue's write cache
 * @q: the request queue for the device
@@ -852,6 +888,8 @@ void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
else
queue_flag_clear(QUEUE_FLAG_FUA, q);
spin_unlock_irq(q->queue_lock);
wbt_set_write_cache(q->rq_wb, test_bit(QUEUE_FLAG_WC, &q->queue_flags));
}
EXPORT_SYMBOL_GPL(blk_queue_write_cache);
...
/*
* Block stat tracking code
*
* Copyright (C) 2016 Jens Axboe
*/
#include <linux/kernel.h>
#include <linux/blk-mq.h>
#include "blk-stat.h"
#include "blk-mq.h"
static void blk_stat_flush_batch(struct blk_rq_stat *stat)
{
const s32 nr_batch = READ_ONCE(stat->nr_batch);
const s32 nr_samples = READ_ONCE(stat->nr_samples);
if (!nr_batch)
return;
if (!nr_samples)
stat->mean = div64_s64(stat->batch, nr_batch);
else {
stat->mean = div64_s64((stat->mean * nr_samples) +
stat->batch,
nr_batch + nr_samples);
}
stat->nr_samples += nr_batch;
stat->nr_batch = stat->batch = 0;
}
static void blk_stat_sum(struct blk_rq_stat *dst, struct blk_rq_stat *src)
{
if (!src->nr_samples)
return;
blk_stat_flush_batch(src);
dst->min = min(dst->min, src->min);
dst->max = max(dst->max, src->max);
if (!dst->nr_samples)
dst->mean = src->mean;
else {
dst->mean = div64_s64((src->mean * src->nr_samples) +
(dst->mean * dst->nr_samples),
dst->nr_samples + src->nr_samples);
}
dst->nr_samples += src->nr_samples;
}
static void blk_mq_stat_get(struct request_queue *q, struct blk_rq_stat *dst)
{
struct blk_mq_hw_ctx *hctx;
struct blk_mq_ctx *ctx;
uint64_t latest = 0;
int i, j, nr;
blk_stat_init(&dst[BLK_STAT_READ]);
blk_stat_init(&dst[BLK_STAT_WRITE]);
nr = 0;
do {
uint64_t newest = 0;
queue_for_each_hw_ctx(q, hctx, i) {
hctx_for_each_ctx(hctx, ctx, j) {
blk_stat_flush_batch(&ctx->stat[BLK_STAT_READ]);
blk_stat_flush_batch(&ctx->stat[BLK_STAT_WRITE]);
if (!ctx->stat[BLK_STAT_READ].nr_samples &&
!ctx->stat[BLK_STAT_WRITE].nr_samples)
continue;
if (ctx->stat[BLK_STAT_READ].time > newest)
newest = ctx->stat[BLK_STAT_READ].time;
if (ctx->stat[BLK_STAT_WRITE].time > newest)
newest = ctx->stat[BLK_STAT_WRITE].time;
}
}
/*
* No samples
*/
if (!newest)
break;
if (newest > latest)
latest = newest;
queue_for_each_hw_ctx(q, hctx, i) {
hctx_for_each_ctx(hctx, ctx, j) {
if (ctx->stat[BLK_STAT_READ].time == newest) {
blk_stat_sum(&dst[BLK_STAT_READ],
&ctx->stat[BLK_STAT_READ]);
nr++;
}
if (ctx->stat[BLK_STAT_WRITE].time == newest) {
blk_stat_sum(&dst[BLK_STAT_WRITE],
&ctx->stat[BLK_STAT_WRITE]);
nr++;
}
}
}
/*
* If we race on finding an entry, just loop back again.
* Should be very rare.
*/
} while (!nr);
dst[BLK_STAT_READ].time = dst[BLK_STAT_WRITE].time = latest;
}
void blk_queue_stat_get(struct request_queue *q, struct blk_rq_stat *dst)
{
if (q->mq_ops)
blk_mq_stat_get(q, dst);
else {
blk_stat_flush_batch(&q->rq_stats[BLK_STAT_READ]);
blk_stat_flush_batch(&q->rq_stats[BLK_STAT_WRITE]);
memcpy(&dst[BLK_STAT_READ], &q->rq_stats[BLK_STAT_READ],
sizeof(struct blk_rq_stat));
memcpy(&dst[BLK_STAT_WRITE], &q->rq_stats[BLK_STAT_WRITE],
sizeof(struct blk_rq_stat));
}
}
void blk_hctx_stat_get(struct blk_mq_hw_ctx *hctx, struct blk_rq_stat *dst)
{
struct blk_mq_ctx *ctx;
unsigned int i, nr;
nr = 0;
do {
uint64_t newest = 0;
hctx_for_each_ctx(hctx, ctx, i) {
blk_stat_flush_batch(&ctx->stat[BLK_STAT_READ]);
blk_stat_flush_batch(&ctx->stat[BLK_STAT_WRITE]);
if (!ctx->stat[BLK_STAT_READ].nr_samples &&
!ctx->stat[BLK_STAT_WRITE].nr_samples)
continue;
if (ctx->stat[BLK_STAT_READ].time > newest)
newest = ctx->stat[BLK_STAT_READ].time;
if (ctx->stat[BLK_STAT_WRITE].time > newest)
newest = ctx->stat[BLK_STAT_WRITE].time;
}
if (!newest)
break;
hctx_for_each_ctx(hctx, ctx, i) {
if (ctx->stat[BLK_STAT_READ].time == newest) {
blk_stat_sum(&dst[BLK_STAT_READ],
&ctx->stat[BLK_STAT_READ]);
nr++;
}
if (ctx->stat[BLK_STAT_WRITE].time == newest) {
blk_stat_sum(&dst[BLK_STAT_WRITE],
&ctx->stat[BLK_STAT_WRITE]);
nr++;
}
}
/*
* If we race on finding an entry, just loop back again.
* Should be very rare, as the window is only updated
* occasionally
*/
} while (!nr);
}
static void __blk_stat_init(struct blk_rq_stat *stat, s64 time_now)
{
stat->min = -1ULL;
stat->max = stat->nr_samples = stat->mean = 0;
stat->batch = stat->nr_batch = 0;
stat->time = time_now & BLK_STAT_NSEC_MASK;
}
void blk_stat_init(struct blk_rq_stat *stat)
{
__blk_stat_init(stat, ktime_to_ns(ktime_get()));
}
static bool __blk_stat_is_current(struct blk_rq_stat *stat, s64 now)
{
return (now & BLK_STAT_NSEC_MASK) == (stat->time & BLK_STAT_NSEC_MASK);
}
bool blk_stat_is_current(struct blk_rq_stat *stat)
{
return __blk_stat_is_current(stat, ktime_to_ns(ktime_get()));
}
void blk_stat_add(struct blk_rq_stat *stat, struct request *rq)
{
s64 now, value;
now = __blk_stat_time(ktime_to_ns(ktime_get()));
if (now < blk_stat_time(&rq->issue_stat))
return;
if (!__blk_stat_is_current(stat, now))
__blk_stat_init(stat, now);
value = now - blk_stat_time(&rq->issue_stat);
if (value > stat->max)
stat->max = value;
if (value < stat->min)
stat->min = value;
if (stat->batch + value < stat->batch ||
stat->nr_batch + 1 == BLK_RQ_STAT_BATCH)
blk_stat_flush_batch(stat);
stat->batch += value;
stat->nr_batch++;
}
void blk_stat_clear(struct request_queue *q)
{
if (q->mq_ops) {
struct blk_mq_hw_ctx *hctx;
struct blk_mq_ctx *ctx;
int i, j;
queue_for_each_hw_ctx(q, hctx, i) {
hctx_for_each_ctx(hctx, ctx, j) {
blk_stat_init(&ctx->stat[BLK_STAT_READ]);
blk_stat_init(&ctx->stat[BLK_STAT_WRITE]);
}
}
} else {
blk_stat_init(&q->rq_stats[BLK_STAT_READ]);
blk_stat_init(&q->rq_stats[BLK_STAT_WRITE]);
}
}
void blk_stat_set_issue_time(struct blk_issue_stat *stat)
{
stat->time = (stat->time & BLK_STAT_MASK) |
(ktime_to_ns(ktime_get()) & BLK_STAT_TIME_MASK);
}
/*
* Enable stat tracking, return whether it was enabled
*/
bool blk_stat_enable(struct request_queue *q)
{
if (!test_bit(QUEUE_FLAG_STATS, &q->queue_flags)) {
set_bit(QUEUE_FLAG_STATS, &q->queue_flags);
return false;
}
return true;
}
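For orientation, a minimal and purely hypothetical consumer of the stat code above: it enables tracking on a queue and later pulls the aggregated read/write statistics, which is essentially what the new "stats" sysfs attribute and the hybrid polling code do.

static void example_dump_queue_stats(struct request_queue *q)
{
	struct blk_rq_stat stat[2];

	blk_stat_enable(q);		/* make sure samples are collected */
	blk_queue_stat_get(q, stat);	/* aggregate across all sw queues */
	pr_info("read: %lld samples, mean %lld ns\n",
		(long long)stat[BLK_STAT_READ].nr_samples,
		(long long)stat[BLK_STAT_READ].mean);
}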
#ifndef BLK_STAT_H
#define BLK_STAT_H
/*
* ~0.13s window as a power-of-2 (2^27 nsecs)
*/
#define BLK_STAT_NSEC 134217728ULL
#define BLK_STAT_NSEC_MASK ~(BLK_STAT_NSEC - 1)
/*
* Upper 3 bits can be used elsewhere
*/
#define BLK_STAT_RES_BITS 3
#define BLK_STAT_SHIFT (64 - BLK_STAT_RES_BITS)
#define BLK_STAT_TIME_MASK ((1ULL << BLK_STAT_SHIFT) - 1)
#define BLK_STAT_MASK ~BLK_STAT_TIME_MASK
enum {
BLK_STAT_READ = 0,
BLK_STAT_WRITE,
};
void blk_stat_add(struct blk_rq_stat *, struct request *);
void blk_hctx_stat_get(struct blk_mq_hw_ctx *, struct blk_rq_stat *);
void blk_queue_stat_get(struct request_queue *, struct blk_rq_stat *);
void blk_stat_clear(struct request_queue *);
void blk_stat_init(struct blk_rq_stat *);
bool blk_stat_is_current(struct blk_rq_stat *);
void blk_stat_set_issue_time(struct blk_issue_stat *);
bool blk_stat_enable(struct request_queue *);
static inline u64 __blk_stat_time(u64 time)
{
return time & BLK_STAT_TIME_MASK;
}
static inline u64 blk_stat_time(struct blk_issue_stat *stat)
{
return __blk_stat_time(stat->time);
}
#endif
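To make the masking above concrete: the low BLK_STAT_SHIFT (61) bits of blk_issue_stat.time hold the issue time in nanoseconds, the top BLK_STAT_RES_BITS bits are left for other users (wbt stores its flags there, see blk-wbt.h below), and samples are grouped into ~134 ms buckets by masking off the low 27 bits. A small sketch, restating the arithmetic with only the macros defined above:

/* Helper sketches showing how the masks compose; these are not part
 * of the patch, just an illustration. */
static inline u64 example_bucket_start(u64 time_ns)
{
	return time_ns & BLK_STAT_NSEC_MASK;	/* start of the ~0.13s window */
}

static inline u64 example_pack_flags(u64 time_ns, u64 flags)
{
	/* low 61 bits: issue time, top 3 bits: caller-owned flags */
	return (time_ns & BLK_STAT_TIME_MASK) | (flags << BLK_STAT_SHIFT);
}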
@@ -13,6 +13,7 @@
#include "blk.h"
#include "blk-mq.h"
#include "blk-wbt.h"
struct queue_sysfs_entry {
struct attribute attr;
@@ -41,6 +42,19 @@ queue_var_store(unsigned long *var, const char *page, size_t count)
return count;
}
static ssize_t queue_var_store64(s64 *var, const char *page)
{
int err;
s64 v;
err = kstrtos64(page, 10, &v);
if (err < 0)
return err;
*var = v;
return 0;
}
static ssize_t queue_requests_show(struct request_queue *q, char *page)
{
return queue_var_show(q->nr_requests, (page));
@@ -130,6 +144,11 @@ static ssize_t queue_physical_block_size_show(struct request_queue *q, char *pag
return queue_var_show(queue_physical_block_size(q), page);
}
static ssize_t queue_chunk_sectors_show(struct request_queue *q, char *page)
{
return queue_var_show(q->limits.chunk_sectors, page);
}
static ssize_t queue_io_min_show(struct request_queue *q, char *page)
{
return queue_var_show(queue_io_min(q), page);
@@ -192,6 +211,11 @@ static ssize_t queue_write_same_max_show(struct request_queue *q, char *page)
(unsigned long long)q->limits.max_write_same_sectors << 9);
}
static ssize_t queue_write_zeroes_max_show(struct request_queue *q, char *page)
{
return sprintf(page, "%llu\n",
(unsigned long long)q->limits.max_write_zeroes_sectors << 9);
}
static ssize_t
queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
@@ -258,6 +282,18 @@ QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
#undef QUEUE_SYSFS_BIT_FNS
static ssize_t queue_zoned_show(struct request_queue *q, char *page)
{
switch (blk_queue_zoned_model(q)) {
case BLK_ZONED_HA:
return sprintf(page, "host-aware\n");
case BLK_ZONED_HM:
return sprintf(page, "host-managed\n");
default:
return sprintf(page, "none\n");
}
}
static ssize_t queue_nomerges_show(struct request_queue *q, char *page)
{
return queue_var_show((blk_queue_nomerges(q) << 1) |
@@ -320,6 +356,38 @@ queue_rq_affinity_store(struct request_queue *q, const char *page, size_t count)
return ret;
}
static ssize_t queue_poll_delay_show(struct request_queue *q, char *page)
{
int val;
if (q->poll_nsec == -1)
val = -1;
else
val = q->poll_nsec / 1000;
return sprintf(page, "%d\n", val);
}
static ssize_t queue_poll_delay_store(struct request_queue *q, const char *page,
size_t count)
{
int err, val;
if (!q->mq_ops || !q->mq_ops->poll)
return -EINVAL;
err = kstrtoint(page, 10, &val);
if (err < 0)
return err;
if (val == -1)
q->poll_nsec = -1;
else
q->poll_nsec = val * 1000;
return count;
}
static ssize_t queue_poll_show(struct request_queue *q, char *page)
{
return queue_var_show(test_bit(QUEUE_FLAG_POLL, &q->queue_flags), page);
@@ -348,6 +416,50 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page,
return ret;
}
static ssize_t queue_wb_lat_show(struct request_queue *q, char *page)
{
if (!q->rq_wb)
return -EINVAL;
return sprintf(page, "%llu\n", div_u64(q->rq_wb->min_lat_nsec, 1000));
}
static ssize_t queue_wb_lat_store(struct request_queue *q, const char *page,
size_t count)
{
struct rq_wb *rwb;
ssize_t ret;
s64 val;
ret = queue_var_store64(&val, page);
if (ret < 0)
return ret;
if (val < -1)
return -EINVAL;
rwb = q->rq_wb;
if (!rwb) {
ret = wbt_init(q);
if (ret)
return ret;
rwb = q->rq_wb;
if (!rwb)
return -EINVAL;
}
if (val == -1)
rwb->min_lat_nsec = wbt_default_latency_nsec(q);
else if (val >= 0)
rwb->min_lat_nsec = val * 1000ULL;
if (rwb->enable_state == WBT_STATE_ON_DEFAULT)
rwb->enable_state = WBT_STATE_ON_MANUAL;
wbt_update_limits(rwb);
return count;
}
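From userspace the attribute above appears as /sys/block/<disk>/queue/wbt_lat_usec; writing a latency target in microseconds switches wbt from its default tuning to manual mode, and writing -1 restores the default latency target. A tiny userspace illustration (the disk name "sda" and the 75000 us target are arbitrary examples):

#include <stdio.h>

/* Userspace sketch, not part of the series: set a 75 ms writeback
 * latency target on a hypothetical disk "sda". */
int main(void)
{
	FILE *f = fopen("/sys/block/sda/queue/wbt_lat_usec", "w");

	if (!f)
		return 1;
	fprintf(f, "75000\n");		/* value is in microseconds */
	return fclose(f) != 0;
}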
static ssize_t queue_wc_show(struct request_queue *q, char *page)
{
if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
@@ -385,6 +497,26 @@ static ssize_t queue_dax_show(struct request_queue *q, char *page)
return queue_var_show(blk_queue_dax(q), page);
}
static ssize_t print_stat(char *page, struct blk_rq_stat *stat, const char *pre)
{
return sprintf(page, "%s samples=%llu, mean=%lld, min=%lld, max=%lld\n",
pre, (long long) stat->nr_samples,
(long long) stat->mean, (long long) stat->min,
(long long) stat->max);
}
static ssize_t queue_stats_show(struct request_queue *q, char *page)
{
struct blk_rq_stat stat[2];
ssize_t ret;
blk_queue_stat_get(q, stat);
ret = print_stat(page, &stat[BLK_STAT_READ], "read :");
ret += print_stat(page + ret, &stat[BLK_STAT_WRITE], "write:");
return ret;
}
static struct queue_sysfs_entry queue_requests_entry = {
.attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
.show = queue_requests_show,
@@ -444,6 +576,11 @@ static struct queue_sysfs_entry queue_physical_block_size_entry = {
.show = queue_physical_block_size_show,
};
static struct queue_sysfs_entry queue_chunk_sectors_entry = {
.attr = {.name = "chunk_sectors", .mode = S_IRUGO },
.show = queue_chunk_sectors_show,
};
static struct queue_sysfs_entry queue_io_min_entry = {
.attr = {.name = "minimum_io_size", .mode = S_IRUGO },
.show = queue_io_min_show,
@@ -480,12 +617,22 @@ static struct queue_sysfs_entry queue_write_same_max_entry = {
.show = queue_write_same_max_show,
};
static struct queue_sysfs_entry queue_write_zeroes_max_entry = {
.attr = {.name = "write_zeroes_max_bytes", .mode = S_IRUGO },
.show = queue_write_zeroes_max_show,
};
static struct queue_sysfs_entry queue_nonrot_entry = {
.attr = {.name = "rotational", .mode = S_IRUGO | S_IWUSR },
.show = queue_show_nonrot,
.store = queue_store_nonrot,
};
static struct queue_sysfs_entry queue_zoned_entry = {
.attr = {.name = "zoned", .mode = S_IRUGO },
.show = queue_zoned_show,
};
static struct queue_sysfs_entry queue_nomerges_entry = {
.attr = {.name = "nomerges", .mode = S_IRUGO | S_IWUSR },
.show = queue_nomerges_show,
@@ -516,6 +663,12 @@ static struct queue_sysfs_entry queue_poll_entry = {
.store = queue_poll_store,
};
static struct queue_sysfs_entry queue_poll_delay_entry = {
.attr = {.name = "io_poll_delay", .mode = S_IRUGO | S_IWUSR },
.show = queue_poll_delay_show,
.store = queue_poll_delay_store,
};
static struct queue_sysfs_entry queue_wc_entry = {
.attr = {.name = "write_cache", .mode = S_IRUGO | S_IWUSR },
.show = queue_wc_show,
@@ -527,6 +680,17 @@ static struct queue_sysfs_entry queue_dax_entry = {
.show = queue_dax_show,
};
static struct queue_sysfs_entry queue_stats_entry = {
.attr = {.name = "stats", .mode = S_IRUGO },
.show = queue_stats_show,
};
static struct queue_sysfs_entry queue_wb_lat_entry = {
.attr = {.name = "wbt_lat_usec", .mode = S_IRUGO | S_IWUSR },
.show = queue_wb_lat_show,
.store = queue_wb_lat_store,
};
static struct attribute *default_attrs[] = {
&queue_requests_entry.attr,
&queue_ra_entry.attr,
@@ -539,6 +703,7 @@ static struct attribute *default_attrs[] = {
&queue_hw_sector_size_entry.attr,
&queue_logical_block_size_entry.attr,
&queue_physical_block_size_entry.attr,
&queue_chunk_sectors_entry.attr,
&queue_io_min_entry.attr,
&queue_io_opt_entry.attr,
&queue_discard_granularity_entry.attr,
@@ -546,7 +711,9 @@ static struct attribute *default_attrs[] = {
&queue_discard_max_hw_entry.attr,
&queue_discard_zeroes_data_entry.attr,
&queue_write_same_max_entry.attr,
&queue_write_zeroes_max_entry.attr,
&queue_nonrot_entry.attr,
&queue_zoned_entry.attr,
&queue_nomerges_entry.attr,
&queue_rq_affinity_entry.attr,
&queue_iostats_entry.attr,
@@ -554,6 +721,9 @@ static struct attribute *default_attrs[] = {
&queue_poll_entry.attr,
&queue_wc_entry.attr,
&queue_dax_entry.attr,
&queue_stats_entry.attr,
&queue_wb_lat_entry.attr,
&queue_poll_delay_entry.attr,
NULL,
};
@@ -628,6 +798,7 @@ static void blk_release_queue(struct kobject *kobj)
struct request_queue *q =
container_of(kobj, struct request_queue, kobj);
wbt_exit(q);
bdi_exit(&q->backing_dev_info);
blkcg_exit_queue(q);
@@ -668,6 +839,23 @@ struct kobj_type blk_queue_ktype = {
.release = blk_release_queue,
};
static void blk_wb_init(struct request_queue *q)
{
#ifndef CONFIG_BLK_WBT_MQ
if (q->mq_ops)
return;
#endif
#ifndef CONFIG_BLK_WBT_SQ
if (q->request_fn)
return;
#endif
/*
* If this fails, we don't get throttling
*/
wbt_init(q);
}
int blk_register_queue(struct gendisk *disk)
{
int ret;
@@ -707,6 +895,8 @@ int blk_register_queue(struct gendisk *disk)
if (q->mq_ops)
blk_mq_register_dev(dev, q);
blk_wb_init(q);
if (!q->request_fn)
return 0;
...
@@ -270,7 +270,7 @@ void blk_queue_end_tag(struct request_queue *q, struct request *rq)
BUG_ON(tag >= bqt->real_max_depth);
list_del_init(&rq->queuelist);
-rq->cmd_flags &= ~REQ_QUEUED;
+rq->rq_flags &= ~RQF_QUEUED;
rq->tag = -1;
if (unlikely(bqt->tag_index[tag] == NULL))
@@ -316,7 +316,7 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
unsigned max_depth;
int tag;
-if (unlikely((rq->cmd_flags & REQ_QUEUED))) {
+if (unlikely((rq->rq_flags & RQF_QUEUED))) {
printk(KERN_ERR
"%s: request %p for device [%s] already tagged %d",
__func__, rq,
@@ -371,7 +371,7 @@ int blk_queue_start_tag(struct request_queue *q, struct request *rq)
*/
bqt->next_tag = (tag + 1) % bqt->max_depth;
-rq->cmd_flags |= REQ_QUEUED;
+rq->rq_flags |= RQF_QUEUED;
rq->tag = tag;
bqt->tag_index[tag] = rq;
blk_start_request(rq);
...
@@ -818,13 +818,13 @@ static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)
tg->io_disp[rw]++;
/*
-* REQ_THROTTLED is used to prevent the same bio to be throttled
+* BIO_THROTTLED is used to prevent the same bio to be throttled
* more than once as a throttled bio will go through blk-throtl the
* second time when it eventually gets issued. Set it when a bio
* is being charged to a tg.
*/
-if (!(bio->bi_opf & REQ_THROTTLED))
+if (!bio_flagged(bio, BIO_THROTTLED))
-bio->bi_opf |= REQ_THROTTLED;
+bio_set_flag(bio, BIO_THROTTLED);
}
/**
@@ -1401,7 +1401,7 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
WARN_ON_ONCE(!rcu_read_lock_held());
/* see throtl_charge_bio() */
-if ((bio->bi_opf & REQ_THROTTLED) || !tg->has_rules[rw])
+if (bio_flagged(bio, BIO_THROTTLED) || !tg->has_rules[rw])
goto out;
spin_lock_irq(q->queue_lock);
@@ -1480,7 +1480,7 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
* being issued.
*/
if (!throttled)
-bio->bi_opf &= ~REQ_THROTTLED;
+bio_clear_flag(bio, BIO_THROTTLED);
return throttled;
}
...
#ifndef WB_THROTTLE_H
#define WB_THROTTLE_H
#include <linux/kernel.h>
#include <linux/atomic.h>
#include <linux/wait.h>
#include <linux/timer.h>
#include <linux/ktime.h>
#include "blk-stat.h"
enum wbt_flags {
WBT_TRACKED = 1, /* write, tracked for throttling */
WBT_READ = 2, /* read */
WBT_KSWAPD = 4, /* write, from kswapd */
WBT_NR_BITS = 3, /* number of bits */
};
enum {
WBT_NUM_RWQ = 2,
};
/*
* Enable states. Either off, or on by default (done at init time),
* or on through manual setup in sysfs.
*/
enum {
WBT_STATE_ON_DEFAULT = 1,
WBT_STATE_ON_MANUAL = 2,
};
static inline void wbt_clear_state(struct blk_issue_stat *stat)
{
stat->time &= BLK_STAT_TIME_MASK;
}
static inline enum wbt_flags wbt_stat_to_mask(struct blk_issue_stat *stat)
{
return (stat->time & BLK_STAT_MASK) >> BLK_STAT_SHIFT;
}
static inline void wbt_track(struct blk_issue_stat *stat, enum wbt_flags wb_acct)
{
stat->time |= ((u64) wb_acct) << BLK_STAT_SHIFT;
}
static inline bool wbt_is_tracked(struct blk_issue_stat *stat)
{
return (stat->time >> BLK_STAT_SHIFT) & WBT_TRACKED;
}
static inline bool wbt_is_read(struct blk_issue_stat *stat)
{
return (stat->time >> BLK_STAT_SHIFT) & WBT_READ;
}
struct rq_wait {
wait_queue_head_t wait;
atomic_t inflight;
};
struct rq_wb {
/*
* Settings that govern how we throttle
*/
unsigned int wb_background; /* background writeback */
unsigned int wb_normal; /* normal writeback */
unsigned int wb_max; /* max throughput writeback */
int scale_step;
bool scaled_max;
short enable_state; /* WBT_STATE_* */
/*
* Number of consecutive periods where we don't have enough
* information to make a firm scale up/down decision.
*/
unsigned int unknown_cnt;
u64 win_nsec; /* default window size */
u64 cur_win_nsec; /* current window size */
struct timer_list window_timer;
s64 sync_issue;
void *sync_cookie;
unsigned int wc;
unsigned int queue_depth;
unsigned long last_issue; /* last non-throttled issue */
unsigned long last_comp; /* last non-throttled comp */
unsigned long min_lat_nsec;
struct request_queue *queue;
struct rq_wait rq_wait[WBT_NUM_RWQ];
};
static inline unsigned int wbt_inflight(struct rq_wb *rwb)
{
unsigned int i, ret = 0;
for (i = 0; i < WBT_NUM_RWQ; i++)
ret += atomic_read(&rwb->rq_wait[i].inflight);
return ret;
}
#ifdef CONFIG_BLK_WBT
void __wbt_done(struct rq_wb *, enum wbt_flags);
void wbt_done(struct rq_wb *, struct blk_issue_stat *);
enum wbt_flags wbt_wait(struct rq_wb *, struct bio *, spinlock_t *);
int wbt_init(struct request_queue *);
void wbt_exit(struct request_queue *);
void wbt_update_limits(struct rq_wb *);
void wbt_requeue(struct rq_wb *, struct blk_issue_stat *);
void wbt_issue(struct rq_wb *, struct blk_issue_stat *);
void wbt_disable_default(struct request_queue *);
void wbt_set_queue_depth(struct rq_wb *, unsigned int);
void wbt_set_write_cache(struct rq_wb *, bool);
u64 wbt_default_latency_nsec(struct request_queue *);
#else
static inline void __wbt_done(struct rq_wb *rwb, enum wbt_flags flags)
{
}
static inline void wbt_done(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
}
static inline enum wbt_flags wbt_wait(struct rq_wb *rwb, struct bio *bio,
spinlock_t *lock)
{
return 0;
}
static inline int wbt_init(struct request_queue *q)
{
return -EINVAL;
}
static inline void wbt_exit(struct request_queue *q)
{
}
static inline void wbt_update_limits(struct rq_wb *rwb)
{
}
static inline void wbt_requeue(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
}
static inline void wbt_issue(struct rq_wb *rwb, struct blk_issue_stat *stat)
{
}
static inline void wbt_disable_default(struct request_queue *q)
{
}
static inline void wbt_set_queue_depth(struct rq_wb *rwb, unsigned int depth)
{
}
static inline void wbt_set_write_cache(struct rq_wb *rwb, bool wc)
{
}
static inline u64 wbt_default_latency_nsec(struct request_queue *q)
{
return 0;
}
#endif /* CONFIG_BLK_WBT */
#endif
/*
* Zoned block device handling
*
* Copyright (c) 2015, Hannes Reinecke
* Copyright (c) 2015, SUSE Linux GmbH
*
* Copyright (c) 2016, Damien Le Moal
* Copyright (c) 2016, Western Digital
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/rbtree.h>
#include <linux/blkdev.h>
static inline sector_t blk_zone_start(struct request_queue *q,
sector_t sector)
{
sector_t zone_mask = blk_queue_zone_size(q) - 1;
return sector & ~zone_mask;
}
/*
* Check that a zone report belongs to the partition.
* If yes, fix its start sector and write pointer, copy it in the
* zone information array and return true. Return false otherwise.
*/
static bool blkdev_report_zone(struct block_device *bdev,
struct blk_zone *rep,
struct blk_zone *zone)
{
sector_t offset = get_start_sect(bdev);
if (rep->start < offset)
return false;
rep->start -= offset;
if (rep->start + rep->len > bdev->bd_part->nr_sects)
return false;
if (rep->type == BLK_ZONE_TYPE_CONVENTIONAL)
rep->wp = rep->start + rep->len;
else
rep->wp -= offset;
memcpy(zone, rep, sizeof(struct blk_zone));
return true;
}
/**
* blkdev_report_zones - Get zones information
* @bdev: Target block device
* @sector: Sector from which to report zones
* @zones: Array of zone structures where to return the zones information
* @nr_zones: Number of zone structures in the zone array
* @gfp_mask: Memory allocation flags (for bio_alloc)
*
* Description:
* Get zone information starting from the zone containing @sector.
* The number of zone information reported may be less than the number
* requested by @nr_zones. The number of zones actually reported is
* returned in @nr_zones.
*/
int blkdev_report_zones(struct block_device *bdev,
sector_t sector,
struct blk_zone *zones,
unsigned int *nr_zones,
gfp_t gfp_mask)
{
struct request_queue *q = bdev_get_queue(bdev);
struct blk_zone_report_hdr *hdr;
unsigned int nrz = *nr_zones;
struct page *page;
unsigned int nr_rep;
size_t rep_bytes;
unsigned int nr_pages;
struct bio *bio;
struct bio_vec *bv;
unsigned int i, n, nz;
unsigned int ofst;
void *addr;
int ret;
if (!q)
return -ENXIO;
if (!blk_queue_is_zoned(q))
return -EOPNOTSUPP;
if (!nrz)
return 0;
if (sector > bdev->bd_part->nr_sects) {
*nr_zones = 0;
return 0;
}
/*
* The zone report has a header. So make room for it in the
* payload. Also make sure that the report fits in a single BIO
* that will not be split down the stack.
*/
rep_bytes = sizeof(struct blk_zone_report_hdr) +
sizeof(struct blk_zone) * nrz;
rep_bytes = (rep_bytes + PAGE_SIZE - 1) & PAGE_MASK;
if (rep_bytes > (queue_max_sectors(q) << 9))
rep_bytes = queue_max_sectors(q) << 9;
nr_pages = min_t(unsigned int, BIO_MAX_PAGES,
rep_bytes >> PAGE_SHIFT);
nr_pages = min_t(unsigned int, nr_pages,
queue_max_segments(q));
bio = bio_alloc(gfp_mask, nr_pages);
if (!bio)
return -ENOMEM;
bio->bi_bdev = bdev;
bio->bi_iter.bi_sector = blk_zone_start(q, sector);
bio_set_op_attrs(bio, REQ_OP_ZONE_REPORT, 0);
for (i = 0; i < nr_pages; i++) {
page = alloc_page(gfp_mask);
if (!page) {
ret = -ENOMEM;
goto out;
}
if (!bio_add_page(bio, page, PAGE_SIZE, 0)) {
__free_page(page);
break;
}
}
if (i == 0)
ret = -ENOMEM;
else
ret = submit_bio_wait(bio);
if (ret)
goto out;
/*
* Process the report result: skip the header and go through the
* reported zones to fix up the zone information for
* partitions. At the same time, return the zone information into
* the zone array.
*/
n = 0;
nz = 0;
nr_rep = 0;
bio_for_each_segment_all(bv, bio, i) {
if (!bv->bv_page)
break;
addr = kmap_atomic(bv->bv_page);
/* Get header in the first page */
ofst = 0;
if (!nr_rep) {
hdr = (struct blk_zone_report_hdr *) addr;
nr_rep = hdr->nr_zones;
ofst = sizeof(struct blk_zone_report_hdr);
}
/* Fixup and report zones */
while (ofst < bv->bv_len &&
n < nr_rep && nz < nrz) {
if (blkdev_report_zone(bdev, addr + ofst, &zones[nz]))
nz++;
ofst += sizeof(struct blk_zone);
n++;
}
kunmap_atomic(addr);
if (n >= nr_rep || nz >= nrz)
break;
}
*nr_zones = nz;
out:
bio_for_each_segment_all(bv, bio, i)
__free_page(bv->bv_page);
bio_put(bio);
return ret;
}
EXPORT_SYMBOL_GPL(blkdev_report_zones);
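A minimal in-kernel caller sketch for the helper above (hypothetical, error handling trimmed to the essentials): report up to eight zones starting at sector 0 and log their layout. A real caller would usually allocate the zone array rather than keep it on the stack.

static int example_dump_zones(struct block_device *bdev)
{
	struct blk_zone zones[8];
	unsigned int i, nr_zones = 8;
	int ret;

	ret = blkdev_report_zones(bdev, 0, zones, &nr_zones, GFP_KERNEL);
	if (ret)
		return ret;

	for (i = 0; i < nr_zones; i++)
		pr_info("zone %u: start %llu, len %llu, wp %llu\n", i,
			(unsigned long long)zones[i].start,
			(unsigned long long)zones[i].len,
			(unsigned long long)zones[i].wp);
	return 0;
}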
/**
* blkdev_reset_zones - Reset zones write pointer
* @bdev: Target block device
* @sector: Start sector of the first zone to reset
* @nr_sectors: Number of sectors, at least the length of one zone
* @gfp_mask: Memory allocation flags (for bio_alloc)
*
* Description:
* Reset the write pointer of the zones contained in the range
* @sector..@sector+@nr_sectors. Specifying the entire disk sector range
* is valid, but the specified range should not contain conventional zones.
*/
int blkdev_reset_zones(struct block_device *bdev,
sector_t sector, sector_t nr_sectors,
gfp_t gfp_mask)
{
struct request_queue *q = bdev_get_queue(bdev);
sector_t zone_sectors;
sector_t end_sector = sector + nr_sectors;
struct bio *bio;
int ret;
if (!q)
return -ENXIO;
if (!blk_queue_is_zoned(q))
return -EOPNOTSUPP;
if (end_sector > bdev->bd_part->nr_sects)
/* Out of range */
return -EINVAL;
/* Check alignment (handle eventual smaller last zone) */
zone_sectors = blk_queue_zone_size(q);
if (sector & (zone_sectors - 1))
return -EINVAL;
if ((nr_sectors & (zone_sectors - 1)) &&
end_sector != bdev->bd_part->nr_sects)
return -EINVAL;
while (sector < end_sector) {
bio = bio_alloc(gfp_mask, 0);
bio->bi_iter.bi_sector = sector;
bio->bi_bdev = bdev;
bio_set_op_attrs(bio, REQ_OP_ZONE_RESET, 0);
ret = submit_bio_wait(bio);
bio_put(bio);
if (ret)
return ret;
sector += zone_sectors;
/* This may take a while, so be nice to others */
cond_resched();
}
return 0;
}
EXPORT_SYMBOL_GPL(blkdev_reset_zones);
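And a matching sketch for the reset side (again hypothetical): reset just the zone containing a zone-aligned @sector, using the zone size exposed by the queue.

static int example_reset_one_zone(struct block_device *bdev, sector_t sector)
{
	struct request_queue *q = bdev_get_queue(bdev);

	/* sector must be aligned to the zone size, as checked above */
	return blkdev_reset_zones(bdev, sector, blk_queue_zone_size(q),
				  GFP_KERNEL);
}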
/**
* BLKREPORTZONE ioctl processing.
* Called from blkdev_ioctl.
*/
int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long arg)
{
void __user *argp = (void __user *)arg;
struct request_queue *q;
struct blk_zone_report rep;
struct blk_zone *zones;
int ret;
if (!argp)
return -EINVAL;
q = bdev_get_queue(bdev);
if (!q)
return -ENXIO;
if (!blk_queue_is_zoned(q))
return -ENOTTY;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (copy_from_user(&rep, argp, sizeof(struct blk_zone_report)))
return -EFAULT;
if (!rep.nr_zones)
return -EINVAL;
zones = kcalloc(rep.nr_zones, sizeof(struct blk_zone), GFP_KERNEL);
if (!zones)
return -ENOMEM;
ret = blkdev_report_zones(bdev, rep.sector,
zones, &rep.nr_zones,
GFP_KERNEL);
if (ret)
goto out;
if (copy_to_user(argp, &rep, sizeof(struct blk_zone_report))) {
ret = -EFAULT;
goto out;
}
if (rep.nr_zones) {
if (copy_to_user(argp + sizeof(struct blk_zone_report), zones,
sizeof(struct blk_zone) * rep.nr_zones))
ret = -EFAULT;
}
out:
kfree(zones);
return ret;
}
/**
* BLKRESETZONE ioctl processing.
* Called from blkdev_ioctl.
*/
int blkdev_reset_zones_ioctl(struct block_device *bdev, fmode_t mode,
unsigned int cmd, unsigned long arg)
{
void __user *argp = (void __user *)arg;
struct request_queue *q;
struct blk_zone_range zrange;
if (!argp)
return -EINVAL;
q = bdev_get_queue(bdev);
if (!q)
return -ENXIO;
if (!blk_queue_is_zoned(q))
return -ENOTTY;
if (!capable(CAP_SYS_ADMIN))
return -EACCES;
if (!(mode & FMODE_WRITE))
return -EBADF;
if (copy_from_user(&zrange, argp, sizeof(struct blk_zone_range)))
return -EFAULT;
return blkdev_reset_zones(bdev, zrange.sector, zrange.nr_sectors,
GFP_KERNEL);
}
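For completeness, the new ioctls are also reachable from userspace, assuming the uapi header added with these patches is installed as <linux/blkzoned.h>. The sketch below resets the first zone of a zoned disk; the device path and the 256 MiB zone length are made-up values, and real code would first query the zone size, e.g. via BLKREPORTZONE.

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

int main(void)
{
	/* assumed 256 MiB zones: 524288 sectors of 512 bytes */
	struct blk_zone_range zrange = { .sector = 0, .nr_sectors = 524288 };
	int fd = open("/dev/sdb", O_RDWR);
	int ret;

	if (fd < 0)
		return 1;
	ret = ioctl(fd, BLKRESETZONE, &zrange);
	close(fd);
	return ret ? 1 : 0;
}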
@@ -111,6 +111,7 @@ void blk_account_io_done(struct request *req);
enum rq_atomic_flags {
REQ_ATOM_COMPLETE = 0,
REQ_ATOM_STARTED,
REQ_ATOM_POLL_SLEPT,
};
/*
@@ -130,7 +131,7 @@ static inline void blk_clear_rq_complete(struct request *rq)
/*
* Internal elevator interface
*/
-#define ELV_ON_HASH(rq) ((rq)->cmd_flags & REQ_HASHED)
+#define ELV_ON_HASH(rq) ((rq)->rq_flags & RQF_HASHED)
void blk_insert_flush(struct request *rq);
@@ -247,7 +248,7 @@ extern int blk_update_nr_requests(struct request_queue *, unsigned int);
static inline int blk_do_io_stat(struct request *rq)
{
return rq->rq_disk &&
-(rq->cmd_flags & REQ_IO_STAT) &&
+(rq->rq_flags & RQF_IO_STAT) &&
(rq->cmd_type == REQ_TYPE_FS);
}
...
@@ -161,6 +161,8 @@ static int bsg_create_job(struct device *dev, struct request *req)
* Drivers/subsys should pass this to the queue init function.
*/
void bsg_request_fn(struct request_queue *q)
__releases(q->queue_lock)
__acquires(q->queue_lock)
{
struct device *dev = q->queuedata;
struct request *req;
...
@@ -176,7 +176,7 @@ static int blk_fill_sgv4_hdr_rq(struct request_queue *q, struct request *rq,
* Check if sg_io_v4 from user is allowed and valid
*/
static int
-bsg_validate_sgv4_hdr(struct request_queue *q, struct sg_io_v4 *hdr, int *rw)
+bsg_validate_sgv4_hdr(struct sg_io_v4 *hdr, int *rw)
{
int ret = 0;
@@ -226,7 +226,7 @@ bsg_map_hdr(struct bsg_device *bd, struct sg_io_v4 *hdr, fmode_t has_write_perm,
hdr->dout_xfer_len, (unsigned long long) hdr->din_xferp,
hdr->din_xfer_len);
-ret = bsg_validate_sgv4_hdr(q, hdr, &rw);
+ret = bsg_validate_sgv4_hdr(hdr, &rw);
if (ret)
return ERR_PTR(ret);
...
@@ -519,6 +519,10 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
BLKDEV_DISCARD_SECURE);
case BLKZEROOUT:
return blk_ioctl_zeroout(bdev, mode, arg);
case BLKREPORTZONE:
return blkdev_report_zones_ioctl(bdev, mode, cmd, arg);
case BLKRESETZONE:
return blkdev_reset_zones_ioctl(bdev, mode, cmd, arg);
case HDIO_GETGEO:
return blkdev_getgeo(bdev, argp);
case BLKRAGET:
...
@@ -384,9 +384,12 @@ config BLK_DEV_RAM_DAX
allocated from highmem (only a problem for highmem systems).
config CDROM_PKTCDVD
-tristate "Packet writing on CD/DVD media"
+tristate "Packet writing on CD/DVD media (DEPRECATED)"
depends on !UML
help
Note: This driver is deprecated and will be removed from the
kernel in the near future!
If you have a CDROM/DVD drive that supports packet writing, say
Y to include support. It should work with any MMC/Mt Fuji
compliant ATAPI or SCSI drive, which is just about any newer
...
@@ -148,7 +148,7 @@ static int _drbd_md_sync_page_io(struct drbd_device *device,
if ((op == REQ_OP_WRITE) && !test_bit(MD_NO_FUA, &device->flags))
op_flags |= REQ_FUA | REQ_PREFLUSH;
-op_flags |= REQ_SYNC | REQ_NOIDLE;
+op_flags |= REQ_SYNC;
bio = bio_alloc_drbd(GFP_NOIO);
bio->bi_bdev = bdev->md_bdev;
...
@@ -3806,14 +3806,10 @@ static int __floppy_read_block_0(struct block_device *bdev, int drive)
cbdata.drive = drive;
-bio_init(&bio);
+bio_init(&bio, &bio_vec, 1);
bio.bi_io_vec = &bio_vec;
bio_vec.bv_page = page;
bio_vec.bv_len = size;
bio_vec.bv_offset = 0;
bio.bi_vcnt = 1;
bio.bi_iter.bi_size = size;
bio.bi_bdev = bdev;
bio_add_page(&bio, page, size, 0);
bio.bi_iter.bi_sector = 0;
bio.bi_flags |= (1 << BIO_QUIET);
bio.bi_private = &cbdata;
...
@@ -1646,7 +1646,7 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
blk_mq_start_request(bd->rq);
if (lo->lo_state != Lo_bound)
-return -EIO;
+return BLK_MQ_RQ_QUEUE_ERROR;
switch (req_op(cmd->rq)) {
case REQ_OP_FLUSH:
...