1. 02 Jul, 2017 1 commit
    • nvme: Quirks for PM1725 controllers · d554b5e1
      Martin K. Petersen authored
      PM1725 controllers have a couple of quirks that need to be handled in
      the driver:
      
       - I/O queue depth must be limited to 64 entries on controllers that do
         not report MQES.
      
       - The host interface registers go offline briefly while resetting the
         chip. Thus a delay is needed before checking whether the controller
         is ready.
      
      Note that the admin queue depth is also limited to 64 on older versions
      of this board. Since our NVME_AQ_DEPTH is now 32, that is no longer an
      issue.
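      The MQES clamp can be sketched as follows; io_queue_depth() and
      PM1725_QUIRK_QDEPTH are hypothetical names for illustration, not the
      actual nvme driver code. CAP.MQES is the zero-based maximum-queue-entries
      field in bits 15:0 of the NVMe controller capabilities register.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical quirk flag; the real driver keeps its own quirk enum. */
#define PM1725_QUIRK_QDEPTH (1u << 0)

/* Derive the I/O queue depth from CAP.MQES (CAP[15:0], zero-based).
 * Controllers with the quirk that do not report MQES (field reads 0)
 * are clamped to 64 entries, as described above. */
static uint32_t io_queue_depth(uint64_t cap, unsigned int quirks)
{
    uint32_t mqes  = (uint32_t)(cap & 0xffff);
    uint32_t depth = mqes + 1;   /* spec: entries = MQES + 1 */

    if ((quirks & PM1725_QUIRK_QDEPTH) && mqes == 0)
        depth = 64;              /* quirk: MQES not reported */
    return depth;
}
```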
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      d554b5e1
  2. 30 Jun, 2017 10 commits
  3. 29 Jun, 2017 2 commits
    • nvme: Makefile: remove dead build rule · a2b93775
      Valentin Rothberg authored
      Remove dead build rule for drivers/nvme/host/scsi.c which has been
      removed by commit ("nvme: Remove SCSI translations").
      Signed-off-by: Valentin Rothberg <vrothberg@suse.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      a2b93775
    • blk-mq: map all HWQ also in hyperthreaded system · fe631457
      Max Gurtovoy authored
      This patch performs sequential mapping between CPUs and queues.
      When the system has more CPUs than HWQs, some CPUs are left over
      after the sequential pass. On a hyperthreaded system, map each
      unmapped CPU and its siblings to the same HWQ.
      This fixes a bug in which HWQs were left unmapped on a system with
      2 sockets, 18 cores per socket, and 2 threads per core (72 CPUs total)
      running NVMeoF (which opens up to a maximum of 64 HWQs).
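      The mapping can be sketched as follows; map_queues() and the sibling
      layout (sibling of cpu is cpu - nr_cores) are simplifying assumptions
      for illustration, not the actual blk-mq code.

```c
#include <assert.h>

/* Sequentially map CPUs to HWQs; leftover CPUs (when nr_cpus > nr_queues)
 * share the HWQ of their hyperthread sibling. Assumes the common SMT
 * enumeration in which the sibling of cpu is cpu - nr_cores. */
static void map_queues(unsigned int *map, unsigned int nr_cpus,
                       unsigned int nr_queues, unsigned int nr_cores)
{
    unsigned int cpu;

    /* First pass: one CPU per HWQ, in order, until the HWQs run out. */
    for (cpu = 0; cpu < nr_cpus && cpu < nr_queues; cpu++)
        map[cpu] = cpu;

    /* Second pass: each leftover CPU joins its sibling's HWQ, so no
     * HWQ is left unmapped and siblings land on the same queue. */
    for (; cpu < nr_cpus; cpu++)
        map[cpu] = map[cpu - nr_cores];
}
```

      On the 72-CPU, 64-HWQ configuration from the commit message (36 cores,
      2 threads each), CPUs 64-71 share queues 28-35 with their siblings and
      every one of the 64 HWQs gets at least one CPU.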
      
      Performance results running fio (72 jobs, iodepth 128)
      using null_blk (with/without the patch):
      
      bs      IOPS(read submit_queues=72)   IOPS(write submit_queues=72)   IOPS(read submit_queues=24)  IOPS(write submit_queues=24)
      -----  ----------------------------  ------------------------------ ---------------------------- -----------------------------
      512    4890.4K/4723.5K                 4524.7K/4324.2K                   4280.2K/4264.3K               3902.4K/3909.5K
      1k     4910.1K/4715.2K                 4535.8K/4309.6K                   4296.7K/4269.1K               3906.8K/3914.9K
      2k     4906.3K/4739.7K                 4526.7K/4330.6K                   4301.1K/4262.4K               3890.8K/3900.1K
      4k     4918.6K/4730.7K                 4556.1K/4343.6K                   4297.6K/4264.5K               3886.9K/3893.9K
      8k     4906.4K/4748.9K                 4550.9K/4346.7K                   4283.2K/4268.8K               3863.4K/3858.2K
      16k    4903.8K/4782.6K                 4501.5K/4233.9K                   4292.3K/4282.3K               3773.1K/3773.5K
      32k    4885.8K/4782.4K                 4365.9K/4184.2K                   4307.5K/4289.4K               3780.3K/3687.3K
      64k    4822.5K/4762.7K                 2752.8K/2675.1K                   4308.8K/4312.3K               2651.5K/2655.7K
      128k   2388.5K/2313.8K                 1391.9K/1375.7K                   2142.8K/2152.2K               1395.5K/1374.2K
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      fe631457
  4. 28 Jun, 2017 19 commits
  5. 27 Jun, 2017 8 commits
    • drbd: Drop unnecessary static · e9d5d4a0
      Julia Lawall authored
      Drop static on a local variable, when the variable is initialized before
      any use, on every possible execution path through the function.  The
      static has no benefit, and dropping it reduces the code size.
      
      The semantic patch that fixes this problem is as follows:
      (http://coccinelle.lip6.fr/)
      
      // <smpl>
      @bad exists@
      position p;
      identifier x;
      type T;
      @@
      
      static T x@p;
      ...
      x = <+...x...+>
      
      @@
      identifier x;
      expression e;
      type T;
      position p != bad.p;
      @@
      
      -static
       T x@p;
       ... when != x
           when strict
      ?x = e;
      // </smpl>
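      A hypothetical example of the pattern the patch rewrites (not taken
      from drbd): a local that is assigned on every path before any use
      gains nothing from being static, and the static copy costs object size.

```c
#include <assert.h>

/* Before the transformation the local was "static int err;". Since err
 * is written on every path before it is read, the static storage class
 * has no effect on behavior and only enlarges the object file. */
static int set_result(int ok)
{
    int err;                 /* was: static int err; */

    err = ok ? 0 : -1;       /* initialized before any use */
    return err;
}
```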
      
      The change in code size is indicated by the following output from the
      size command.
      
      before:
         text    data     bss     dec     hex filename
        67299    2291    1056   70646   113f6 drivers/block/drbd/drbd_nl.o
      
      after:
         text    data     bss     dec     hex filename
        67283    2291    1056   70630   113e6 drivers/block/drbd/drbd_nl.o
      Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
      Signed-off-by: Roland Kammerer <roland.kammerer@linbit.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e9d5d4a0
    • block, bfq: update wr_busy_queues if needed on a queue split · 13c931bd
      Paolo Valente authored
      This commit fixes a bug triggered by a non-trivial sequence of
      events. These events are briefly described in the next two
      paragraphs. The impatient, or those who are familiar with queue
      merging and splitting, can jump directly to the last paragraph.
      
      On each I/O-request arrival for a shared bfq_queue, i.e., for a
      bfq_queue that is the result of the merge of two or more bfq_queues,
      BFQ checks whether the shared bfq_queue has become seeky (i.e., if too
      many random I/O requests have arrived for the bfq_queue; if the device
      is non-rotational, then random requests must also be small for the
      bfq_queue to be tagged as seeky). If the shared bfq_queue is actually
      detected as seeky, then a split occurs: the bfq I/O context of the
      process that has issued the request is redirected from the shared
      bfq_queue to a new non-shared bfq_queue. As a degenerate case, if the
      shared bfq_queue actually happens to be shared only by one process
      (because of previous splits), then no new bfq_queue is created: the
      state of the shared bfq_queue is just changed from shared to
      non-shared.
      
      Regardless of whether a brand new non-shared bfq_queue is created, or
      the pre-existing shared bfq_queue is just turned into a non-shared
      bfq_queue, several parameters of the non-shared bfq_queue are set
      (restored) to the original values they had when the bfq_queue
      associated with the bfq I/O context of the process (that has just
      issued an I/O request) was merged with the shared bfq_queue. One of
      these parameters is the weight-raising state.
      
      If, on the split of a shared bfq_queue,
      1) a pre-existing shared bfq_queue is turned into a non-shared
      bfq_queue;
      2) the previously shared bfq_queue happens to be busy;
      3) the weight-raising state of the previously shared bfq_queue happens
      to change;
      the number of weight-raised busy queues changes. The field
      wr_busy_queues must then be updated accordingly, but such an update
      was missing. This commit adds the missing update.
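      The missing bookkeeping can be sketched as follows; the struct and
      helper names are hypothetical simplifications, not the actual BFQ code.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified state: only the counter that the commit
 * message says was left stale on a queue split. */
struct bfq_data {
    unsigned int wr_busy_queues;   /* busy weight-raised queues */
};

/* On a split that turns a shared queue non-shared: if the queue is busy
 * and its weight-raising state changes when the original parameters are
 * restored, the counter must follow the change. */
static void account_wr_on_split(struct bfq_data *bfqd, bool busy,
                                bool was_wr, bool now_wr)
{
    if (!busy || was_wr == now_wr)
        return;                      /* nothing to account for */
    if (now_wr)
        bfqd->wr_busy_queues++;      /* queue became weight-raised */
    else
        bfqd->wr_busy_queues--;      /* queue lost weight-raising */
}
```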
      Reported-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      13c931bd
    • mmc/block: remove a call to blk_queue_bounce_limit · 8298912b
      Christoph Hellwig authored
      BLK_BOUNCE_ANY is the default now, so the call is superfluous.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8298912b
    • dm: don't set bounce limit · 41341afa
      Christoph Hellwig authored
      Now that all queue allocators come without a bounce limit by default,
      dm doesn't have to override this anymore.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      41341afa
    • block: don't set bounce limit in blk_init_queue · 8fc45044
      Christoph Hellwig authored
      Instead, move it to the callers. Those that don't use bio_data() or
      page_address(), or that are specific to architectures without highmem
      support, are skipped.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8fc45044
    • block: don't set bounce limit in blk_init_allocated_queue · 0bf6595e
      Christoph Hellwig authored
      And just move it into scsi_transport_sas, which needs it because
      low-level drivers directly dereference bio_data, and into
      blk_init_queue_node, which will need a further push into the callers.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0bf6595e
    • blk-mq: don't bounce by default · 46685d1a
      Christoph Hellwig authored
      For historical reasons we default to bouncing highmem pages for all
      block queues.  But the blk-mq drivers are easy to audit to ensure that
      we don't need this - scsi and mtip32xx set explicit limits, and no
      other driver has any particular ones.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      46685d1a
    • block: don't bother with bounce limits for make_request drivers · 0b0bcacc
      Christoph Hellwig authored
      We only call blk_queue_bounce for request-based drivers, so stop messing
      with it for make_request based drivers.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0b0bcacc