- 24 Apr, 2018 1 commit
Alexander Duyck authored
Instead of implementing our own version of an SR-IOV configuration stub in the nvme driver, use the existing pci_sriov_configure_simple() function.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
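A minimal sketch of what adopting the generic stub looks like: a pci_driver points its .sriov_configure hook at pci_sriov_configure_simple(), which enables nr_virtfn VFs (or disables them all when nr_virtfn is 0) with no driver-specific bookkeeping. The surrounding fields are illustrative, not the actual nvme driver definition.

#include <linux/pci.h>

static struct pci_driver nvme_driver = {
	.name			= "nvme",
	/* .id_table, .probe, .remove, ... elided */
	.sriov_configure	= pci_sriov_configure_simple,
};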
-
- 12 Apr, 2018 3 commits
Keith Busch authored
The admin and first IO queues shared the first irq vector, which has an affinity mask including cpu0. If a system allows cpu0 to be offlined, the admin queue may not be usable if no other CPUs in the affinity mask are online. This is a problem since, unlike IO queues, there is only one admin queue and it always needs to be usable. To fix this, the patch allocates one pre_vector for the admin queue that is assigned all CPUs, so it will always be accessible. The IO queues are assigned the remaining managed vectors. In case a controller has only one interrupt vector available, the admin and IO queues will share the pre_vector with all CPUs assigned.

Cc: Jianchao Wang <jianchao.w.wang@oracle.com>
Cc: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
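A sketch of the reservation, assuming the usual pci_alloc_irq_vectors_affinity() pattern: vectors counted in pre_vectors are excluded from affinity spreading, so vector 0 keeps the default all-CPU mask for the admin queue while the remaining vectors are spread across CPUs for the IO queues. The helper name and error handling here are simplified.

#include <linux/pci.h>
#include <linux/interrupt.h>

static int nvme_setup_irqs(struct pci_dev *pdev, unsigned int nr_io_queues)
{
	struct irq_affinity affd = {
		.pre_vectors = 1,	/* vector 0: admin queue, not managed */
	};

	/* one vector per IO queue plus the reserved admin vector */
	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
}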
-
Keith Busch authored
All the queue memory is allocated up front. We don't take the node into consideration when creating queues anymore, so remove the unused parameter.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Keith Busch authored
A user reported a controller that always retains CSTS.RDY at 1, which fails controller disabling when resetting the controller. This also happens before the admin queue is allocated, and trying to disable an unallocated queue results in a NULL dereference.

Reported-by: Alex Gagniuc <Alex_Gagniuc@Dellteam.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 28 Mar, 2018 1 commit
Keith Busch authored
The PCI interrupt vectors intended to be associated with a queue may not start at 0; a driver may allocate pre_vectors for special use. This patch adds an offset parameter so blk-mq can find the intended affinity mask, and updates all drivers using this API accordingly.

Cc: Don Brace <don.brace@microsemi.com>
Cc: <qla2xxx-upstream@qlogic.com>
Cc: <linux-scsi@vger.kernel.org>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
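A sketch of how a driver that reserved a pre_vector would use the new parameter; the tag-set callback shape is illustrative, and stashing the pci_dev in driver_data is an assumption for the example.

#include <linux/blk-mq-pci.h>

static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
{
	struct pci_dev *pdev = set->driver_data;	/* assumed stashed at init */

	/* offset 1: skip the pre_vector reserved for the admin queue */
	return blk_mq_pci_map_queues(set, pdev, 1);
}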
-
- 26 Mar, 2018 3 commits
Jarosław Janik authored
Yet another "incompatible" Samsung NVMe SSD 960 EVO and Asus motherboard combination. The 960 EVO device disappears from the PCIe bus within a few minutes after boot-up when APST is in use and never comes back. Forcing NVME_QUIRK_NO_APST is the only way to make this drive work with this particular motherboard; NVME_QUIRK_NO_DEEPEST_PS doesn't work, and upgrading the motherboard's BIOS didn't help either. Since this is a desktop motherboard, the only drawback of not using APST is increased device temperature.

Signed-off-by: Jarosław Janik <jaroslaw.janik@gmail.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
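A sketch of the kind of vendor-combination check involved, assuming the DMI-based pattern the driver uses for such quirks; the device IDs and the board string shown are assumptions for illustration only.

#include <linux/pci.h>
#include <linux/dmi.h>

static unsigned long nvme_check_vendor_combination_bug(struct pci_dev *pdev)
{
	/* Samsung 960 EVO (144d:a804) on a board where APST makes the
	 * device drop off the bus; the exact board string is assumed. */
	if (pdev->vendor == 0x144d && pdev->device == 0xa804 &&
	    dmi_match(DMI_BOARD_NAME, "PRIME B350M-A"))
		return NVME_QUIRK_NO_APST;

	return 0;
}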
-
Keith Busch authored
nvme-fabrics exports the controller address to sysfs, and we'd like to have parity with this feature for PCIe. This patch provides the appropriate callback and returns the controller address as the PCI domain:bus:device.function.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
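A minimal sketch of such a callback, assuming the controller's underlying struct device is the PCI device, whose dev_name() is already the domain:bus:device.function string; the to_nvme_dev() accessor is the driver's private container helper.

static int nvme_pci_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
{
	struct nvme_dev *dev = to_nvme_dev(ctrl);	/* pci transport private */

	/* dev_name() of a PCI device is its domain:bus:device.function */
	return snprintf(buf, size, "%s", dev_name(dev->dev));
}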
-
Jianchao Wang authored
Quiesce IO queues prior to disabling device HMB accesses. A controller using HMB may rely on it to efficiently complete IO commands.

Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 01 Mar, 2018 2 commits
Ming Lei authored
84676c1f ("genirq/affinity: assign vectors to all possible CPUs") has switched to do irq vectors spread among all possible CPUs, so pass num_possible_cpus() as max vecotrs to be assigned. For example, in a 8 cores system, 0~3 online, 4~8 offline/not present, see 'lscpu': [ming@box]$lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 2 Socket(s): 2 NUMA node(s): 2 ... NUMA node0 CPU(s): 0-3 NUMA node1 CPU(s): ... 1) before this patch, follows the allocated vectors and their affinity: irq 47, cpu list 0,4 irq 48, cpu list 1,6 irq 49, cpu list 2,5 irq 50, cpu list 3,7 2) after this patch, follows the allocated vectors and their affinity: irq 43, cpu list 0 irq 44, cpu list 1 irq 45, cpu list 2 irq 46, cpu list 3 irq 47, cpu list 4 irq 48, cpu list 6 irq 49, cpu list 5 irq 50, cpu list 7 Cc: Keith Busch <keith.busch@intel.com> Cc: Sagi Grimberg <sagi@grimberg.me> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by:
Ming Lei <ming.lei@redhat.com> Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Keith Busch <keith.busch@intel.com>
-
Wen Xiong authored
Triggering PPC EEH detection and handling requires a memory-mapped read failure. The NVMe driver removed the periodic health check MMIO, so there's no early detection mechanism to trigger the recovery. Instead, the detection now happens when the nvme driver handles an IO timeout event. This takes the pci channel offline, so we do not want the driver to proceed with escalating its own recovery efforts that may conflict with the EEH handler. This patch ensures the driver will observe that the channel was set offline after a failed MMIO read, and resets the IO timer so the EEH handler has a chance to recover the device.

Signed-off-by: Wen Xiong <wenxiong@linux.vnet.ibm.com>
[updated change log]
Signed-off-by: Keith Busch <keith.busch@intel.com>
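A sketch of the timeout-path check this describes, as a fragment with the surrounding handler context elided; dev->dev is assumed to be the controller's underlying PCI struct device.

if (pci_channel_offline(to_pci_dev(dev->dev))) {
	/* A failed MMIO read froze the channel: give the EEH handler
	 * time to recover the device instead of escalating a reset. */
	return BLK_EH_RESET_TIMER;
}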
-
- 26 Feb, 2018 1 commit
Jianchao Wang authored
This patch fixes nvme queue cleanup if requesting an IRQ handler for the queue's vector fails. It does this by resetting the cq_vector to the uninitialized value of -1 so it is ignored for a controller reset.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
[changelog updates, removed misc whitespace changes]
Signed-off-by: Keith Busch <keith.busch@intel.com>
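Roughly, the error path described looks like the following fragment; the names follow the driver's conventions, and the queue-creation context around it is elided.

result = queue_request_irq(nvmeq);
if (result < 0) {
	/* mark the vector uninitialized so a later controller reset
	 * does not try to free an IRQ that was never requested */
	nvmeq->cq_vector = -1;
	return result;
}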
-
- 14 Feb, 2018 2 commits
Keith Busch authored
We need to halt the controller immediately if we haven't completed initialization, as indicated by the new "connecting" state.

Fixes: ad70062c ("nvme-pci: introduce RECONNECTING state to mark initializing procedure")
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Keith Busch authored
The controller memory buffer is remapped into a kernel address on each reset, but the driver was setting the submission queue base address only on the very first queue creation. The remapped address is likely to change after a reset, so accessing the old address will hit a kernel bug. This patch fixes that by setting the queue's CMB base address each time the queue is created.

Fixes: f63572df ("nvme: unmap CMB and remove sysfs file in reset path")
Reported-by: Christian Black <christian.d.black@intel.com>
Cc: Jon Derrick <jonathan.derrick@intel.com>
Cc: <stable@vger.kernel.org> # 4.9+
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
- 08 Feb, 2018 1 commit
Max Gurtovoy authored
In the pci transport, this state is used to mark the initialization process. This should also be used in the other transports.

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
-
- 26 Jan, 2018 1 commit
Jianchao Wang authored
After Sagi's commit (nvme-rdma: fix concurrent reset and reconnect), both nvme-fc/rdma have the following pattern: RESETTING - quiesce blk-mq queues, teardown and delete queues/connections, clear out outstanding IO requests; RECONNECTING - establish new queues/connections and other initialization. Introduce RECONNECTING to the nvme-pci transport to mark the same stage, so we get a coherent state definition among the nvme pci/rdma/fc transports.

Suggested-by: James Smart <james.smart@broadcom.com>
Reviewed-by: James Smart <james.smart@broadcom.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 25 Jan, 2018 1 commit
Keith Busch authored
The driver had been abusing the cq_vector state to know if new submissions were safe, but that was before we could quiesce blk-mq. If the controller happens to get an interrupt through while we're suspending those queues, 'no irq handler' warnings may occur. This patch disables the interrupts only after the queues are deleted.

Reported-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Tested-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 23 Jan, 2018 1 commit
Keith Busch authored
The queue count says the highest queue that's been allocated, so don't reallocate a queue lower than that.

Fixes: 147b27e4 ("nvme-pci: allocate device queues storage space at probe")
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 17 Jan, 2018 4 commits
Christoph Hellwig authored
Some iommu implementations can merge physically and/or virtually contiguous segments inside dma_map_sg. The NVMe SGL support does not take this into account and will warn because of falling off the end of a loop. Pass the number of mapped segments to nvme_pci_setup_sgls so that the SGL setup can take the number of mapped segments into account.

Reported-by: Fangjian (Turing) <f.fangjian@huawei.com>
Fixes: a7a7cbe3 ("nvme-pci: add SGL support")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
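The key point is that dma_map_sg() may return fewer entries than it was given when the iommu merges segments. A fragment sketching how that count is threaded through; the setup call and surrounding mapping/error handling are simplified.

int nr_mapped;

nr_mapped = dma_map_sg(dev->dev, iod->sg, iod->nents, dma_dir);
if (!nr_mapped)
	goto out;

/* build SGL descriptors over the merged (mapped) segment count,
 * not the original iod->nents */
ret = nvme_pci_setup_sgls(dev, req, &cmnd->rw, nr_mapped);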
-
Keith Busch authored
The driver needs to verify there is a payload with a command before seeing if it should use SGLs to map it.

Fixes: 955b1b5a ("nvme-pci: move use_sgl initialization to nvme_init_iod()")
Reported-by: Paul Menzel <pmenzel+linux-nvme@molgen.mpg.de>
Reviewed-by: Paul Menzel <pmenzel+linux-nvme@molgen.mpg.de>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Christoph Hellwig authored
Define the bit positions instead of macros using the magic values, and move the expanded helpers that calculate the size and size unit into the implementation C file.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
-
Christoph Hellwig authored
Refactor the call to nvme_map_cmb and change the conditions for probing for the CMB. First, remove the version check, as NVMe TPs always apply to earlier versions of the spec as well. Second, check the whole CMBSZ register for support of the CMB feature instead of just the size field inside it, to simplify the code a bit.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
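The simplified probe condition amounts to something like the following fragment; the register name is the one the kernel's NVMe headers define for CMBSZ, and the surrounding mapping code is elided.

dev->cmbsz = readl(dev->bar + NVME_REG_CMBSZ);
if (dev->cmbsz == 0)
	return;		/* CMBSZ reads all-zero: no CMB supported */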
-
- 15 Jan, 2018 3 commits
Minwoo Im authored
Fix comment typos in nvme_create_io_queues(): 'aount' to 'amount', 'an' to 'can'.

Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Sagi Grimberg authored
Setting 'nvmeq' in nvme_init_request() may cause a race, because .init_request is called while switching the io scheduler, which may happen while the NVMe device is being reset and its nvme queues are being freed and created. We don't have any synchronization between the two paths. This patch changes the nvmeq allocation to occur at probe time so there is no way we can dereference it at init_request.

[   93.268391] kernel BUG at drivers/nvme/host/pci.c:408!
[   93.274146] invalid opcode: 0000 [#1] SMP
[   93.278618] Modules linked in: nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache sunrpc ipmi_ssif vfat fat intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel iTCO_wdt intel_cstate ipmi_si iTCO_vendor_support intel_uncore mxm_wmi mei_me ipmi_devintf intel_rapl_perf pcspkr sg ipmi_msghandler lpc_ich dcdbas mei shpchp acpi_power_meter wmi dm_multipath ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ahci libahci nvme libata crc32c_intel nvme_core tg3 megaraid_sas ptp i2c_core pps_core dm_mirror dm_region_hash dm_log dm_mod
[   93.349071] CPU: 5 PID: 1842 Comm: sh Not tainted 4.15.0-rc2.ming+ #4
[   93.356256] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.5.5 08/16/2017
[   93.364801] task: 00000000fb8abf2a task.stack: 0000000028bd82d1
[   93.371408] RIP: 0010:nvme_init_request+0x36/0x40 [nvme]
[   93.377333] RSP: 0018:ffffc90002537ca8 EFLAGS: 00010246
[   93.383161] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000008
[   93.391122] RDX: 0000000000000000 RSI: ffff880276ae0000 RDI: ffff88047bae9008
[   93.399084] RBP: ffff88047bae9008 R08: ffff88047bae9008 R09: 0000000009dabc00
[   93.407045] R10: 0000000000000004 R11: 000000000000299c R12: ffff880186bc1f00
[   93.415007] R13: ffff880276ae0000 R14: 0000000000000000 R15: 0000000000000071
[   93.422969] FS:  00007f33cf288740(0000) GS:ffff88047ba80000(0000) knlGS:0000000000000000
[   93.431996] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   93.438407] CR2: 00007f33cf28e000 CR3: 000000047e5bb006 CR4: 00000000001606e0
[   93.446368] Call Trace:
[   93.449103]  blk_mq_alloc_rqs+0x231/0x2a0
[   93.453579]  blk_mq_sched_alloc_tags.isra.8+0x42/0x80
[   93.459214]  blk_mq_init_sched+0x7e/0x140
[   93.463687]  elevator_switch+0x5a/0x1f0
[   93.467966]  ? elevator_get.isra.17+0x52/0xc0
[   93.472826]  elv_iosched_store+0xde/0x150
[   93.477299]  queue_attr_store+0x4e/0x90
[   93.481580]  kernfs_fop_write+0xfa/0x180
[   93.485958]  __vfs_write+0x33/0x170
[   93.489851]  ? __inode_security_revalidate+0x4c/0x60
[   93.495390]  ? selinux_file_permission+0xda/0x130
[   93.500641]  ? _cond_resched+0x15/0x30
[   93.504815]  vfs_write+0xad/0x1a0
[   93.508512]  SyS_write+0x52/0xc0
[   93.512113]  do_syscall_64+0x61/0x1a0
[   93.516199]  entry_SYSCALL64_slow_path+0x25/0x25
[   93.521351] RIP: 0033:0x7f33ce96aab0
[   93.525337] RSP: 002b:00007ffe57570238 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[   93.533785] RAX: ffffffffffffffda RBX: 0000000000000006 RCX: 00007f33ce96aab0
[   93.541746] RDX: 0000000000000006 RSI: 00007f33cf28e000 RDI: 0000000000000001
[   93.549707] RBP: 00007f33cf28e000 R08: 000000000000000a R09: 00007f33cf288740
[   93.557669] R10: 00007f33cf288740 R11: 0000000000000246 R12: 00007f33cec42400
[   93.565630] R13: 0000000000000006 R14: 0000000000000001 R15: 0000000000000000
[   93.573592] Code: 4c 8d 40 08 4c 39 c7 74 16 48 8b 00 48 8b 04 08 48 85 c0 74 16 48 89 86 78 01 00 00 31 c0 c3 8d 4a 01 48 63 c9 48 c1 e1 03 eb de <0f> 0b 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 85 f6 53 48 89
[   93.594676] RIP: nvme_init_request+0x36/0x40 [nvme] RSP: ffffc90002537ca8
[   93.602273] ---[ end trace 810dde3993e5f14e ]---

Reported-by: Yi Zhang <yi.zhang@redhat.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Sagi Grimberg authored
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 08 Jan, 2018 3 commits
Jianchao Wang authored
When the io queue setup or tagset allocation fails, ctrl.tagset is NULL. But the scan work will still be queued and executed; a panic then comes up due to a NULL pointer dereference of ctrl.tagset. To fix this, add a new ctrl state, NVME_CTRL_ADMIN_ONLY, to indicate that only the admin queue is live. When io queue setup or tagset allocation fails, the ctrl enters this state and the scan work will not be started, but async event work and nvme dev ioctls will still be available. This is helpful for further investigation and recovery.

Suggested-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Sagi Grimberg authored
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Minwoo Im authored
The local variable 'size' will be set a bit later in a for-loop. Remove the explicit initialization at the beginning of this function.

Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 29 Dec, 2017 1 commit
Minwoo Im authored
A flag "use_sgl" of "struct nvme_iod" has been used in nvme_init_iod() without being set to any value. It seems like "use_sgl" has been set in either nvme_pci_setup_prps() or nvme_pci_setup_sgls() which occur later than nvme_init_iod(). Make "iod->use_sgl" being set in a proper place, nvme_init_iod(). Also move nvme_pci_use_sgls() up above nvme_init_iod() to make it possible to be called by nvme_init_iod(). Signed-off-by:
Minwoo Im <minwoo.im.dev@gmail.com> Reviewed-by:
Sagi Grimberg <sagi@grimberg.me> Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
- 28 Nov, 2017 1 commit
Minwoo Im authored
A NULL pointer dereference can occur in nvme_free_host_mem() when nvme_remove() tries to remove the pci device, especially after a failure of host memory allocation for HMB, because the following condition can hold: (host_mem_descs == NULL) && (nr_host_mem_descs != 0). This happens because nr_host_mem_descs is not cleared to 0 the way host_mem_descs is.

Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
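In other words, the free path has to reset both fields together so they stay consistent; a sketch of the shape of the fix, with the descriptor-array release details elided:

/* after releasing the descriptor array (allocation details elided) */
dev->host_mem_descs = NULL;
dev->nr_host_mem_descs = 0;	/* the reset that was missing */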
-
- 23 Nov, 2017 1 commit
Jeff Lien authored
And increase the existing delay to cover this device as well.

Cc: stable@vger.kernel.org
Signed-off-by: Jeff Lien <jeff.lien@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 20 Nov, 2017 2 commits
Minwoo Im authored
An out-of-bounds hmb descriptor index occurs under the following conditions:

preferred = 128MiB
chunk_size = 4MiB
hmmaxd = 1

Current code will not allow rmmod, which frees the hmb descriptors, to complete successfully in the above case: "descs[i]" is set in the for-loop without checking any condition related to "max_entries", even though only a single "descs" entry was allocated (max_entries = 1). Add a condition to the for-loop to check the descriptor index.

Fixes: 044a9df1 ("nvme-pci: implement the HMB entry number and size limitations")
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
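A sketch of the loop-condition fix being described; variable names are taken from the message and the chunk-allocation body is elided.

/* stop once every allocated descriptor slot is filled, even if the
 * preferred size has not been reached yet */
for (size = 0; size < preferred && i < max_entries; size += len) {
	/* allocate chunk, fill descs[i], i++ ... */
}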
-
Kai-Heng Feng authored
The NVMe device in question drops off the PCIe bus after system suspend. I've tried several approaches to work around this issue, but none of them works:
- NVME_QUIRK_DELAY_BEFORE_CHK_RDY
- NVME_QUIRK_NO_DEEPEST_PS
- Disable APST before controller shutdown
- Delay between controller shutdown and system suspend
- Explicitly set power state to 0 before controller shutdown
Fortunately it's a desktop, so disabling APST won't hurt the battery. Also, change the quirk function name to reflect that it's for vendor combination quirks.

BugLink: https://bugs.launchpad.net/bugs/1705748
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 11 Nov, 2017 3 commits
Ming Lei authored
The 'remove_work' may be scheduled to run after nvme_remove() returns, since we can't simply cancel it in nvme_remove() without risking deadlock. Once nvme_remove() returns, this module (nvme) can be unloaded. On the other hand, nvme_put_ctrl() calls ctrl->ops->free_ctrl, which may point to nvme_pci_free_ctrl() in the unloaded module. This patch avoids the issue by queuing 'remove_work' via 'nvme_wq' and flushing this workqueue in nvme_exit(), as suggested by Sagi.

Suggested-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
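A sketch of the queue-and-flush pattern described, assuming the driver-wide 'nvme_wq' workqueue exported by the nvme core:

/* schedule teardown on the module's own workqueue ... */
queue_work(nvme_wq, &dev->remove_work);

/* ... so module exit can drain it before the code goes away */
static void __exit nvme_exit(void)
{
	pci_unregister_driver(&nvme_driver);
	flush_workqueue(nvme_wq);
}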
-
Keith Busch authored
The driver can handle tracking only one AEN request, so this patch removes handling for multiple ones.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Keith Busch authored
All the transports were unnecessarily duplicating the AEN request accounting. This patch defines everything in one place.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Guan Junxiong <guanjunxiong@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 01 Nov, 2017 1 commit
Keith Busch authored
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 27 Oct, 2017 1 commit
Christoph Hellwig authored
Instead of allocating a separate struct device for the character device handle, embed it into struct nvme_ctrl and use it for the main controller refcounting. This removes double refcounting and gets us an automatic reference for the character device operations. We keep ctrl->device as a pointer for now to avoid changing printks all over, but in the future we could look into message printing helpers that take a controller structure, similar to what other subsystems do. Note that the delete_ctrl operation now always already has a reference when it is entered (either through sysfs due to this change, or because every open file on the /dev/nvme-fabrics node has a reference), so we don't need the unless_zero variant there.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
-
- 20 Oct, 2017 1 commit
Chaitanya Kulkarni authored
This adds SGL support to the NVMe PCIe driver, based on an earlier patch from Rajiv Shanmugam Madeswaran <smrajiv15 at gmail.com>. This patch refactors the original code and adds a new module parameter, sgl_threshold, to determine whether to use SGLs or PRPs for IOs. The usage of SGLs is controlled by the sgl_threshold module parameter, which allows conditionally using SGLs if the average request segment size (avg_seg_size) is greater than sgl_threshold. In the original patch, the decision to use SGLs depended only on the IO size; with the new approach we consider not only the IO size but also the number of physical segments present in the IO. We calculate avg_seg_size from the request payload bytes and the number of physical segments present in the request. For example:

1. blk_rq_nr_phys_segments = 2, blk_rq_payload_bytes = 8k, avg_seg_size = 4K: use sgl if avg_seg_size >= sgl_threshold.
2. blk_rq_nr_phys_segments = 2, blk_rq_payload_bytes = 64k, avg_seg_size = 32K: use sgl if avg_seg_size >= sgl_threshold.
3. blk_rq_nr_phys_segments = 16, blk_rq_payload_bytes = 64k, avg_seg_size = 4K: use sgl if avg_seg_size >= sgl_threshold.

Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
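A sketch of the heuristic in code; this is the decision rule described above, not necessarily the exact driver function (which would also check that the controller supports SGLs).

static bool nvme_pci_use_sgls(struct request *req)
{
	unsigned int nseg = blk_rq_nr_phys_segments(req);
	unsigned int avg_seg_size;

	if (nseg == 0)
		return false;
	avg_seg_size = blk_rq_payload_bytes(req) / nseg;

	return avg_seg_size >= sgl_threshold;	/* module parameter */
}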
-
- 19 Oct, 2017 1 commit
Minwoo Im authored
Fix comment typos in adapter_alloc_cq() and adapter_alloc_sq(): 'the the' duplications are replaced with 'that the'.

Signed-off-by: Minwoo Im <dn3108@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 04 Oct, 2017 1 commit
Christoph Hellwig authored
Currently, the NVMe PCI host driver programs CMB dma addresses as the I/O SQ addresses. This results in failures on systems where 1:1 outbound mapping is not used (for example Broadcom iProc SOCs), because the CMB BAR will be programmed with the PCI bus address but the NVMe PCI EP will try to access the CMB using the dma address. To have the CMB work on systems without 1:1 outbound mapping, program the PCI bus address for the I/O SQs instead of the dma address. This approach works on systems with or without 1:1 outbound mapping. Based on a report and previous patch from Abhishek Shah.

Fixes: 8ffaadf7 ("NVMe: Use CMB for the IO SQes if available")
Cc: stable@vger.kernel.org
Reported-by: Abhishek Shah <abhishek.shah@broadcom.com>
Tested-by: Abhishek Shah <abhishek.shah@broadcom.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
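The core of the fix is to record the BAR's bus-visible address rather than its CPU resource address; a fragment sketching that, with the CMBLOC offset handling and struct fields simplified:

int bar = NVME_CMB_BIR(dev->cmbloc);

/* the address the endpoint itself uses to reach the CMB */
dev->cmb_bus_addr = pci_bus_address(pdev, bar) + offset;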
-