- 07 Aug, 2012 1 commit
Matthew Wilcox authored
If the device is hot-unplugged while there are active commands, we should time out the I/Os so that upper layers don't just see the I/Os disappear. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
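A minimal sketch of the idea, with heavily assumed names (the cmdid bitmap, cancel_cmdid() and bio_completion() are illustrative, not the actual patch): walk the command IDs still outstanding on the queue and complete each with a synthetic "aborted" status so the block layer sees the I/Os finish.
    /* Illustrative sketch only: complete every outstanding command
     * with a synthetic status instead of letting it vanish.
     */
    static void nvme_cancel_ios(struct nvme_queue *nvmeq)
    {
            int cmdid;

            for_each_set_bit(cmdid, nvmeq->cmdid_data, nvmeq->q_depth) {
                    struct nvme_completion cqe = {
                            /* bit 0 of the status is the phase bit */
                            .status = cpu_to_le16(NVME_SC_ABORT_REQ << 1),
                    };
                    void *ctx = cancel_cmdid(nvmeq, cmdid);  /* assumed helper */

                    bio_completion(nvmeq->dev, ctx, &cqe);   /* assumed handler */
            }
    }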
-
- 03 Aug, 2012 1 commit
Matthew Wilcox authored
If the adapter fails initialisation, the memory allocated for the admin queue may not be freed. Split the memory freeing part of nvme_free_queue() into nvme_free_queue_mem() and call it in the case of initialisation failure. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com> Reported-by: Vishal Verma <vishal.l.verma@intel.com>
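The shape of the fix, as a hedged sketch (field names and the size macros are assumptions): the DMA memory release moves into its own helper so the initialisation-failure path can call it without the teardown steps that assume a fully set-up queue.
    static void nvme_free_queue_mem(struct nvme_queue *nvmeq)
    {
            /* Free the CQ and SQ DMA memory, then the queue itself. */
            dma_free_coherent(nvmeq->q_dmadev, CQ_SIZE(nvmeq->q_depth),
                              (void *)nvmeq->cqes, nvmeq->cq_dma_addr);
            dma_free_coherent(nvmeq->q_dmadev, SQ_SIZE(nvmeq->q_depth),
                              nvmeq->sq_cmds, nvmeq->sq_dma_addr);
            kfree(nvmeq);
    }

    static void nvme_free_queue(struct nvme_dev *dev, int qid)
    {
            struct nvme_queue *nvmeq = dev->queues[qid];

            /* ... interrupt teardown and queue deletion commands ... */
            nvme_free_queue_mem(nvmeq);
    }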
-
- 31 Jul, 2012 3 commits
Quoc-Son Anh authored
Signed-off-by: Quoc-Son Anh <quoc-sonx.anh@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
Commit 5c42ea16 used spaces instead of tabs. Also remove the unnecessary initialisation of the 'result' variable. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Dan Carpenter authored
We should return here and avoid a NULL dereference. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
- 27 Jul, 2012 2 commits
Keith Busch authored
Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Keith Busch authored
Set the depth for IO queues to the device's maximum supported queue entries if the requested depth exceeds the device's capabilities. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
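Illustratively (the CAP accessor macro is an assumption), CAP.MQES is a zero's-based maximum, so the clamp looks like:
    /* CAP.MQES is zero's based: a raw value of n means n + 1 entries. */
    int mqes = NVME_CAP_MQES(readq(&dev->bar->cap)) + 1;  /* assumed macro */
    int q_depth = min_t(int, requested_depth, mqes);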
-
- 26 Jul, 2012 4 commits
Keith Busch authored
Set the max hw sectors in a namespace's request queue if the nvme device has a max data transfer size. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
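A sketch of the computation, assuming the usual field names: MDTS is a power of two in units of the controller's minimum memory page size, while the block layer wants 512-byte sectors.
    /* MDTS == 0 means no transfer size limit. Otherwise the maximum
     * is (2^MDTS) minimum-sized pages; convert to 512-byte sectors.
     */
    if (ctrl->mdts) {
            int shift = NVME_CAP_MPSMIN(cap) + 12;  /* assumed macro */

            dev->max_hw_sectors = 1 << (ctrl->mdts + shift - 9);
            blk_queue_max_hw_sectors(ns->queue, dev->max_hw_sectors);
    }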
-
Keith Busch authored
The specification does not provide a use for command dword11 in the NVMe Get Features command, but does use the NSID for some features. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
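A hedged sketch of the resulting helper (exact signature assumed): the namespace ID goes into the command's nsid field, and dword11 is simply never set.
    static int nvme_get_features(struct nvme_dev *dev, unsigned fid,
                                 unsigned nsid, dma_addr_t dma_addr,
                                 u32 *result)
    {
            struct nvme_command c;

            memset(&c, 0, sizeof(c));
            c.features.opcode = nvme_admin_get_features;
            c.features.nsid = cpu_to_le32(nsid);   /* used by some features */
            c.features.prp1 = cpu_to_le64(dma_addr);
            c.features.fid = cpu_to_le32(fid);
            /* dword11 left zero: no defined use for Get Features */

            return nvme_submit_admin_cmd(dev, &c, result);
    }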
-
Keith Busch authored
The function nvme_user_admin_command does not require a namespace to proceed. Replace its namespace argument with the nvme_dev structure so that it can be called from contexts that do not have a namespace. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Keith Busch authored
register_blkdev returns 0 when given a valid major number. Reported-by: Ross Zwisler <ross.zwisler@intel.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
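The calling convention, sketched: with a non-zero major, register_blkdev() returns 0 on success; with major 0 it returns the dynamically allocated major; negative values are errors.
    result = register_blkdev(nvme_major, "nvme");
    if (result < 0)
            return result;          /* registration failed */
    else if (result > 0)
            nvme_major = result;    /* a major was dynamically allocated */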
-
- 25 Jul, 2012 1 commit
Keith Busch authored
Set the request queue's logical block size to the namespace's block size. Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
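In sketch form (treating ns->lba_shift, the log2 of the LBA data size, as an assumed field name):
    /* Expose the namespace's LBA size to the block layer. */
    blk_queue_logical_block_size(ns->queue, 1 << ns->lba_shift);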
-
- 11 Jan, 2012 1 commit
Matthew Wilcox authored
The number of submission & completion queues should be set by calling Set Features, not Get Features. Reported-by: Kwok Kong <Kwok.Kong@idt.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
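A sketch of the corrected call (the helper name is assumed): the requested counts are zero's based and packed into dword11, and the controller reports what it actually granted in the completion result.
    u32 result, q_count = (count - 1) | ((count - 1) << 16);
    int status;

    /* Number of Queues is written with Set Features, not read with
     * Get Features; NSQR and NCQR are zero's based.
     */
    status = nvme_set_features(dev, NVME_FEAT_NUM_QUEUES, q_count, 0,
                               &result);
    if (status)
            return status;
    nr_io_queues = min(result & 0xffff, result >> 16) + 1;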
-
- 10 Jan, 2012 10 commits
Matthew Wilcox authored
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
QUEUE_FLAG_* values are bit numbers (other than QUEUE_FLAG_DEFAULT), so they cannot be ORed together. Set the queue flags using queue_flag_set_unlocked(). Reported-by: Donald Wood <donald.e.wood@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
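Illustratively (which particular flags the driver sets here is my assumption):
    /* QUEUE_FLAG_* are bit numbers, so an expression like
     * (QUEUE_FLAG_NOMERGES | QUEUE_FLAG_NONROT) is a meaningless
     * value, not a mask. Set each bit individually instead.
     */
    queue_flag_set_unlocked(QUEUE_FLAG_NOMERGES, ns->queue);
    queue_flag_set_unlocked(QUEUE_FLAG_NONROT, ns->queue);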
-
Matthew Wilcox authored
By using the iod->nents field (the same way other I/O paths do), we can avoid recalculating the number of sg entries at unmap time, and make nvme_unmap_user_pages() easier to call. Also, use the 'write' parameter instead of assuming DMA_FROM_DEVICE. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
For user I/O and admin commands, we were forgetting to mark the end of the SG list. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
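The pattern, sketched: when fewer entries are filled than the table was sized for, the end must be marked at the last entry actually used so DMA mapping and iteration know where the list stops.
    sg_init_table(iod->sg, max_segments);
    /* ... fill 'nents' entries with sg_set_page() ... */
    sg_mark_end(&iod->sg[nents - 1]);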
-
Matthew Wilcox authored
We were always mapping as DMA_FROM_DEVICE then unmapping with DMA_TO_DEVICE which was clearly not correct. Follow the same pattern as nvme_submit_io() and key off the bottom bit of the opcode to determine whether this is a read or a write. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
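In sketch form, mirroring nvme_submit_io's convention:
    /* NVMe I/O opcodes encode direction in bit 0: writes are odd
     * (host to device), reads are even (device to host).
     */
    enum dma_data_direction dma_dir =
            (cmd->opcode & 1) ? DMA_TO_DEVICE : DMA_FROM_DEVICE;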
-
Matthew Wilcox authored
IO_TIMEOUT is a little too generic and might be used by other parts of the kernel in the future. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
The new nvme_iod structure merges the previously separate per-request tracking structures into one. This improves performance for mid-sized I/Os (in the 16k range) since we save a memory allocation. It is also a slightly simpler interface to use. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
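A hedged sketch of such a merged descriptor (the exact field set is an assumption): a single allocation carries the scatterlist, the PRP-list bookkeeping, and a private pointer for the originating request.
    struct nvme_iod {
            void *private;              /* e.g. the originating bio */
            int npages;                 /* PRP list pages allocated */
            int nents;                  /* scatterlist entries mapped */
            int length;                 /* total bytes described */
            dma_addr_t first_dma;       /* first PRP list page */
            struct scatterlist sg[0];   /* variable-length tail */
    };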
-
Matthew Wilcox authored
The queue is only needed for some rare occasions, and it's more consistent to pass the device around. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
Upcoming patches require calling get_nvmeq when we don't have a namespace. Some callers already have the device in a local variable anyway. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
Instead of encoding the handler type in the bottom two bits of the per-completion context pointer, store the handler function as well as the context pointer. This gives us more flexibility and the code is clearer. It comes at the cost of an extra 8k of memory per queue, but this feels like a reasonable price to pay. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
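Sketched (type and array names assumed), each per-command slot becomes a function-plus-context pair, and the completion path just calls through it:
    typedef void (*nvme_completion_fn)(struct nvme_dev *dev, void *ctx,
                                       struct nvme_completion *cqe);

    struct nvme_cmd_info {
            nvme_completion_fn fn;
            void *ctx;
    };

    /* On the completion path: no more decoding of low pointer bits. */
    u16 cmdid = le16_to_cpu(cqe->command_id);
    struct nvme_cmd_info *info = &nvmeq->cmd_info[cmdid];

    info->fn(nvmeq->dev, info->ctx, cqe);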
-
- 04 Nov, 2011 17 commits
Matthew Wilcox authored
The driver was still using an old definition of Identify Controller which only came to light once we started using the 'number of namespaces' field properly. Reported-by: Nisheeth Bhat <nisheeth.bhat@intel.com> Reported-by: Khosrow Panah <Khosrow.Panah@idt.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
The doorbell stride allows devices to spread out their doorbells instead of packing them tightly. This feature was added as part of ECN 003. This patch also enables support for more than 512 queues :-) Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
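A sketch of the arithmetic (accessor names assumed): CAP.DSTRD is the log2 of the doorbell spacing in 4-byte units, so doorbell n lives at byte offset 0x1000 + n * (4 << DSTRD).
    /* dbs points at the 32-bit doorbell array. With a stride of
     * (1 << DSTRD) u32 slots, queue qid's SQ tail doorbell is at
     * index qid * 2 * stride, and its CQ head doorbell is one
     * stride after that.
     */
    dev->db_stride = 1 << NVME_CAP_STRIDE(readq(&dev->bar->cap));
    nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];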
-
Matthew Wilcox authored
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
ECN 001 documented that namespace 0 is not valid. Sending an Identify with CNS of 0 and Namespace of 0 is an undefined command. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Nisheeth Bhat authored
The existing calculation underestimated the number of pages required as it did not take into account the pointer at the end of each page. The replacement calculation may overestimate the number of pages required if the last page in the PRP List is entirely full. By using ->npages as a counter as we fill in the pages, we ensure that we don't try to free a page that was never allocated. Signed-off-by: Nisheeth Bhat <nisheeth.bhat@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
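The replacement calculation, in sketch form: each PRP list page holds PAGE_SIZE / 8 eight-byte entries, but the last entry chains to the next page, hence the PAGE_SIZE - 8 divisor.
    static int nvme_npages(unsigned size)
    {
            /* Worst case: the buffer starts partway into its first
             * page, costing one extra PRP entry.
             */
            unsigned nprps = DIV_ROUND_UP(size + PAGE_SIZE, PAGE_SIZE);

            /* Only PAGE_SIZE - 8 bytes of each list page hold data
             * pointers; the final 8 bytes point to the next page.
             */
            return DIV_ROUND_UP(8 * nprps, PAGE_SIZE - 8);
    }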
-
Matthew Wilcox authored
Instead of open-coding calls to nvme_submit_admin_cmd, these small wrappers are simpler to use (the patch removes 14 lines from nvme_dev_add() for example). Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
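One of the wrappers, sketched (signature assumed):
    static int nvme_identify(struct nvme_dev *dev, unsigned nsid,
                             unsigned cns, dma_addr_t dma_addr)
    {
            struct nvme_command c;

            memset(&c, 0, sizeof(c));
            c.identify.opcode = nvme_admin_identify;
            c.identify.nsid = cpu_to_le32(nsid);
            c.identify.prp1 = cpu_to_le64(dma_addr);
            c.identify.cns = cpu_to_le32(cns);

            return nvme_submit_admin_cmd(dev, &c, NULL);
    }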
-
Matthew Wilcox authored
The driver was allocating 8k of memory, then freeing 4k of it. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Nisheeth Bhat authored
dma_unmap_sg() must be called with the same 'nents' passed to dma_map_sg(), not the number returned from dma_map_sg(). Signed-off-by: Nisheeth Bhat <nisheeth.bhat@intel.com> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
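The rule from the DMA API, in sketch form: unmap with the nents you passed in, not the (possibly smaller) count that dma_map_sg() returned.
    int mapped = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
    if (!mapped)
            return -ENOMEM;
    /* ... issue the command, wait for completion ... */
    dma_unmap_sg(dev, sg, nents, DMA_TO_DEVICE);   /* original nents */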
-
Matthew Wilcox authored
Our SG list was constructed to always fill the entire first page, even if that was more than the length of the I/O. This is probably harmless, but some IOMMUs might do something bad. Correcting the first call to sg_set_page() made it look a lot closer to the sg_set_page() in the loop, so fold the first call to sg_set_page() into the loop. Reported-by: Nisheeth Bhat <nisheeth.bhat@intel.com> Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
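Sketch of the folded loop: the first iteration handles the initial page offset, and every entry is clamped to the bytes actually remaining rather than a full page.
    for (i = 0; i < count; i++) {
            sg_set_page(&sg[i], pages[i],
                        min_t(int, length, PAGE_SIZE - offset), offset);
            length -= (PAGE_SIZE - offset);
            offset = 0;     /* only the first page is offset */
    }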
-
Matthew Wilcox authored
A missing 'break' in the switch statement meant that we'd fall through to the 'return -EINVAL' case.
-
Matthew Wilcox authored
Remove the special-purpose IDENTIFY, GET_RANGE_TYPE, DOWNLOAD_FIRMWARE and ACTIVATE_FIRMWARE commands. Replace them with a generic ADMIN_CMD ioctl that can submit any admin command. Add a new ID ioctl that returns the namespace ID of the queried device. It corresponds to the SCSI Idlun ioctl. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
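The dispatch ends up looking roughly like this (the admin-command handler's name is an assumption; the ioctl numbers come from the NVMe uapi header):
    switch (cmd) {
    case NVME_IOCTL_ID:
            return ns->ns_id;   /* analogous to the SCSI Idlun ioctl */
    case NVME_IOCTL_ADMIN_CMD:
            return nvme_user_admin_cmd(ns, (void __user *)arg);
    default:
            return -ENOTTY;
    }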
-
Matthew Wilcox authored
If the I/O was not completed by a single NVMe command, we add the bio to the congestion list and wake up the kthread to resubmit it. But the kthread calls remove_wait_queue() unconditionally, which will oops if it's not on the wait queue. So add the kthread to the wait queue before waking it up. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
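The ordering that makes the kthread's unconditional remove_wait_queue() safe, sketched with assumed field names:
    /* Put the kthread on the wait queue *before* waking it, so its
     * later remove_wait_queue() always has an entry to remove.
     */
    if (bio_list_empty(&nvmeq->sq_cong))
            add_wait_queue(&nvmeq->sq_full, &nvmeq->sq_cong_wait);
    bio_list_add(&nvmeq->sq_cong, bio);
    wake_up_process(nvme_thread);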
-
Matthew Wilcox authored
nvme_setup_io_queues() was assuming that a NULL return from nvme_create_queue() was an out-of-memory error. That's not necessarily true; the adapter might return -EIO, for example. Change the calling convention to return an ERR_PTR on failure instead of NULL. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
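The convention change, in sketch form (argument list assumed):
    /* In nvme_create_queue(): propagate the real errno. */
    if (result < 0)
            return ERR_PTR(result);         /* was: return NULL */

    /* In nvme_setup_io_queues(): */
    nvmeq = nvme_create_queue(dev, i, vector);
    if (IS_ERR(nvmeq))
            return PTR_ERR(nvmeq);  /* e.g. -EIO from the adapter,
                                       not just -ENOMEM */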
-
Matthew Wilcox authored
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
For the benefit of reviewers, add comments to a few functions describing their calling context. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
-
Matthew Wilcox authored
If any of the memory allocations in nvme_setup_prps fail, handle it by modifying the passed-in data length to reflect the number of bytes we are actually able to send. Also allow the caller to specify the GFP flags they need; for user-initiated commands, we can use GFP_KERNEL allocations. The various callers are updated to handle this possibility; the main I/O path is already prepared for this possibility (as it may happen due to nvme_map_bio being unable to map all the segments of the I/O). The other callers return -ENOMEM instead of doing partial I/Os. Reported-by: Andi Kleen <andi@firstfloor.org> Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
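A sketch of the adjusted interface (parameter order assumed): the length is passed by reference so it can be trimmed, and the gfp flags come from the caller.
    /* On allocation failure, *len is reduced to the number of bytes
     * the PRP lists built so far can describe. Callers on the I/O
     * path pass GFP_ATOMIC; user-initiated commands pass GFP_KERNEL
     * and treat a shortened length as -ENOMEM.
     */
    static struct nvme_prps *nvme_setup_prps(struct nvme_dev *dev,
                    struct nvme_common_command *cmd, struct scatterlist *sg,
                    int *len, gfp_t gfp);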
-
Matthew Wilcox authored
The current approach of using the namespace ID as the minor number doesn't work when there are multiple adapters in the machine. Rather than statically partitioning the number of namespaces between adapters, dynamically allocate minor numbers to namespaces as they are detected. Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
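One way to sketch the dynamic allocation (the bitmap, lock, and limit are my assumptions, not necessarily what the patch used):
    static DECLARE_BITMAP(nvme_minors, NVME_MINORS);  /* limit assumed */
    static DEFINE_SPINLOCK(nvme_minor_lock);

    static int nvme_alloc_minor(void)
    {
            int minor;

            spin_lock(&nvme_minor_lock);
            minor = find_first_zero_bit(nvme_minors, NVME_MINORS);
            if (minor < NVME_MINORS)
                    set_bit(minor, nvme_minors);
            spin_unlock(&nvme_minor_lock);

            return minor < NVME_MINORS ? minor : -ENOSPC;
    }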
-