- 26 Mar, 2018 25 commits
-
-
Nitzan Carmi authored
For consistency, any fabric-specific work (e.g. error recovery/reconnect) should be canceled in nvme_stop_ctrl, as is done for all other pending NVMe work (e.g. scan, keep-alive). The patch aims to simplify the logic of the code, as we now only rely on a vague demand from each fabric to flush its private workqueues at the beginning of the .delete_ctrl op. Signed-off-by: Nitzan Carmi <nitzanc@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
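For instance, on the RDMA fabric the work items in question are roughly the error-recovery and reconnect works; a minimal sketch of cancelling them (how this is hooked into nvme_stop_ctrl is simplified here, and the function name is illustrative):

    /* Sketch: fabric-private work that must be canceled when the ctrl stops. */
    static void nvme_rdma_cancel_fabric_work(struct nvme_rdma_ctrl *ctrl)
    {
            cancel_work_sync(&ctrl->err_work);                 /* error recovery */
            cancel_delayed_work_sync(&ctrl->reconnect_work);   /* reconnect */
    }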
-
Nitzan Carmi authored
While error recovery is ongoing, it is OK to move ctrl to DELETING state (from concurrent delete_work). Thus we don't need a warning for that case. Signed-off-by: Nitzan Carmi <nitzanc@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Keith Busch authored
If a task is holding a reference to a namespace on a removed controller, the head will not be released. If the same controller is added again later, its namespaces may not be successfully added. Instead, the user will see kernel message "Duplicate IDs for nsid <X>". This patch fixes that by skipping heads that don't have namespaces when considering if a new namespace is safe to add. Reported-by: Alex Gagniuc <Alex_Gagniuc@Dellteam.com> Cc: stable@vger.kernel.org Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Max Gurtovoy authored
The .remove_one function is called for any ib_device removal. If the removed device is not referenced by our driver, there is no need to flush the work queue. Reviewed-by: Israel Rukshin <israelr@mellanox.com> Signed-off-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Max Gurtovoy authored
The .remove_one function is called for any ib_device removal. If the removed device is not referenced by our driver, there is no need to flush the system work queue. Reviewed-by: Israel Rukshin <israelr@mellanox.com> Signed-off-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Israel Rukshin authored
We currently free nvmet rdma queues while rdma_cm events may still be handled. To avoid this, destroy the QP and the queue only after destroying the cm_id, which guarantees that all rdma_cm events are done. Signed-off-by: Israel Rukshin <israelr@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Israel Rukshin authored
Signed-off-by: Israel Rukshin <israelr@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
James Smart authored
When a bio completion calls back into the transport for a back-end io device, the request completion path can free the transport io job structure, allowing it to be reused for other operations. The transport has a defer_rcv queue which holds temporary cmd rcv ops while waiting for io job structures. When a job frees, if there's a cmd waiting, it is picked up and submitted for processing, which can call back out to the bio path if it's a read. Unfortunately, the context of the original bio done call is unknown, and it may be in a state (softirq) that is not compatible with submitting the new bio in the same calling sequence. This is especially true when using scsi back-end devices, as scsi is in softirq when it makes the done call. Correct this by scheduling the io to be started via workq rather than calling the start-new-io path inline to the original bio done path. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
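A minimal sketch of the deferral pattern described above, assuming an illustrative per-io structure with a work item (my_fcp_op and start_new_io are not the driver's real names):

    struct my_fcp_op {
            struct work_struct fcp_work;
            /* ... per-io state ... */
    };

    static void deferred_fcp_start(struct work_struct *work)
    {
            struct my_fcp_op *op = container_of(work, struct my_fcp_op, fcp_work);

            start_new_io(op);       /* illustrative: now runs in process context */
    }

    /* In the bio done path (possibly softirq), defer instead of starting inline: */
    INIT_WORK(&op->fcp_work, deferred_fcp_start);
    schedule_work(&op->fcp_work);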
-
James Smart authored
When reattaching to a removed remoteport that has not yet been fully deleted because it is waiting out reconnect timeouts, be sure to re-set the port's nport ID and role. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
James Smart authored
Another abort race: an io request is started, becomes active, and is attempted to be started with the lldd. At the same time the controller is stopped/torn down and an iterator is run to abort the ios. As the io is active, it is added to the outstanding aborted io count. However, on the original io request thread, the driver ends up rejecting the io due to the condition that induced the controller teardown. The driver reject path didn't check whether the io was in the outstanding io count, which left the count outstanding and stalled controller teardown. Correct this by, in the driver reject case, setting the state to inactive and checking whether the io was in the outstanding io count. Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
James Smart authored
In the current nvme_fc code, when an io times out, the transport aborts the io on the fc link and then calls the error recovery routine to reset the controller. It is during the controller reset that the transport waits for all ios to be aborted before sending a Disconnect LS to the target. However, the reset routine only waits for the ios it generated aborts for; any io that was aborted just prior to the reset isn't in its list to wait for. Thus the Disconnect was being sent before the aborts had completed. Correct this by removing the abort in the timeout handler; the reset will generate the abort. At that point the timeout handler can be simplified to request the reset (via the error handler) and restart the timeout timer. Also fixes a small typo in a comment in the reset handler. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
James Smart authored
If there are errors during initial controller create, the transport will tear down the partially initialized controller struct and free the ctrl memory. Trouble is, most of those errors can occur due to asynchronous events such as io timeouts and subsystem connectivity failures. Those failures invoke async workq items to reset the controller and attempt reconnect, and those items may be in progress as the main thread frees the ctrl memory, resulting in a NULL pointer oops. Prevent this by having the main ctrl failure thread change the state to DELETING, followed by synchronously cancelling any pending queued work item. The change of state will prevent the scheduling of resets or reconnect events. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Jarosław Janik authored
Yet another "incompatible" Samsung NVMe SSD 960 EVO and Asus motherboard combination. 960 EVO device disappears from PCIe bus within few minutes after boot-up when APST is in use and never gets back. Forcing NVME_QUIRK_NO_APST is the only way to make this drive work with this particular motherboard. NVME_QUIRK_NO_DEEPEST_PS doesn't work, upgrading motherboard's BIOS didn't help either. Since this is a desktop motherboard, the only drawback of not using APST is increased device temperature. Signed-off-by: Jarosław Janik <jaroslaw.janik@gmail.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Max Gurtovoy authored
nvme_delete_ctrl can be called from various contexts in parallel, causing duplicate informational prints even though a given context doesn't perform the actual removal. Instead, print the information only when the actual removal occurs. Signed-off-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Keith Busch authored
nvme-fabrics exports the controller address to sysfs, and we'd like to have parity with this feature for PCIe. This patch provides the appropriate callback and returns the controller address as the pci domain:bus:device.function. Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
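A sketch of what such a PCIe callback can look like (the callback name, the to_nvme_dev() conversion and the exact hookup are assumptions here, not necessarily the actual patch):

    static int nvme_pci_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
    {
            struct pci_dev *pdev = to_pci_dev(to_nvme_dev(ctrl)->dev);

            /* dev_name() of a PCI device is its domain:bus:device.function string */
            return snprintf(buf, size, "%s", dev_name(&pdev->dev));
    }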
-
Matias Bjørling authored
NVMe 1.2.1 extends the get log page interface to include 64 bit offset and increases the number of dwords to 32 bits. Implement for future use. Signed-off-by: Matias Bjørling <mb@lightnvm.io> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Jianchao Wang authored
namespaces_mutext is used to synchronize operations on the ctrl namespaces list. Most of the time it is a read operation. On the other hand, there are many interfaces in the nvme core that need this lock, such as nvme_wait_freeze, and even more interfaces will be added. If we use a mutex here, a circular dependency can be introduced easily. For example:

  context A                    context B
  nvme_xxx                     nvme_xxx
  hold namespaces_mutext       require namespaces_mutext
  sync context B

So it is better to change it from a mutex to an rwsem. Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
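A minimal sketch of the read-mostly pattern an rwsem enables (struct names and the list layout are illustrative, not the actual nvme code):

    struct my_ns {
            struct list_head list;
            /* ... */
    };

    struct my_ctrl {
            struct list_head namespaces;
            struct rw_semaphore namespaces_rwsem;    /* was a struct mutex */
    };

    static void for_each_ns(struct my_ctrl *ctrl)
    {
            struct my_ns *ns;

            down_read(&ctrl->namespaces_rwsem);      /* readers may run concurrently */
            list_for_each_entry(ns, &ctrl->namespaces, list)
                    do_something_with(ns);           /* illustrative helper */
            up_read(&ctrl->namespaces_rwsem);
    }

    /* Writers (namespace add/remove) use down_write()/up_write() instead. */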
-
Jianchao Wang authored
nvme_remove_namespaces and nvme_remove_invalid_namespaces reference the ctrl->namespaces list without holding namespaces_mutext. It is OK to invoke nvme_ns_remove there, but what if there are other users? To be safer, reference the ctrl->namespaces list under namespaces_mutext. Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
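One common way to keep such list access safe (a sketch, not necessarily the exact change) is to splice the entries onto a private list while holding the lock and then remove them outside of it:

    struct nvme_ns *ns, *next;
    LIST_HEAD(rm_list);

    mutex_lock(&ctrl->namespaces_mutex);             /* illustrative lock name */
    list_splice_init(&ctrl->namespaces, &rm_list);
    mutex_unlock(&ctrl->namespaces_mutex);

    /* The private list is now ours alone, so removal needs no lock. */
    list_for_each_entry_safe(ns, next, &rm_list, list)
            nvme_ns_remove(ns);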
-
Jianchao Wang authored
Quiesce IO queues prior to disabling device HMB accesses. A controller using HMB may rely on it to efficiently complete IO commands. Reviewed-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Thomas Tai authored
Add examples to show how to use nvme fault injection. Signed-off-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Eric Saint-Etienne <eric.saint.etienne@oracle.com> Signed-off-by: Karl Volz <karl.volz@oracle.com> Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Thomas Tai authored
Linux's fault injection framework provides a systematic way to support error injection via debugfs in the /sys/kernel/debug directory. This patch uses the framework to add error injection to the NVMe driver. The fault injection source code is stored in a separate file and only linked if the CONFIG_FAULT_INJECTION_DEBUG_FS kernel config is selected. Once error injection is enabled, NVME_SC_INVALID_OPCODE with no retry will be injected into nvme_end_request. Users can change the default status code and no-retry flag via debugfs. The following example shows how to enable and inject an error. For more examples, refer to Documentation/fault-injection/nvme-fault-injection.txt.

How to enable nvme fault injection:
First, enable the CONFIG_FAULT_INJECTION_DEBUG_FS kernel config and recompile the kernel. After booting up the kernel, do the following.

How to inject an error:

  mount /dev/nvme0n1 /mnt
  echo 1 > /sys/kernel/debug/nvme0n1/fault_inject/times
  echo 100 > /sys/kernel/debug/nvme0n1/fault_inject/probability
  cp a.file /mnt

Expected Result:

  cp: cannot stat ‘/mnt/a.file’: Input/output error

Message from dmesg:

  FAULT_INJECTION: forcing a failure.
  name fault_inject, interval 1, probability 100, space 0, times 1
  CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.15.0-rc8+ #2
  Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
  Call Trace:
    <IRQ>
    dump_stack+0x5c/0x7d
    should_fail+0x148/0x170
    nvme_should_fail+0x2f/0x50 [nvme_core]
    nvme_process_cq+0xe7/0x1d0 [nvme]
    nvme_irq+0x1e/0x40 [nvme]
    __handle_irq_event_percpu+0x3a/0x190
    handle_irq_event_percpu+0x30/0x70
    handle_irq_event+0x36/0x60
    handle_fasteoi_irq+0x78/0x120
    handle_irq+0xa7/0x130
    ? tick_irq_enter+0xa8/0xc0
    do_IRQ+0x43/0xc0
    common_interrupt+0xa2/0xa2
    </IRQ>
  RIP: 0010:native_safe_halt+0x2/0x10
  RSP: 0018:ffffffff82003e90 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdd
  RAX: ffffffff817a10c0 RBX: ffffffff82012480 RCX: 0000000000000000
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: 0000000000000000 R08: 000000008e38ce64 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff82012480
  R13: ffffffff82012480 R14: 0000000000000000 R15: 0000000000000000
    ? __sched_text_end+0x4/0x4
    default_idle+0x18/0xf0
    do_idle+0x150/0x1d0
    cpu_startup_entry+0x6f/0x80
    start_kernel+0x4c4/0x4e4
    ? set_init_arg+0x55/0x55
    secondary_startup_64+0xa5/0xb0
  print_req_error: I/O error, dev nvme0n1, sector 9240
  EXT4-fs error (device nvme0n1): ext4_find_entry:1436: inode #2: comm cp: reading directory lblock 0

Signed-off-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Eric Saint-Etienne <eric.saint.etienne@oracle.com> Signed-off-by: Karl Volz <karl.volz@oracle.com> Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Minwoo Im authored
NVME_IDENTIFY_DATA_SIZE was added to linux/nvme.h by commit 0add5e8e ("nvmet: use NVME_IDENTIFY_DATA_SIZE"). Use the NVME_IDENTIFY_DATA_SIZE define instead of the magic value 0x1000 for the identify data size. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Sagi Grimberg authored
Instead of open-coding it. Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org> Cc: target-devel@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Sagi Grimberg authored
It's perfectly valid to assign a nvmet port to listen on the "any" IP address (traddr 0.0.0.0 for the ipv4 address family) for IP based transport ports. However, we must not return this address in discovery log entries; instead we need to return the address the request was accepted on (the req->port address). Since this is nvme transport specific, introduce an optional .disc_traddr interface that checks whether the port in question is bound to the "any" IP address and, if so, sets the traddr from the port the request came from. Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Sagi Grimberg authored
This can be useful for checking whether an address is the INET "any" address, for both the ipv4 and ipv6 address families. Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Cc: "David S. Miller" <davem@davemloft.net> Cc: netdev@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
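A minimal sketch of what such a helper can look like (the helper actually added by the patch may differ in name and placement):

    #include <linux/socket.h>
    #include <linux/in.h>
    #include <linux/in6.h>
    #include <net/ipv6.h>

    /* Sketch: return true if addr is 0.0.0.0 (ipv4) or :: (ipv6). */
    static bool addr_is_any(struct sockaddr *addr)
    {
            if (addr->sa_family == AF_INET) {
                    struct sockaddr_in *in = (struct sockaddr_in *)addr;

                    return in->sin_addr.s_addr == htonl(INADDR_ANY);
            } else if (addr->sa_family == AF_INET6) {
                    struct sockaddr_in6 *in6 = (struct sockaddr_in6 *)addr;

                    return ipv6_addr_any(&in6->sin6_addr);
            }
            return false;
    }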
-
- 22 Mar, 2018 3 commits
-
-
Shawn Lin authored
Running dd if=/dev/urandom of=/dev/mmcblk1 bs=4k count=10000 with an SD card hotplug during the transfer reports the warning below, introduced by commit a063057d ("block: Fix a race between request queue removal and the block cgroup controller"). So we should now remove the disk, partition and bdi sysfs attributes before cleaning up the request queue associated with the disk.

[ 410.331226] mmc1: card 59b4 removed
[ 410.348583] WARNING: CPU: 0 PID: 5 at block/blk-core.c:785 blk_cleanup_queue+0x138/0x140
[ 410.349294] Modules linked in:
[ 410.349570] CPU: 0 PID: 5 Comm: kworker/0:0 Not tainted 4.16.0-rc6-next-20180321-00004-gc2ad6a7 #263
[ 410.350363] Hardware name: Excavator-RK3399 Board (DT)
[ 410.350819] Workqueue: events_freezable mmc_rescan
[ 410.351242] pstate: 60000005 (nZCv daif -PAN -UAO)
[ 410.351663] pc : blk_cleanup_queue+0x138/0x140
[ 410.352054] lr : blk_cleanup_queue+0xac/0x140
[ 410.352436] sp : ffff0000092cbb90
[ 410.352727] x29: ffff0000092cbb90 x28: 0000000000000000
[ 410.353195] x27: ffff8000f6f23030 x26: ffff00000904e610
[ 410.353662] x25: ffff8000f17cc808 x24: ffff8000f1038200
[ 410.354128] x23: 0000000000000060 x22: 0000000000000000
[ 410.354595] x21: ffff8000f11748d8 x20: ffff8000f1038200
[ 410.355061] x19: ffff8000f1174200 x18: 0000ffff936347d8
[ 410.355528] x17: 0000ffff935b93c0 x16: ffff0000081263f8
[ 410.355994] x15: 0000000000000000 x14: 0000000000000400
[ 410.356461] x13: 0000000000000001 x12: 0000000000000001
[ 410.356927] x11: 0000000000000040 x10: ffff8000f2400028
[ 410.357393] x9 : ffff8000f2400040 x8 : 0000000000000000
[ 410.357860] x7 : ffff8000f6f3a340 x6 : ffff8000f6f3a340
[ 410.358326] x5 : ffff8000f2400000 x4 : ffff8000f6f3a340
[ 410.358792] x3 : 0000000000000000 x2 : 39c1333e45670800
[ 410.359259] x1 : 0000000000000000 x0 : 0000000000000003
[ 410.359726] Call trace:
[ 410.359943]  blk_cleanup_queue+0x138/0x140
[ 410.360305]  mmc_cleanup_queue+0x2c/0x48
[ 410.360652]  mmc_blk_remove_req+0x1c/0x98
[ 410.361005]  mmc_blk_remove+0x180/0x1c0
[ 410.361343]  mmc_bus_remove+0x1c/0x28
[ 410.361670]  device_release_driver_internal+0x154/0x1f0
[ 410.362128]  device_release_driver+0x14/0x20
[ 410.362504]  bus_remove_device+0xc8/0x108
[ 410.362858]  device_del+0x120/0x350
[ 410.363167]  mmc_remove_card+0x5c/0xb8
[ 410.363498]  mmc_sd_detect+0x40/0x78
[ 410.363813]  mmc_rescan+0x19c/0x368
[ 410.364123]  process_one_work+0x1ac/0x318
[ 410.364477]  worker_thread+0x50/0x450
[ 410.364801]  kthread+0xf8/0x128
[ 410.365081]  ret_from_fork+0x10/0x18
[ 410.365395] ---[ end trace 268e87a46c28968c ]---

Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
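The ordering the fix aims for, roughly (a sketch; the field names are illustrative rather than the exact mmc structures):

    /* Tear down the disk, and with it the disk/partition/bdi sysfs attributes,
     * before cleaning up the request queue that backs it. */
    del_gendisk(md->disk);                  /* illustrative fields */
    blk_cleanup_queue(md->queue.queue);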
-
Mikulas Patocka authored
I'm getting a slab named "biovec-(1<<(21-12))". It is caused by an unintended expansion of the macro BIO_MAX_PAGES. This patch renames the slab to "biovec-max". Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org # v4.14+ Signed-off-by: Jens Axboe <axboe@kernel.dk>
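What goes wrong is ordinary preprocessor stringification of the macro body; a standalone illustration (the BIO_MAX_PAGES definition here is only an example of how such an expression-valued definition behaves, not the kernel's):

    #include <stdio.h>

    #define __stringify_1(x) #x
    #define __stringify(x)   __stringify_1(x)

    /* If the macro expands to an expression rather than a plain literal ... */
    #define BIO_MAX_PAGES (1<<(21-12))

    int main(void)
    {
            /* ... the generated name contains the expression text verbatim. */
            printf("biovec-%s\n", __stringify(BIO_MAX_PAGES)); /* biovec-(1<<(21-12)) */
            return 0;
    }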
-
Mikulas Patocka authored
Early alpha processors cannot write a single byte or word; they read 8 bytes, modify the value in registers and write back 8 bytes. The type blk_status_t is defined as one byte, and it is often written asynchronously by I/O completion routines; this asynchronous modification can corrupt the content of nearby bytes if those bytes can be written simultaneously by another CPU.
- One example of such corruption is the structure dm_io, where "blk_status_t status" is written by an asynchronous completion routine and "atomic_t io_count" is modified synchronously.
- Another example is the structure dm_buffer, where "unsigned hold_count" is modified synchronously from process context and "blk_status_t write_error" is modified asynchronously from a bio completion routine.
This patch fixes the bug by changing the type blk_status_t to 32 bits if we are on Alpha and compiling for a processor that doesn't have the byte-word extension. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org # 4.13+ Signed-off-by: Jens Axboe <axboe@kernel.dk>
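A sketch of the kind of conditional definition this implies (the exact config/compiler symbols tested by the patch are an assumption here):

    /* blk_status_t must be at least 32 bits wide on CPUs that cannot perform
     * independent single-byte stores (early Alpha without the BWX extension),
     * otherwise an asynchronous status write can clobber neighbouring fields. */
    #if defined(CONFIG_ALPHA) && !defined(__alpha_bwx__)
    typedef u32 __bitwise blk_status_t;
    #else
    typedef u8 __bitwise blk_status_t;
    #endif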
-
- 19 Mar, 2018 12 commits
-
-
Bart Van Assche authored
scsi_device_quiesce() uses synchronize_rcu() to guarantee that the effect of blk_set_preempt_only() will be visible for percpu_ref_tryget() calls that occur after the queue unfreeze by using the approach explained in https://lwn.net/Articles/573497/. The rcu read lock and unlock calls in blk_queue_enter() form a pair with the synchronize_rcu() call in scsi_device_quiesce(). Both scsi_device_quiesce() and blk_queue_enter() must either use regular RCU or RCU-sched. Since neither the RCU-protected code in blk_queue_enter() nor blk_queue_usage_counter_release() sleeps, regular RCU protection is sufficient. Note: scsi_device_quiesce() does not have to be modified since it already uses synchronize_rcu(). Reported-by: Tejun Heo <tj@kernel.org> Fixes: 3a0a5299 ("block, scsi: Make SCSI quiesce and resume work reliably") Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Acked-by: Tejun Heo <tj@kernel.org> Cc: Tejun Heo <tj@kernel.org> Cc: Hannes Reinecke <hare@suse.com> Cc: Ming Lei <ming.lei@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Johannes Thumshirn <jthumshirn@suse.de> Cc: Oleksandr Natalenko <oleksandr@natalenko.name> Cc: Martin Steigerwald <martin@lichtvoll.de> Cc: stable@vger.kernel.org # v4.15 Signed-off-by: Jens Axboe <axboe@kernel.dk>
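A sketch of the read-side pairing this describes in blk_queue_enter() (heavily simplified; the preempt-only check and surrounding logic are condensed here):

    /* Must use the same RCU flavor as the synchronize_rcu() call in
     * scsi_device_quiesce() so the preempt-only flag is observed reliably. */
    rcu_read_lock();
    if (percpu_ref_tryget_live(&q->q_usage_counter)) {
            if (preempt || !blk_queue_preempt_only(q)) {
                    rcu_read_unlock();
                    return 0;                       /* got a queue reference */
            }
            percpu_ref_put(&q->q_usage_counter);
    }
    rcu_read_unlock();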
-
Bart Van Assche authored
Avoid that building with W=1 triggers the following compiler warning:

  drivers/md/bcache/super.c:776:20: warning: comparison is always false due to limited range of data type [-Wtype-limits]
     d->nr_stripes > SIZE_MAX / sizeof(atomic_t)) {
                   ^

Reviewed-by: Coly Li <colyli@suse.de> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Bart Van Assche authored
Add more annotations for sparse to inform it about which functions do not have the same number of spin_lock() and spin_unlock() calls. Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Bart Van Assche authored
This patch does not change any functionality. Reviewed-by: Michael Lyle <mlyle@lyle.org> Reviewed-by: Coly Li <colyli@suse.de> Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Bart Van Assche authored
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Bart Van Assche authored
Avoid that building with W=1 triggers warnings about the kernel-doc headers. Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Bart Van Assche authored
This patch avoids that building with W=1 triggers complaints about switch fall-throughs. Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
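At the time, the conventional way to tell the compiler (and readers) that a fall-through is intentional was a comment; a sketch with illustrative case labels and helpers:

    switch (op) {
    case OP_PREPARE:
            setup();                /* illustrative */
            /* fall through */
    case OP_RUN:
            run();
            break;
    default:
            break;
    }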
-
Bart Van Assche authored
Make it possible for the compiler to verify the consistency of the format string passed to __bch_check_keys() and the arguments that should be formatted according to that format string. Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
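This is typically done with the kernel's __printf() annotation, which lets GCC check the varargs against the format string; a sketch (the exact argument positions for __bch_check_keys are an assumption):

    /* __printf(n, m): argument n is the format string, varargs start at m. */
    __printf(2, 3)
    bool __bch_check_keys(struct btree_keys *b, const char *fmt, ...);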
-
Bart Van Assche authored
This patch avoids that smatch complains about inconsistent indentation. Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Reviewed-by: Coly Li <colyli@suse.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Coly Li authored
If a bcache device is configured in writeback mode, the current code does not handle write I/O errors on the backing device properly. In writeback mode, a write request is written to the cache device and later flushed to the backing device. If the I/O fails when writing from the cache device to the backing device, bcache just ignores the error and upper-layer code is not notified that the backing device is broken. This patch handles backing device failure the same way cache device failure is handled:
- Add an error counter 'io_errors' and an error limit 'error_limit' to struct cached_dev. Add another io_disable to struct cached_dev to disable I/O on the problematic backing device.
- When an I/O error happens on the backing device, increase the io_errors counter. If io_errors reaches error_limit, set cached_dev->io_disable to true and stop the bcache device.
The result is that if the backing device is broken or disconnected and I/O errors reach its error limit, the backing device will be disabled and the associated bcache device will be removed from the system.
Changelog:
v2: remove "bcache: " prefix in pr_error(), and use the correct name string to print out the bcache device gendisk name.
v1: this is newly added in the v2 patch set.
Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Cc: Michael Lyle <mlyle@lyle.org> Cc: Junhui Tang <tang.junhui@zte.com.cn> Signed-off-by: Jens Axboe <axboe@kernel.dk>
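A rough sketch of the accounting described in the list above (the flag bit, field types and helper names here are illustrative, not the exact patch):

    void bch_count_backing_io_errors(struct cached_dev *dc)
    {
            unsigned int errors = atomic_add_return(1, &dc->io_errors);

            if (errors < dc->error_limit)
                    return;

            /* Too many backing-device errors: disable further I/O and stop
             * the bcache device associated with this backing device. */
            if (!test_and_set_bit(CACHED_DEV_IO_DISABLE, &dc->flags))  /* illustrative */
                    bcache_device_stop(&dc->disk);
    }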
-
Coly Li authored
In order to catch I/O errors on the backing device, a separate bi_end_io callback is required. Then a per-backing-device counter can record the number of I/O errors and retire the backing device if the counter reaches a per-backing-device I/O error limit. This patch adds backing_request_endio() to the bcache backing device I/O code path as a preparation for further, more complicated backing device failure handling. So far there is no real code logic change; I make this change a separate patch to make sure it is stable and reliable for further work.
Changelog:
v2: fix code comment typos; remove a redundant bch_writeback_add() line added in the v4 patch set.
v1: this is newly added in this patch set.
[mlyle: truncated commit subject] Signed-off-by: Coly Li <colyli@suse.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Cc: Junhui Tang <tang.junhui@zte.com.cn> Cc: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Chengguang Xu authored
In the current code the closure debug file is created outside of the debug directory, and it is not removed when the module is unloaded, which causes a creation error when the module is reloaded. This patch moves the closure debug file into the "bcache" debug directory so that the file gets deleted properly. Signed-off-by: Chengguang Xu <cgxu519@gmx.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Reviewed-by: Tang Junhui <tang.junhui@zte.com.cn> Signed-off-by: Jens Axboe <axboe@kernel.dk>
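A minimal sketch of parenting the debugfs file under the module's own directory so it is removed together with it (variable and fops names are illustrative):

    /* module init (sketch): create the directory, then the file inside it */
    bcache_debug = debugfs_create_dir("bcache", NULL);
    closure_debug = debugfs_create_file("closures", 0400, bcache_debug,
                                        NULL, &debug_closures_fops);

    /* module exit (sketch): removing the directory also removes the file */
    debugfs_remove_recursive(bcache_debug);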
-