- 15 Jun, 2021 2 commits
-
-
Guoqing Jiang authored
We introduce a new bioset (io_acct_set) for raid0 and raid5, since they don't have their own clone infrastructure for accounting io. The bioset is added to mddev instead of to the raid0 and raid5 layers, because this way we can put the common functions in md.h and reuse them in raid0 and raid5. A struct md_io_acct is added accordingly, which holds the io start_time, the original bio and the cloned bio. Then we can call bio_{start,end}_io_acct to get the related io status. Signed-off-by: Guoqing Jiang <jiangguoqing@kylinos.cn> Signed-off-by: Song Liu <song@kernel.org>
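A minimal sketch of the accounting state described above, assuming the field names from the commit text (the exact code lives in drivers/md/md.c):

  /* per-io accounting state; one is carried by each clone
   * allocated from mddev->io_acct_set */
  struct md_io_acct {
  	struct bio *orig_bio;		/* the bio originally submitted to md */
  	unsigned long start_time;	/* returned by bio_start_io_acct() */
  	struct bio bio_clone;		/* clone allocated from io_acct_set */
  };

  /* sketch of the clone's completion handler: finish accounting on
   * the original bio, then complete it */
  static void md_end_io_acct(struct bio *bio)
  {
  	struct md_io_acct *md_io_acct = bio->bi_private;
  	struct bio *orig_bio = md_io_acct->orig_bio;

  	orig_bio->bi_status = bio->bi_status;
  	bio_end_io_acct(orig_bio, md_io_acct->start_time);
  	bio_put(bio);
  	bio_endio(orig_bio);
  }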
-
Guoqing Jiang authored
The commit 41d2d848 ("md: improve io stats accounting") could cause a double-fault problem per the report [1], and it is also not correct to change ->bi_end_io if md doesn't own the bio, so let's revert it. Io stats accounting will be reimplemented in later commits. [1]. https://lore.kernel.org/linux-raid/3bf04253-3fad-434a-63a7-20214e38cf26@gmail.com/T/#t Fixes: 41d2d848 ("md: improve io stats accounting") Signed-off-by: Guoqing Jiang <jiangguoqing@kylinos.cn> Signed-off-by: Song Liu <song@kernel.org>
-
- 14 Jun, 2021 1 commit
-
-
Christoph Hellwig authored
pstore-blk just pokes directly into the pagecache for the block device without going through the file operations, by faking up its own file operations that do not match the block device ones. As this breaks the block layer's control of its page cache, and even now only works by accident, the best thing is to just disable this driver. Fixes: 17639f67 ("pstore/blk: Introduce backend for block devices") Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20210608161327.1537919-1-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 09 Jun, 2021 6 commits
-
-
Zhen Lei authored
Fixes scripts/checkpatch.pl warning: WARNING: Possible unnecessary 'out of memory' message. Removing it can help us save a bit of memory. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
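The pattern behind this checkpatch warning, as a generic sketch (names hypothetical): the slab and page allocators already dump a warning and a backtrace on failure, so the extra message only bloats the kernel image.

  buf = kmalloc(size, GFP_KERNEL);
  if (!buf) {
  	pr_err("foo: failed to allocate buffer\n");	/* redundant, removed */
  	return -ENOMEM;
  }

becomes:

  buf = kmalloc(size, GFP_KERNEL);
  if (!buf)
  	return -ENOMEM;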
-
Zhen Lei authored
Fixes scripts/checkpatch.pl warning: WARNING: Possible unnecessary 'out of memory' message. Removing it can help us save a bit of memory. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Zhen Lei authored
Fixes scripts/checkpatch.pl warning: WARNING: Possible unnecessary 'out of memory' message. Removing it can help us save a bit of memory. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Zhen Lei authored
Fixes scripts/checkpatch.pl warning: WARNING: Possible unnecessary 'out of memory' message. Removing it can help us save a bit of memory. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Zhen Lei authored
Fixes scripts/checkpatch.pl warning: WARNING: Possible unnecessary 'out of memory' message. Removing it can help us save a bit of memory. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Zhen Lei authored
Fixes scripts/checkpatch.pl warning: WARNING: Possible unnecessary 'out of memory' message. Removing it can help us save a bit of memory. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 08 Jun, 2021 1 commit
-
-
git://git.infradead.org/nvme Jens Axboe authored
Pull NVMe updates from Christoph: "nvme updates for Linux 5.14
- improve the APST configuration algorithm (Alexey Bogoslavsky)
- look for StorageD3Enable on companion ACPI device (Mario Limonciello)
- allow selecting the network interface for TCP connections (Martin Belanger)
- misc cleanups (Amit Engel, Chaitanya Kulkarni, Colin Ian King, me)"

* tag 'nvme-5.14-2021-06-08' of git://git.infradead.org/nvme:
  nvmet: remove a superfluous variable
  nvmet: move ka_work initialization to nvmet_alloc_ctrl
  nvme: remove nvme_{get,put}_ns_from_disk
  nvme: split nvme_report_zones
  nvme: move the CSI sanity check into nvme_ns_report_zones
  nvme: add a sparse annotation to nvme_ns_head_ctrl_ioctl
  nvme: open code nvme_put_ns_from_disk in nvme_ns_head_ctrl_ioctl
  nvme: open code nvme_{get,put}_ns_from_disk in nvme_ns_head_ioctl
  nvme: open code nvme_put_ns_from_disk in nvme_ns_head_chr_ioctl
  nvme-fabrics: remove extra braces
  nvme-fabrics: remove an extra comment
  nvme-fabrics: remove extra new lines in the switch
  nvme-fabrics: fix the kerneldoc comment for nvmf_log_connect_error()
  nvme-tcp: allow selecting the network interface for connections
  nvme-pci: look for StorageD3Enable on companion ACPI device instead
  nvme: extend and modify the APST configuration algorithm
  nvme: remove redundant initialization of variable ret
-
- 03 Jun, 2021 17 commits
-
-
Chaitanya Kulkarni authored
Remove the superfluous variable "bdev", which is only used once in nvmet_bdev_alloc_bip(), and use req->ns->bdev, which is used everywhere else in the code to access the nvmet request's bdev. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
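The shape of the change, as a sketch (surrounding code abridged):

  /* before: a local that is used exactly once */
  struct block_device *bdev = req->ns->bdev;
  bi = bdev_get_integrity(bdev);

  /* after: use req->ns->bdev directly, as the rest of the code does */
  bi = bdev_get_integrity(req->ns->bdev);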
-
Amit Engel authored
Initialize the keep-alive work only once, as part of alloc_ctrl, and not each time nvmet_start_keep_alive_timer is called. Signed-off-by: Amit Engel <amit.engel@dell.com> Reviewed-by: Hou Pu <houpu.main@gmail.com>
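A sketch of the resulting split, assuming the surrounding nvmet code (bodies abridged):

  /* in nvmet_alloc_ctrl(): initialize exactly once, at allocation */
  INIT_DELAYED_WORK(&ctrl->ka_work, nvmet_keep_alive_timer);

  static void nvmet_start_keep_alive_timer(struct nvmet_ctrl *ctrl)
  {
  	...
  	/* no INIT_DELAYED_WORK() here anymore; just arm the timer */
  	schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ);
  }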
-
Christoph Hellwig authored
Now that only one caller is left, remove the helpers by restructuring nvme_pr_command so that it has two helpers for sending a command off to a given nsid, using either the ns_head for multipath, or the namespace stored in the gendisk. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
-
Christoph Hellwig authored
Split multipath support out of nvme_report_zones into a separate helper and simplify the non-multipath version as a result. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
-
Christoph Hellwig authored
Move the CSI check into nvme_ns_report_zones to clean up the code a little bit and prepare for further refactoring. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
-
Christoph Hellwig authored
Add the __releases annotation to tell sparse that nvme_ns_head_ctrl_ioctl is expected to unlock head->srcu. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
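The annotation looks roughly like this (signature abridged); it tells sparse that the function returns with head->srcu read-unlocked even though it never locked it itself:

  static int nvme_ns_head_ctrl_ioctl(struct nvme_ns *ns, unsigned int cmd,
  		void __user *argp, struct nvme_ns_head *head, int srcu_idx)
  	__releases(&head->srcu)
  {
  	...
  	srcu_read_unlock(&head->srcu, srcu_idx);
  	...
  }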
-
Christoph Hellwig authored
nvme_ns_head_ctrl_ioctl is always used on multipath nodes, so just call srcu_read_unlock directly. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
-
Christoph Hellwig authored
nvme_ns_head_ioctl is always used on multipath nodes, no need to deal with the de-multiplexers. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
-
Christoph Hellwig authored
nvme_ns_head_chr_ioctl is always used on multipath nodes, so just call srcu_read_unlock and consolidate the two unlock paths. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
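A hedged sketch of the consolidation used by these three patches (helper names assumed from context, bodies abridged): with the de-multiplexing put helper gone, both outcomes fall through to a single srcu_read_unlock().

  srcu_idx = srcu_read_lock(&head->srcu);
  ns = nvme_find_path(head);
  ret = ns ? nvme_ns_ioctl(ns, cmd, argp) : -EWOULDBLOCK;
  srcu_read_unlock(&head->srcu, srcu_idx);
  return ret;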
-
Chaitanya Kulkarni authored
No need to use braces around the ~ operator. No functionality change in this patch. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Chaitanya Kulkarni authored
Remove the comment at the end of the switch; it is not needed, as the function is small enough. No functionality change in this patch. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Chaitanya Kulkarni authored
Remove the extra blank lines in the switch block, which are not common practice in kernel code. No functionality change in this patch. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Chaitanya Kulkarni authored
Fix the comment style so that it matches the existing code. No functionality change in this patch. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Martin Belanger authored
In our application, we need a way to force TCP connections to go out a specific IP interface instead of letting Linux select the interface based on the routing tables. Add the 'host-iface' option to allow specifying the interface to use. When the option host-iface is specified, the driver uses the specified interface to set the option SO_BINDTODEVICE on the TCP socket before connecting.

This new option is needed in addition to the existing host-traddr for the following reasons:

Specifying an IP interface by its associated IP address is less intuitive than specifying the actual interface name and, in some cases, simply doesn't work. That's because the association between interfaces and IP addresses is not predictable. IP addresses can be changed or can change by themselves over time (e.g. DHCP). Interface names are predictable [1] and will persist over time. Consider the following configuration.

  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state ...
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 100.0.0.100/24 scope global lo
        valid_lft forever preferred_lft forever
  2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc ...
     link/ether 08:00:27:21:65:ec brd ff:ff:ff:ff:ff:ff
     inet 100.0.0.100/24 scope global enp0s3
        valid_lft forever preferred_lft forever
  3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc ...
     link/ether 08:00:27:4f:95:5c brd ff:ff:ff:ff:ff:ff
     inet 100.0.0.100/24 scope global enp0s8
        valid_lft forever preferred_lft forever

The above is a VM that I configured with the same IP address (100.0.0.100) on all interfaces. Doing a reverse lookup to identify the unique interface associated with 100.0.0.100 does not work here. And this is why the option host_iface is required. I understand that the above config does not represent a standard host system, but I'm using this to prove a point: "We can never know how users will configure their systems". By the way, the above configuration is perfectly fine by Linux.

The current TCP implementation for host_traddr performs a bind()-before-connect(). This is a common construct to set the source IP address on a TCP socket before connecting. This has no effect on how Linux selects the interface for the connection. That's because Linux uses the Weak End System model as described in RFC1122 [2]. On the other hand, setting the Source IP Address has benefits and should be supported by linux-nvme. In fact, setting the Source IP Address is a mandatory FedGov requirement (e.g. connection to a RADIUS/TACACS+ server). Consider the following configuration.

  $ ip addr list dev enp0s8
  3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc ...
     link/ether 08:00:27:4f:95:5c brd ff:ff:ff:ff:ff:ff
     inet 192.168.56.101/24 brd 192.168.56.255 scope global enp0s8
        valid_lft 426sec preferred_lft 426sec
     inet 192.168.56.102/24 scope global secondary enp0s8
        valid_lft forever preferred_lft forever
     inet 192.168.56.103/24 scope global secondary enp0s8
        valid_lft forever preferred_lft forever
     inet 192.168.56.104/24 scope global secondary enp0s8
        valid_lft forever preferred_lft forever

Here we can see that several addresses are associated with interface enp0s8. By default, Linux always selects the default IP address, 192.168.56.101, as the source address when connecting over interface enp0s8. Some users, however, want the ability to specify a different source address (e.g., 192.168.56.102, 192.168.56.103, ...). The option host_traddr can be used as-is to perform this function.

In conclusion, I believe that we need 2 options for TCP connections. One that can be used to specify an interface (host-iface), and one that can be used to set the source address (host-traddr). Users should be allowed to use one or the other, or both, or none. Of course, the documentation for host_traddr will need some clarification. It should state that when used for TCP connections, this option only sets the source address. And the documentation for host_iface should say that this option is only available for TCP connections.

References: [1] https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/ [2] https://tools.ietf.org/html/rfc1122

Tested both IPv4 and IPv6 connections. Signed-off-by: Martin Belanger <martin.belanger@dell.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
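The distinction between the two options in socket terms, as a userspace C illustration (the driver does the equivalent with in-kernel socket calls; interface and address values taken from the examples above):

  #include <string.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  int main(void)
  {
  	int fd = socket(AF_INET, SOCK_STREAM, 0);

  	/* host-iface: pin the connection to a named interface regardless
  	 * of routing; this is what SO_BINDTODEVICE does (needs CAP_NET_RAW) */
  	const char *iface = "enp0s8";
  	setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, iface, strlen(iface));

  	/* host-traddr: bind()-before-connect() sets only the source
  	 * address; interface selection is still left to the routing code
  	 * (Weak End System model, RFC1122) */
  	struct sockaddr_in src = {
  		.sin_family = AF_INET,
  		.sin_addr.s_addr = inet_addr("192.168.56.102"),
  	};
  	bind(fd, (struct sockaddr *)&src, sizeof(src));
  	/* ...connect(fd, ...) would follow here */
  	return 0;
  }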
-
Mario Limonciello authored
The documentation around the StorageD3Enable property hints that it should be made on the PCI device. This is where newer AMD systems set the property, and it's required for S0i3 support. So rather than look for root-port nodes that are only present on Intel systems, switch to the companion ACPI device for all systems. David Box from Intel indicated this should work on Intel as well. Link: https://lore.kernel.org/linux-nvme/YK6gmAWqaRmvpJXb@google.com/T/#m900552229fa455867ee29c33b854845fce80ba70 Link: https://docs.microsoft.com/en-us/windows-hardware/design/component-guidelines/power-management-for-storage-hardware-devices-intro Fixes: df4f9bc4 ("nvme-pci: add support for ACPI StorageD3Enable property") Suggested-by: Prike Liang <Prike.Liang@amd.com> Acked-by: Raul E Rangel <rrangel@chromium.org> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Reviewed-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
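A sketch of the lookup after this change; the helper name follows the driver, the property-read call is an assumption:

  static bool nvme_acpi_storage_d3(struct pci_dev *dev)
  {
  	/* the companion ACPI node of the NVMe device itself, rather
  	 * than an Intel-only child node of the root port */
  	struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
  	u8 val;

  	if (!adev)
  		return false;
  	if (fwnode_property_read_u8(acpi_fwnode_handle(adev),
  				    "StorageD3Enable", &val))
  		return false;
  	return val == 1;
  }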
-
Alexey Bogoslavsky authored
The algorithm that was used until now for building the APST configuration table has been found to produce entries with excessively long ITPT (idle time prior to transition) for devices declaring relatively long entry and exit latencies for non-operational power states. This leads to unnecessary waste of power and, as a result, failure to pass mandatory power consumption tests on Chromebook platforms. The new algorithm is based on two predefined ITPT values and two predefined latency tolerances. Based on these values, as well as on the exit and entry latencies reported by the device, the algorithm looks for up to 2 suitable non-operational power states to use as primary and secondary APST transition targets. The predefined values are supplied to the nvme driver as module parameters:
- apst_primary_timeout_ms (default: 100)
- apst_secondary_timeout_ms (default: 2000)
- apst_primary_latency_tol_us (default: 15000)
- apst_secondary_latency_tol_us (default: 100000)
The algorithm echoes the approach used by Intel's and Microsoft's drivers on Windows. The specific default parameter values are also based on those drivers. Yet, this patch doesn't introduce the ability to dynamically regenerate the APST table in the event of switching the power source from AC to battery and back. Adding this functionality may be considered in the future. In the meantime, the timeouts and tolerances reflect a compromise between the values used by Microsoft for AC and battery scenarios. For most NVMe devices the new algorithm results in a more aggressive power saving policy. While beneficial in most cases, this sometimes comes at the price of a higher IO processing latency in certain scenarios, as well as at the price of a potential impact on the drive's endurance (due to more frequent context saving when entering deep non-operational states). So in order to provide a fallback for systems where these regressions cannot be tolerated, the patch allows reverting to the legacy behavior by setting either the apst_primary_timeout_ms or the apst_primary_latency_tol_us parameter to 0. Eventually (and possibly after fine tuning the default values of the module parameters) the legacy behavior can be removed.

TESTING. The new algorithm has been extensively tested. Initially, simulations were used to compare the APST tables generated by the old and new algorithms for a wide range of devices. After that, power consumption, performance and latencies were measured under different workloads on devices from multiple vendors (WD, Intel, Samsung, Hynix, Kioxia). Below is the description of the tests and the findings.

General observations. The effect the patch has on the APST table varies depending on the entry and exit latencies advertised by the devices. For some devices the effect is negligible (e.g. Kioxia KBG40ZNS); for some it is significant, making the transitions to PS3 and PS4 much quicker (e.g. WD SN530, Intel 760P), or making the sleep deeper, PS4 rather than PS3 after a similar amount of time (e.g. SK Hynix BC511). For some devices (e.g. Samsung PM991) the effect is mixed: the initial transition happens after a longer idle time, but takes the device to a lower power state.

Workflows. In order to evaluate the patch's effect on power consumption and latency, 7 workflows were used for each device. The workflows were designed to test the scenarios where significant differences between the old and new behaviors are most likely. Each workflow was tested twice: with the new and with the old APST table generation implementation.
Power consumption, performance and latency were measured in the process. The following workflows were used:
1) Consecutive write at the maximum rate with IO depth of 2, with no pauses
2) Repeated pattern of 1000 consecutive writes of 4K packets followed by 50ms idle time
3) Repeated pattern of 1000 consecutive writes of 4K packets followed by 150ms idle time
4) Repeated pattern of 1000 consecutive writes of 4K packets followed by 500ms idle time
5) Repeated pattern of 1000 consecutive writes of 4K packets followed by 1.5s idle time
6) Repeated pattern of 1000 consecutive writes of 4K packets followed by 5s idle time
7) Repeated pattern of a single random read of a 4K packet followed by 150ms idle time

Power consumption. Actual power consumption measurements produced predictable results in accordance with the APST mechanism's theory of operation. Devices with long entry and exit latencies such as the WD SN530 showed huge improvements of up to 62% on scenarios 4, 5 and 6. Devices such as the Kioxia KBG40ZNS, where the resulting APST table looks virtually identical with both the legacy and the new algorithm, showed little or no change in average power consumption on all workflows. Devices with extra short latencies such as the Samsung PM991 showed a moderate increase in power consumption of up to 18% in worst case scenarios. In addition, on Intel and Samsung devices a more complex impact was observed on scenarios 3, 4 and 7. Our understanding is that due to the longer stay in deep non-operational states between the writes, the devices start performing background operations, leading to an increase in power consumption. With the old APST tables part of these operations are delayed until the scenario is over and a longer idle period begins, but eventually this extra power is consumed anyway.

Performance. In terms of performance measured on sustained write or read scenarios, the effect of the patch is minimal, as in this case the device doesn't enter low power states.

Latency. As expected, in devices where the patch causes a more aggressive power saving policy (e.g. WD SN530, Intel 760P), an increase in latency was observed in certain scenarios. Workflow number 7, specifically designed to simulate the worst case scenario as far as latency is concerned, indeed shows a sharp increase in average latency (~2ms -> ~53ms on Intel 760P and 0.6ms -> 10ms on WD SN530). The latency increase on other workloads and other devices is much milder or non-existent.

Signed-off-by: Alexey Bogoslavsky <alexey.bogoslavsky@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
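A heavily hedged sketch of the target selection described above (parameter semantics from the commit text; the control flow is an illustration, not the driver's exact code):

  /* pick the deepest non-operational state whose exit latency fits
   * the given tolerance; called once with the primary tolerance and
   * once with the secondary one, each paired with its predefined ITPT */
  static int apst_find_target(struct nvme_ctrl *ctrl, u64 max_lat_us)
  {
  	int state;

  	for (state = ctrl->npss; state > 0; state--) {
  		u64 exit_lat_us = le32_to_cpu(ctrl->psd[state].exit_lat);

  		if (!(ctrl->psd[state].flags & NVME_PS_FLAGS_NON_OP_STATE))
  			continue;
  		if (exit_lat_us <= max_lat_us)
  			return state;
  	}
  	return -1;	/* no suitable state; fewer than 2 targets is fine */
  }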
-
Colin Ian King authored
The variable ret is being initialized with a value that is never read; it is updated later on. The assignment is redundant and can be removed. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 24 May, 2021 13 commits
-
-
Gustavo A. R. Silva authored
Make use of the struct_size() helper instead of an open-coded version, in order to avoid any potential type mistakes or integer overflows that, in the worst scenario, could lead to heap overflows. This code was detected with the help of Coccinelle, and audited and fixed manually. Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Link: https://lore.kernel.org/r/20210513203730.GA212128@embeddedor Signed-off-by: Jens Axboe <axboe@kernel.dk>
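What the helper buys, in a nutshell (struct and names hypothetical):

  struct report {
  	u32 nr_entries;
  	u64 entries[];			/* flexible array member */
  };

  /* open-coded size math can overflow if nr is large: */
  r = kzalloc(sizeof(*r) + sizeof(u64) * nr, GFP_KERNEL);

  /* struct_size() saturates on overflow, so the allocation fails
   * cleanly instead of returning an undersized buffer: */
  r = kzalloc(struct_size(r, entries, nr), GFP_KERNEL);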
-
John Garry authored
The tags used for an IO scheduler are currently per hctx. As such, when q->nr_hw_queues grows, so does the request queue's total IO scheduler tag depth. This may cause problems for SCSI MQ HBAs whose total driver depth is fixed. Ming and Yanhui report higher CPU usage and lower throughput in scenarios where the fixed total driver tag depth is appreciably lower than the total scheduler tag depth: https://lore.kernel.org/linux-block/440dfcfc-1a2c-bd98-1161-cec4d78c6dfc@huawei.com/T/#mc0d6d4f95275a2743d1c8c3e4dc9ff6c9aa3a76b In that scenario, since the scheduler tag is acquired first, much contention is introduced, as a driver tag may not be available after we have got the sched tag. Improve this scenario by introducing request queue-wide tags for when a tagset-wide sbitmap is used. The static sched requests are still allocated per hctx, as requests are initialised per hctx, as in blk_mq_init_request(..., hctx_idx, ...) -> set->ops->init_request(.., hctx_idx, ...). For simplicity of resizing the request queue sbitmap when updating the request queue depth, just init at the max possible size, so we don't need to deal with the possibility of swapping in a new sbitmap for the old one if we need to grow. Signed-off-by: John Garry <john.garry@huawei.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/1620907258-30910-3-git-send-email-john.garry@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
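A hedged sketch of the init-at-max trick (the sbitmap API calls are real; the MAX_SCHED_RQ cap and field names are taken from the series but abridged):

  /* init the queue-wide scheduler tags sbitmap once, at the maximum
   * possible depth... */
  ret = sbitmap_queue_init_node(&q->sched_bitmap_tags, MAX_SCHED_RQ,
  				-1, false, GFP_KERNEL, q->node);

  /* ...so later nr_requests updates only resize the visible depth
   * and never have to swap in a new sbitmap */
  sbitmap_queue_resize(&q->sched_bitmap_tags,
  		     q->nr_requests - set->reserved_tags);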
-
John Garry authored
The tag allocation code to alloc the sbitmap pairs is common for regular bitmap tags and shared sbitmap, so refactor it into a common function. Also remove the superfluous "flags" argument from blk_mq_init_shared_sbitmap(). Signed-off-by: John Garry <john.garry@huawei.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/1620907258-30910-2-git-send-email-john.garry@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ming Lei authored
Before we free the request queue, clear the flush request reference in tags->rqs[] so that a potential UAF can be avoided. Based on a patch written by David Jeffery. Tested-by: John Garry <john.garry@huawei.com> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: David Jeffery <djeffery@redhat.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20210511152236.763464-5-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ming Lei authored
refcount_inc_not_zero() in bt_tags_iter() may still read a freed request. Fix the issue with the following approach:
1) hold a per-tags spinlock when reading ->rqs[tag] and calling refcount_inc_not_zero() in bt_tags_iter();
2) clear stale requests referred to via ->rqs[tag] before freeing the request pool, with the per-tags spinlock held while clearing the stale ->rqs[tag] entries.
After we have cleared the stale requests, bt_tags_iter() won't observe a freed request any more, and the clearing will also wait for pending request references. The idea of clearing ->rqs[] is borrowed from John Garry's previous patch and a recent patch from David. Tested-by: John Garry <john.garry@huawei.com> Reviewed-by: David Jeffery <djeffery@redhat.com> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20210511152236.763464-4-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
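A sketch of the clearing half under the new lock (the pool-membership check is hypothetical; the real code knows exactly which requests are about to be freed):

  unsigned long flags;
  int i;

  spin_lock_irqsave(&tags->lock, flags);
  for (i = 0; i < tags->nr_tags; i++) {
  	struct request *rq = tags->rqs[i];

  	if (rq && rq_belongs_to_freed_pool(rq))	/* hypothetical check */
  		cmpxchg(&tags->rqs[i], rq, NULL);
  }
  spin_unlock_irqrestore(&tags->lock, flags);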
-
Ming Lei authored
Grab rq->refcount before calling ->fn in blk_mq_tagset_busy_iter(); this prevents the request from being re-used while ->fn is running. The approach is the same as what we do during timeout handling. This fixes request use-after-free (UAF) related to completion races or queue releasing:
- If a rq is referred to before rq->q is frozen, then the queue won't be frozen before the request is released during iteration.
- If a rq is referred to after rq->q is frozen, refcount_inc_not_zero() will return false, and we won't iterate over this request.
However, one request UAF is still not covered: refcount_inc_not_zero() may read a freed request; that will be handled in the next patch. Tested-by: John Garry <john.garry@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20210511152236.763464-3-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
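The guard in the iterator, sketched (helper names abridged from the series):

  struct request *rq = tags->rqs[bitnr];

  /* only call ->fn on requests we managed to pin; a refcount of
   * zero means the request is already being freed */
  if (!rq || !refcount_inc_not_zero(&rq->ref))
  	return true;			/* skip it, keep iterating */
  ret = iter_data->fn(rq, iter_data->data, reserved);
  blk_mq_put_rq_ref(rq);		/* drop our pin again */
  return ret;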
-
Ming Lei authored
For a flush request, rq->end_io() may be called two times: once from timeout handling (blk_mq_check_expired()) and once from normal completion (__blk_mq_end_request()). Move blk_account_io_flush() to after flush_rq->ref drops to zero, so io accounting is done just once for a flush request. Fixes: b6866318 ("block: add iostat counters for flush requests") Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: John Garry <john.garry@huawei.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20210511152236.763464-2-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
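The effect of the move, sketched against flush_end_io() (abridged):

  static void flush_end_io(struct request *flush_rq, blk_status_t error)
  {
  	...
  	if (!refcount_dec_and_test(&flush_rq->ref))
  		return;		/* the other end_io call is still pending */

  	/* account exactly once, after the last reference is gone */
  	blk_account_io_flush(flush_rq);
  	...
  }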
-
Max Gurtovoy authored
Align to common code conventions. Signed-off-by: Max Gurtovoy <mgurtovoy@nvidia.com> Link: https://lore.kernel.org/r/20210511155319.1885277-1-mgurtovoy@nvidia.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Tejun Heo authored
blkcg has always rejected attaching if any of the member tasks has a shared io_context. The rationale was that io_contexts can be shared across different cgroups, making it impossible to define what the appropriate control behavior should be. However, this check causes more problems than it solves:
* The check prevents controller enable and migrations but not CLONE_IO itself, which can lead to surprises as the outcome changes depending on the order of operations.
* Sharing within a cgroup is fine but the check can't distinguish that. This leads to unnecessary conflicts with the recent CLONE_IO usage in io_uring.
io_context sharing doesn't make any difference for rq_qos based controllers, and the way it's used is safe as long as tasks aren't migrated dynamically, which is the vast majority of use cases. While we can try to make the check more precise to avoid false positives, the added complexity doesn't seem worthwhile. Let's just drop blkcg_can_attach(). Signed-off-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/r/YJrTvHbrRDbJjw+S@slm.duckdns.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Yang Yingliang authored
The mutex ktio_spawn_lock is initialized statically, so it is unnecessary to initialize it again with mutex_init(). Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Link: https://lore.kernel.org/r/20210511113440.3772053-1-yangyingliang@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
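The pattern being fixed:

  /* initialized statically at build time: */
  static DEFINE_MUTEX(ktio_spawn_lock);

  /* ...so a runtime mutex_init(&ktio_spawn_lock) in the init path
   * is redundant and can simply be dropped */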
-
zhangyi (F) authored
Now that the block_dump feature is gone, remove all mentions of it in the docs. Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20210313030146.2882027-4-yi.zhang@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
zhangyi (F) authored
We have already deleted the block_dump feature from mark_inode_dirty() because it can be replaced by tracepoints; now remove the part in submit_bio() for the same reason. The block_dump code in submit_bio() dumps the writing process, the write region and the sectors on the target disk into the kernel message log. It can be replaced by the block_bio_queue tracepoint in submit_bio_checks(), so we do not need block_dump anymore; remove the whole block_dump feature. Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20210313030146.2882027-3-yi.zhang@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
zhangyi (F) authored
block_dump is an old debugging interface; one of its functions is to print information about who writes which file on disk. If we enable block_dump through /proc/sys/vm/block_dump and turn on the debug log level, we can gather information about the writing process name, the target file name and the disk from the kernel message log. This feature is realized in block_dump___mark_inode_dirty(), which prints the above information directly into the kernel message log when marking an inode dirty, so it is noisy and can easily trigger a log storm. At the same time, taking the dentry refcount there is also not safe; we found it leads to a deadlock on the ext4 file system with data=journal mode. After tracepoints were introduced into the kernel, we got a tracepoint in __mark_inode_dirty(), which is a better replacement for block_dump___mark_inode_dirty(). The only downside is that it only traces the inode number and not the file name, but that probably doesn't matter, because the file name originally printed by block_dump is not accurate in some cases anyway, and we can still find the file through the inode number and the device id. So this patch deletes the inode-dirtying part of the block_dump feature. Signed-off-by: zhangyi (F) <yi.zhang@huawei.com> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20210313030146.2882027-2-yi.zhang@huawei.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-