- 25 Jun, 2021 7 commits
-
-
Mikulas Patocka authored
The implementation reuses dm_io_tracker, which until now was only used by dm-cache, to track whether any writes were issued directly to the origin (due to the cache being full) within the last second. If so, writeback is paused for a second. This change improves performance when the cache is full and IO is issued directly to the origin device (rather than through the cache). Depends-on: d53f1faf ("dm writecache: do direct write if the cache is full") Suggested-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
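A minimal sketch of the heuristic just described, assuming illustrative names (the real dm_io_tracker interface lives in drivers/md/ and may differ):

    #include <linux/jiffies.h>
    #include <linux/spinlock.h>

    struct io_tracker {
            spinlock_t lock;
            unsigned long last_origin_write;   /* jiffies of last direct write */
    };

    /* Called when a write bypasses the full cache and goes to the origin. */
    static void iot_note_origin_write(struct io_tracker *t)
    {
            spin_lock_irq(&t->lock);
            t->last_origin_write = jiffies;
            spin_unlock_irq(&t->lock);
    }

    /* Writeback polls this and sleeps for a second while it returns true. */
    static bool iot_origin_written_recently(struct io_tracker *t)
    {
            bool recent;

            spin_lock_irq(&t->lock);
            recent = time_before(jiffies, t->last_origin_write + HZ);
            spin_unlock_irq(&t->lock);
            return recent;
    }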
-
Mike Snitzer authored
Allow other code to use dm_io_tracker. Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Hou Tao authored
remove_raw() in dm_btree_remove() may fail due to an IO read error (e.g. reading the content of the origin block fails during shadowing), leaving shadow_spine::root uninitialized; that uninitialized value was nevertheless assigned to new_root at the end of dm_btree_remove(). For dm-thin, the value of pmd->details_root or pmd->root then becomes uninitialized, so trying to read the details_info tree again may access out-of-bounds memory, as shown below:

    general protection fault, probably for non-canonical address 0x3fdcb14c8d7520
    CPU: 4 PID: 515 Comm: dmsetup Not tainted 5.13.0-rc6
    Hardware name: QEMU Standard PC
    RIP: 0010:metadata_ll_load_ie+0x14/0x30
    Call Trace:
     sm_metadata_count_is_more_than_one+0xb9/0xe0
     dm_tm_shadow_block+0x52/0x1c0
     shadow_step+0x59/0xf0
     remove_raw+0xb2/0x170
     dm_btree_remove+0xf4/0x1c0
     dm_pool_delete_thin_device+0xc3/0x140
     pool_message+0x218/0x2b0
     target_message+0x251/0x290
     ctl_ioctl+0x1c4/0x4d0
     dm_ctl_ioctl+0xe/0x20
     __x64_sys_ioctl+0x7b/0xb0
     do_syscall_64+0x40/0xb0
     entry_SYSCALL_64_after_hwframe+0x44/0xae

Fix it by only assigning new_root when the removal succeeds. Signed-off-by: Hou Tao <houtao1@huawei.com> Cc: stable@vger.kernel.org Signed-off-by: Mike Snitzer <snitzer@redhat.com>
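A sketch of the fix pattern under stated assumptions — remove_raw() and shadow_root() stand in for the real btree internals; the point is that new_root is only written on success:

    static int btree_remove_sketch(struct shadow_spine *s, uint64_t key,
                                   dm_block_t *new_root)
    {
            int r = remove_raw(s, key);     /* may fail with an I/O error */

            if (!r)                         /* spine root is valid only here */
                    *new_root = shadow_root(s);

            return r;                       /* on failure, *new_root untouched */
    }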
-
Damien Le Moal authored
Make sure that the zone write pointer offset array can be allocated with vmalloc in dm_zone_revalidate_cb() by passing the GFP_KERNEL gfp flag to kvcalloc(). However, since we do not want to trigger IOs while revalidating zones, change dm_revalidate_zones() to run the zone scan in GFP_NOIO context using memalloc_noio_save/restore calls. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Fixes: bb37d772 ("dm: introduce zone append emulation") Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
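A sketch of the allocation/scan split described above; kvcalloc() and the memalloc_noio_*() scope functions are real APIs, while scan_zones() and the zwp_offset field stand in for dm internals:

    #include <linux/mm.h>
    #include <linux/sched/mm.h>

    static int revalidate_zones_sketch(struct mapped_device *md,
                                       unsigned int nr_zones)
    {
            unsigned int noio_flag;
            int ret;

            /* GFP_KERNEL lets kvcalloc() fall back to vmalloc if needed. */
            md->zwp_offset = kvcalloc(nr_zones, sizeof(unsigned int),
                                      GFP_KERNEL);
            if (!md->zwp_offset)
                    return -ENOMEM;

            /* The zone report itself must not recurse into I/O. */
            noio_flag = memalloc_noio_save();
            ret = scan_zones(md);           /* hypothetical zone scan */
            memalloc_noio_restore(noio_flag);

            return ret;
    }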
-
Colin Ian King authored
The continue statement at the end of a for-loop has no effect; remove it. Addresses-Coverity: ("Continue has no effect") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
Add a "metadata_only" parameter that when present: only metadata is promoted to the cache. This option improves performance for heavier REQ_META workloads (e.g. device-mapper-test-suite's "git clone and checkout" benchmark improves from 341s to 312s). Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mike Snitzer authored
Backfill missing Documentation. Fixes: 93de44eb ("dm writecache: implement the "cleaner" policy") Fixes: 3923d485 ("dm writecache: implement gradual cleanup") Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 21 Jun, 2021 1 commit
-
-
Mikulas Patocka authored
SSDs perform badly with sub-4k writes (because they perform read-modify-write internally), so make sure writecache writes at least 4k when committing. Fixes: 991bd8d7 ("dm writecache: commit just one block, not a full page") Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
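A minimal sketch of the implied size clamp (the constant and helper names are illustrative, not the dm-writecache code):

    /*
     * Never commit less than 4KiB, even when the cache block size is
     * smaller, so the SSD never has to do a sub-4k read-modify-write.
     */
    #define WC_MIN_COMMIT_BYTES 4096u

    static unsigned int wc_commit_bytes(unsigned int block_bytes)
    {
            return max(block_bytes, WC_MIN_COMMIT_BYTES);
    }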
-
- 16 Jun, 2021 1 commit
-
-
Mikulas Patocka authored
Commit d53f1faf ("dm writecache: do direct write if the cache is full") changed dm-writecache so that it writes directly to the origin device if the cache is full. Unfortunately, it didn't forward flush requests to the origin device, so flushes were being ignored. Fix this by adding the missing flush forwarding. For PMEM mode, fix the bug by disabling direct writes to the origin device, because writing through the cache performs better. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Fixes: d53f1faf ("dm writecache: do direct write if the cache is full") Cc: stable@vger.kernel.org # v5.7+ Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 15 Jun, 2021 1 commit
-
-
Mikulas Patocka authored
Make dm-writecache wait if the kcopyd workqueue is busy (as will happen if kcopyd is waiting on page allocation or blocked inside submit_bio). This change improves the performance of "mkfs.ext2" by approximately 20% on one testbed. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 14 Jun, 2021 3 commits
-
-
Baokun Li authored
Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Baokun Li <libaokun1@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
Some architectures have pages larger than 4k and committing a full page causes needless overhead. Fix this by writing a single block when committing the superblock. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 04 Jun, 2021 21 commits
-
-
Damien Le Moal authored
Zone append BIOs (REQ_OP_ZONE_APPEND) always specify the start sector of the zone to be written instead of the actual sector location to write; the write location is determined by the device and returned to the host upon completion of the operation. This interface, while simple and efficient for writing into sequential zones of a zoned block device, is incompatible with the use of sector values to calculate a cipher block IV: all data written in a zone ends up using the same IV values, corresponding to the first sectors of the zone, but read operations will specify any sector within the zone, resulting in an IV mismatch between encryption and decryption. To solve this problem, report to DM core that zone append operations are not supported. This results in zone append operations being emulated using regular write operations. Reported-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com> Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Damien Le Moal authored
For zoned targets that cannot support zone append operations, implement an emulation using regular write operations. If the original BIO submitted by the user is a zone append operation, change its clone into a regular write operation directed at the target zone's write pointer position.

To do so, an array of write pointer offsets (write pointer position relative to the start of a zone) is added to struct mapped_device. All operations that modify a sequential zone's write pointer (writes, zone reset, zone finish and zone append) are intercepted in __map_bio() and processed using the new function dm_zone_map_bio().

Detection of the target's ability to natively support zone append operations is done from dm_table_set_restrictions() by calling the function dm_set_zones_restrictions(). A target that does not support zone append operations, either because it explicitly declares so using the new struct dm_target field zone_append_not_supported or because the device table contains a non-zoned device, has its mapped device marked with the new flag DMF_ZONE_APPEND_EMULATED. The helper function dm_emulate_zone_append() is introduced to test a mapped device for this new flag.

Atomicity of the zone write pointer tracking and updates is achieved with a zone write locking mechanism based on a bitmap. This is similar to the block layer method, but based on BIOs rather than struct request. A zone write lock is taken in dm_zone_map_bio() for any clone BIO whose operation type changes the BIO's target zone write pointer position. The zone write lock is released if the clone BIO fails before submission, or in dm_zone_endio() when the clone BIO completes.

The zone write lock bitmap of the mapped device, together with a bitmap indicating zone types (conv_zones_bitmap) and the write pointer offset array (zwp_offset), are allocated and initialized with a full device zone report in dm_set_zones_restrictions() using the function dm_revalidate_zones(). For failed operations that may have modified a zone write pointer, the zone write pointer offset is marked as invalid in dm_zone_endio(). Zones with an invalid write pointer offset are checked, and the write pointer updated using an internal report zones operation, when the faulty zone is next accessed by the user.

All functions added for this emulation have minimal overhead for zoned targets that natively support zone append operations. Regular device targets are also not affected. The added code also does not impact builds with CONFIG_BLK_DEV_ZONED disabled, as all dm zone related functions are stubbed out. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
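A sketch of the emulation test described above, using the flag and helper names from this description (the in-tree names may differ slightly):

    /*
     * Zone append emulation is only relevant for zoned mapped devices;
     * regular targets always take the native path.
     */
    static inline bool dm_emulate_zone_append(struct mapped_device *md)
    {
            if (blk_queue_is_zoned(md->queue))
                    return test_bit(DMF_ZONE_APPEND_EMULATED, &md->flags);
            return false;
    }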
-
Damien Le Moal authored
Move the definitions of struct dm_target_io, struct dm_io and the bits of the flags field of struct mapped_device from dm.c to dm-core.h to make them usable from dm-zone.c. For the same reason, declare dec_pending() in dm-core.h after renaming it to dm_io_dec_pending(). And for symmetry of the function names, introduce the inline helper dm_io_inc_pending() instead of directly using atomic_inc() calls. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Damien Le Moal authored
Introduce the BIO flag BIO_ZONE_WRITE_LOCKED to indicate that a BIO owns the write lock of the zone it is targeting. This is the counterpart of the struct request flag RQF_ZONE_WRITE_LOCKED. This new BIO flag is reserved for now for zone write locking control for device mapper targets exposing a zoned block device. Since in this case, the lock flag must not be propagated to the struct request that will be used to process the BIO, a BIO private flag is used rather than changing the RQF_ZONE_WRITE_LOCKED request flag into a common REQ_XXX flag that could be used for both BIO and request. This avoids conflicts down the stack with the block IO scheduler zone write locking (in mq-deadline). Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Acked-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
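A usage sketch, assuming a bitmap-based per-zone lock as described above; the BIO flag records ownership so the endio path knows whether it must release the zone (the lock/unlock helpers are illustrative, the bit APIs are real):

    #include <linux/bio.h>
    #include <linux/sched.h>
    #include <linux/wait_bit.h>

    static void dm_zone_lock_sketch(unsigned long *zone_locks,
                                    unsigned int zno, struct bio *clone)
    {
            if (bio_flagged(clone, BIO_ZONE_WRITE_LOCKED))
                    return;         /* already holding this zone's lock */

            wait_on_bit_lock_io(zone_locks, zno, TASK_UNINTERRUPTIBLE);
            bio_set_flag(clone, BIO_ZONE_WRITE_LOCKED);
    }

    static void dm_zone_unlock_sketch(unsigned long *zone_locks,
                                      unsigned int zno, struct bio *clone)
    {
            if (!bio_flagged(clone, BIO_ZONE_WRITE_LOCKED))
                    return;

            bio_clear_flag(clone, BIO_ZONE_WRITE_LOCKED);
            clear_bit_unlock(zno, zone_locks);
            smp_mb__after_atomic();
            wake_up_bit(zone_locks, zno);
    }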
-
Damien Le Moal authored
Introduce the helper functions bio_zone_no() and bio_zone_is_seq(). Both are the BIO counterparts of the request helpers blk_rq_zone_no() and blk_rq_zone_is_seq(), respectively returning the number of the target zone of a bio and true if the BIO target zone is sequential. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Acked-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
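The likely shape of these helpers, mirroring their request counterparts (a sketch based on the description above):

    /* Zone number containing the BIO's start sector. */
    static inline unsigned int bio_zone_no(struct bio *bio)
    {
            return blk_queue_zone_no(bdev_get_queue(bio->bi_bdev),
                                     bio->bi_iter.bi_sector);
    }

    /* True if the BIO targets a sequential-write-required zone. */
    static inline bool bio_zone_is_seq(struct bio *bio)
    {
            return blk_queue_zone_is_seq(bdev_get_queue(bio->bi_bdev),
                                         bio->bi_iter.bi_sector);
    }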
-
Damien Le Moal authored
SCSI, ZNS and null_blk zoned devices support resetting all zones with a single command (REQ_OP_ZONE_RESET_ALL), as indicated by the device request queue flag QUEUE_FLAG_ZONE_RESETALL. This flag is not set for device mapper targets creating zoned devices. In this case, a user request to reset all zones of a device is processed in blkdev_zone_mgmt() by issuing a REQ_OP_ZONE_RESET operation for each zone of the device. This leads to different behaviors of the BLKRESETZONE ioctl() depending on the target device's support for the reset-all operation. E.g.

    blkzone reset /dev/sdX

will reset all zones of a SCSI device using a single command that ignores conventional, read-only and offline zones. But a dm-linear device including conventional, read-only or offline zones cannot be reset in the same manner, as some of the single-zone reset operations issued by blkdev_zone_mgmt() will fail. E.g.:

    blkzone reset /dev/dm-Y
    blkzone: /dev/dm-0: BLKRESETZONE ioctl failed: Remote I/O error

To simplify application and tool development, unify the behavior of the all-zone reset operation by modifying blkdev_zone_mgmt() to not issue a zone reset operation for conventional, read-only and offline zones, thus mimicking what an actual reset-all device command does on a device supporting REQ_OP_ZONE_RESET_ALL. This emulation is done using the new function blkdev_zone_reset_all_emulated(). The zones needing a reset are identified using a bitmap that is initialized using a zone report. Since empty zones do not need a reset, these zones are also skipped. The function blkdev_zone_reset_all() is introduced for block devices natively supporting reset-all operations. blkdev_zone_mgmt() is modified to call either function to execute an all-zone reset request. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> [hch: split into multiple functions] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Acked-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
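A sketch of the zone-report callback such an emulation can use to mark zones that actually need a reset (a simplification under the description above, not the in-tree code):

    #include <linux/blkdev.h>

    static int need_reset_cb(struct blk_zone *zone, unsigned int idx,
                             void *data)
    {
            unsigned long *need_reset = data;

            switch (zone->cond) {
            case BLK_ZONE_COND_EMPTY:       /* nothing to reset */
            case BLK_ZONE_COND_READONLY:    /* reset would fail: skip */
            case BLK_ZONE_COND_OFFLINE:     /* reset would fail: skip */
                    break;
            default:
                    if (zone->type != BLK_ZONE_TYPE_CONVENTIONAL)
                            set_bit(idx, need_reset);
                    break;
            }
            return 0;
    }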
-
Damien Le Moal authored
A target map method requesting the requeue of a BIO with DM_MAPIO_REQUEUE, or completing it with DM_ENDIO_REQUEUE, can cause unaligned write errors if the BIO is a write operation targeting a sequential zone. If a zoned target requests such a requeue, warn about it and kill the IO. The function dm_is_zone_write() is introduced to detect write operations to zoned targets. This change does not affect the target drivers supporting zoned devices and exposing a zoned device, namely dm-crypt, dm-linear and dm-flakey, as none of these targets ever requests a requeue. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Damien Le Moal authored
To simplify the implementation of the report_zones operation of a zoned target, introduce the function dm_report_zones() to set a target mapping start sector in struct dm_report_zones_args and call blkdev_report_zones(). This new function is exported, while the report zones callback function dm_report_zones_cb() no longer is. dm-linear, dm-flakey and dm-crypt are modified to use dm_report_zones(). Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
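Per the description, the exported helper amounts to recording the mapping start and delegating to blkdev_report_zones(); a sketch (bounds handling omitted):

    int dm_report_zones(struct block_device *bdev, sector_t start,
                        sector_t sector, struct dm_report_zones_args *args,
                        unsigned int nr_zones)
    {
            /*
             * Record where this target is mapped so the callback can
             * remap zone start/write-pointer values for the caller.
             */
            args->start = start;
            return blkdev_report_zones(bdev, sector, nr_zones,
                                       dm_report_zones_cb, args);
    }
    EXPORT_SYMBOL_GPL(dm_report_zones);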
-
Damien Le Moal authored
Move the core and table code used for zoned targets, conditionally defined with #ifdef CONFIG_BLK_DEV_ZONED, to the new file dm-zone.c. This file is conditionally compiled depending on CONFIG_BLK_DEV_ZONED. The small helper dm_set_zones_restrictions() is introduced to initialize a mapped device's request queue zone attributes in dm_table_set_restrictions(). Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Damien Le Moal authored
In device_area_is_invalid(), use bdev_is_zoned() instead of open coding the test on the zoned model returned by bdev_zoned_model(). Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Damien Le Moal authored
Fix dm_accept_partial_bio() to actually check that zone management commands are not passed, as explained in the function's documentation comment. Also, since a zone append operation cannot be split, add REQ_OP_ZONE_APPEND as a forbidden command. Blank lines are added around the group of BUG_ON() calls to make the code more legible. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Damien Le Moal authored
The dm-zoned target cannot support zoned block devices with zones that have a capacity smaller than the zone size (e.g. NVMe zoned namespaces), because the current chunk-to-zone mapping implementation assumes that zones and chunks have the same size, with all blocks usable. If a zoned drive is found to have zones with a capacity different from the zone size, fail the target initialization. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Cc: stable@vger.kernel.org # v5.9+ Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Rikard Falkeborn authored
The only use of dm_ksm_ll_ops is to copy it into the ksm_ll_ops field of the blk_keyslot_manager struct. Make it const to allow the compiler to put it in read-only memory. Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
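The pattern, sketched (assuming the evict callback is the only dm-provided op; the member set may differ):

    /* Only ever copied, never modified: const lets it live in .rodata. */
    static const struct blk_ksm_ll_ops dm_ksm_ll_ops = {
            .keyslot_evict = dm_keyslot_evict,
    };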
-
Mikulas Patocka authored
If the DM device is suspended, interrupt the writeback sequence so that there is no excessive suspend delay. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
If dm-writecache overwrites existing cached data, it splits the incoming bio into many block-sized bios. The I/O scheduler does merge these bios into one large request but this needless splitting and merging causes performance degradation. Fix this by avoiding bio splitting if the cache target area that is being overwritten is contiguous. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
The functions "pop", "push_head", "do_work" can only be called from process context. Therefore, replace spin_lock_irq{save,restore} with spin_{lock,unlock}_irq. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Mikulas Patocka authored
The functions set_bit and clear_bit are atomic, but we don't need atomicity when building the flags for dm-kcopyd jobs. So, change the code to plain, non-atomic manipulation of the flags. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
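A before/after sketch, assuming the job is not yet visible to other contexts when its flags are built (helper name hypothetical):

    static void job_set_write_seq(struct kcopyd_job *job)
    {
            /* before: set_bit(DM_KCOPYD_WRITE_SEQ, &job->flags); -- atomic RMW */
            /* after: plain store, the job is not yet queued or shared */
            job->flags |= BIT(DM_KCOPYD_WRITE_SEQ);
    }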
-
Joe Thornber authored
The disk space map stores its index entries in a btree; these are accessed very frequently, so having a few cached makes a big difference to performance. With this change, provisioning a new block takes roughly 20% less CPU. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Joe Thornber authored
When we break sharing on btree nodes we typically need to increment the reference counts of every value held in the node, which can cause a lot of repeated calls into the space maps. Fix this by changing the interface of the space map inc/dec methods to take ranges of adjacent blocks to be operated on. For installations that use a lot of snapshots, this reduces the CPU overhead of fundamental operations, such as provisioning a new block or deleting a snapshot, by as much as 10 times. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
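The shape of the interface change, paraphrased from the description (exact in-tree signatures may differ):

    struct dm_space_map {
            /* ... */

            /* before: int (*inc_block)(struct dm_space_map *sm, dm_block_t b); */
            int (*inc_blocks)(struct dm_space_map *sm, dm_block_t b, dm_block_t e);
            int (*dec_blocks)(struct dm_space_map *sm, dm_block_t b, dm_block_t e);
    };

    /* A single block is just the degenerate one-element range [b, b + 1). */
    static inline int dm_sm_inc_block(struct dm_space_map *sm, dm_block_t b)
    {
            return sm->inc_blocks(sm, b, b + 1);
    }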
-
Joe Thornber authored
Current commit code resets the place where the search for free blocks begins back to the start of the metadata device. There are a couple of repercussions to this:

- The first allocation after the commit is likely to take longer than normal, as it searches for a free block in an area that is likely to have very few free blocks (if any).

- Any free blocks it finds will have been recently freed. Reusing them means we have fewer old copies of the metadata to aid recovery from hardware error.

Fix these issues by leaving the cursor alone, only resetting it when the search hits the end of the metadata device. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
Joe Thornber authored
This commit improves the residency of btrees built in the metadata for dm-thin and dm-cache. When inserting a new entry into a full btree node, the current code splits the node into two. This can result in very many half-full nodes, particularly if the insertions occur in ascending order (as happens in dm-thin with large writes). With this commit, when we insert into a full node we first try to move some entries to a neighbouring node that has space; failing that, we split two neighbouring nodes into three. Results are given below. 'Residency' is how full nodes are on average, as a percentage. Average instruction counts for the operations are given to show that the extra processing has little overhead.

                         +--------------------------+--------------------------+
                         |          Before          |          After           |
+------------+-----------+-----------+--------------+-----------+--------------+
| Test       | Phase     | Residency | Instructions | Residency | Instructions |
+------------+-----------+-----------+--------------+-----------+--------------+
| Ascending  | insert    |    50     |     1876     |    96     |     1930     |
|            | overwrite |    50     |     1789     |    96     |     1746     |
|            | lookup    |    50     |      778     |    96     |      778     |
| Descending | insert    |    50     |     3024     |    96     |     3181     |
|            | overwrite |    50     |     1789     |    96     |     1746     |
|            | lookup    |    50     |      778     |    96     |      778     |
| Random     | insert    |    68     |     3800     |    84     |     3736     |
|            | overwrite |    68     |     4254     |    84     |     3911     |
|            | lookup    |    68     |      779     |    84     |      779     |
| Runs       | insert    |    63     |     2546     |    82     |     2815     |
|            | overwrite |    63     |     2013     |    82     |     1986     |
|            | lookup    |    63     |      778     |    82     |      779     |
+------------+-----------+-----------+--------------+-----------+--------------+

Ascending  - keys are inserted in ascending order.
Descending - keys are inserted in descending order.
Random     - keys are inserted in random order.
Runs       - keys are split into ascending runs of ~20 length, then the runs are shuffled.

Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Colin Ian King <colin.king@canonical.com> # contains_key() fix Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
- 25 May, 2021 3 commits
-
-
Mikulas Patocka authored
If an origin target has no snapshots, o->split_boundary is set to 0, which triggers BUG_ON(sectors <= 0) in block/bio.c:bio_split(). Fix this by initializing chunk_size, and in turn split_boundary, to rounddown_pow_of_two(UINT_MAX) -- the largest power of two that fits into an "unsigned". Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Mike Snitzer <snitzer@redhat.com>
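A sketch of the initialization under simplified surroundings (the helper and its parameters are hypothetical; rounddown_pow_of_two() is the real API):

    #include <linux/log2.h>

    static unsigned int origin_chunk_size(bool has_snapshots,
                                          unsigned int snap_chunk_size)
    {
            if (has_snapshots)
                    return snap_chunk_size;
            /*
             * No snapshots: use the largest power of two in "unsigned"
             * (1U << 31), so bio_split() always sees a positive count.
             */
            return rounddown_pow_of_two(UINT_MAX);
    }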
-
Mikulas Patocka authored
Commit 7ee06ddc ("dm snapshot: fix a crash when an origin has no snapshots") introduced a regression in snapshot merging, causing the lvm2 test lvcreate-cache-snapshot.sh to get stuck in an infinite loop. Even though commit 7ee06ddc was marked for stable@, the stable team was notified _not_ to backport it. Fixes: 7ee06ddc ("dm snapshot: fix a crash when an origin has no snapshots") Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
-
John Keeping authored
The third parameter of module_param() is the permissions for the sysfs node, but it looks like it was being used as the initial value of the parameter here. In fact, false here equates to omitting the file from sysfs entirely and does not affect the value of require_signatures. Making the parameter writable is not simple: going from false->true is fine, but it should not be possible to remove the requirement to verify a signature. However, it can be useful to inspect the value of this parameter from userspace, so change the permissions to make a read-only file in sysfs. Signed-off-by: John Keeping <john@metanate.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
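The resulting declaration, per the description (0444 = world-readable, never writable); the description string is a plausible stand-in:

    #include <linux/module.h>

    static bool require_signatures;
    module_param(require_signatures, bool, 0444);   /* was: false (file hidden) */
    MODULE_PARM_DESC(require_signatures,
                     "Verify the roothash of dm-verity hash tree");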
-
- 23 May, 2021 3 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
Pull perf fixes from Thomas Gleixner:
 "Two perf fixes:

  - Do not check the LBR_TOS MSR when setting up unrelated LBR MSRs as this can cause malfunction when TOS is not supported

  - Allocate the LBR XSAVE buffers along with the DS buffers upfront because allocating them when adding an event can deadlock"

* tag 'perf-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/lbr: Remove cpuc->lbr_xsave allocation from atomic context
  perf/x86: Avoid touching LBR_TOS MSR for Arch LBR
-
Linus Torvalds authored
Pull locking fixes from Thomas Gleixner:
 "Two locking fixes:

  - Invoke the lockdep tracepoints in the correct place so the ordering is correct again

  - Don't leave the mutex WAITER bit stale when the last waiter is dropping out early due to a signal, as that forces all subsequent lock operations needlessly into the slowpath until it's cleaned up again"

* tag 'locking-urgent-2021-05-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  locking/mutex: clear MUTEX_FLAGS if wait_list is empty due to signal
  locking/lockdep: Correct calling tracepoints
-