- 25 May, 2020 29 commits
-
-
Qu Wenruo authored
This function mostly serves a single purpose for the relocation backref cache, but since we're moving the main part of the backref cache to backref.c, we need to export it. To avoid confusion, rename the function to btrfs_should_ignore_reloc_root() to make the name a little clearer. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Also change the parameter: since all callers can easily grab an fs_info, there is no need for all the pointer chasing. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Since we're releasing all existing nodes/edges rather than cleaning up the mess after an error, "release" is a more appropriate name here. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Also add a comment explaining the cleanup process, to distinguish it from btrfs_backref_drop_node(). Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Add an extra comment for drop_backref_node(): since it has some similarity with remove_backref_node(), the difference between the two needs explaining. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Structure tree_entry provides a very simple rb_tree which only uses bytenr as the search index. That tree_entry is used in 3 structures: backref_node, mapping_node and tree_block. Since we're going to make backref_node independent of relocation, it's a good time to extract the tree_entry into rb_simple_node and export it into misc.h. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
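As a rough illustration of the kind of helper this extraction produces (a minimal sketch only; the exact misc.h definition in the merged code may differ), a bytenr-keyed rb-tree node plus an insert helper could look like this:

    #include <linux/types.h>
    #include <linux/rbtree.h>

    /* Simple rb-tree node keyed only by bytenr, as described above */
    struct rb_simple_node {
            struct rb_node rb_node;
            u64 bytenr;
    };

    /*
     * Insert @node with key @bytenr into @root.
     * Returns the existing node if @bytenr is already present, NULL otherwise.
     */
    static inline struct rb_node *rb_simple_insert(struct rb_root *root, u64 bytenr,
                                                   struct rb_node *node)
    {
            struct rb_node **p = &root->rb_node;
            struct rb_node *parent = NULL;
            struct rb_simple_node *entry;

            while (*p) {
                    parent = *p;
                    entry = rb_entry(parent, struct rb_simple_node, rb_node);

                    if (bytenr < entry->bytenr)
                            p = &(*p)->rb_left;
                    else if (bytenr > entry->bytenr)
                            p = &(*p)->rb_right;
                    else
                            return parent;  /* key already present */
            }

            rb_link_node(node, parent, p);
            rb_insert_color(node, root);
            return NULL;
    }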
-
Qu Wenruo authored
These 3 structures are the main part of btrfs backref cache, move them to backref.h to build the basis for later reuse. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Those three structures are the main elements of backref cache. Add the "btrfs_" prefix for later export. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
This patch also adds some comments for the cleanup. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
After handle_one_tree_backref(), all newly added (not cached) edges and nodes have the following features:

- Only backref_edge::list[LOWER] is linked. This means we can only iterate from bottom to top, not the other direction.
- Newly added nodes are not added to the cache rb_tree yet.

So to finish the backref cache, we still need to finish the links and add all nodes into the backref cache rb_tree. This patch refactors the existing code into finish_upper_links(), adding more comments for each branch and why we need to do all the work. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
build_backref_tree() uses "goto again;" to implement a breadth-first search to build the backref cache. This patch extracts most of its work into a wrapper, handle_one_tree_block(), and uses a do {} while() loop to implement the same thing. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Bytenr and level are essential parameters for backref_node, thus it makes sense to initialize them at allocation time. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Since backref_edge is used to connect upper and lower backref nodes, and needs to access both nodes, some code can look pretty nasty:

    list_add_tail(&edge->list[LOWER], &cur->upper);

The above code links @cur to the LOWER side of the edge, while both the word "LOWER" and the word "upper" show up, which can be very confusing for the reader to grasp. This patch introduces a new wrapper, link_backref_edge(), to handle the linking behavior. It also has an extra ASSERT() to ensure the caller won't pass wrong nodes. Also, this updates the comments on the related lists of backref_node and backref_edge, to make it clearer what each list points to. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
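A minimal sketch of such a linking wrapper, assuming simplified versions of the backref structures (the LINK_LOWER/LINK_UPPER flags, the ASSERT stand-in and the field layout here are illustrative, not the exact in-tree definitions):

    #include <linux/types.h>
    #include <linux/list.h>

    #define ASSERT(x) WARN_ON(!(x))   /* stand-in for the btrfs ASSERT() helper */

    enum { LOWER = 0, UPPER = 1 };

    struct backref_node {
            struct list_head upper;   /* edges connecting this node to parent nodes */
            struct list_head lower;   /* edges connecting this node to child nodes */
            u64 bytenr;
            int level;
    };

    struct backref_edge {
            struct list_head list[2]; /* list[LOWER] / list[UPPER] link points */
            struct backref_node *node[2];
    };

    /* Hypothetical flags selecting which side(s) of the edge to link */
    #define LINK_LOWER  (1U << 0)
    #define LINK_UPPER  (1U << 1)

    static void link_backref_edge(struct backref_edge *edge,
                                  struct backref_node *lower,
                                  struct backref_node *upper,
                                  unsigned int link_which)
    {
            ASSERT(upper && lower && upper->level == lower->level + 1);
            edge->node[LOWER] = lower;
            edge->node[UPPER] = upper;
            if (link_which & LINK_LOWER)
                    list_add_tail(&edge->list[LOWER], &lower->upper);
            if (link_which & LINK_UPPER)
                    list_add_tail(&edge->list[UPPER], &upper->lower);
    }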
-
Qu Wenruo authored
The processing of indirect tree backref (TREE_BLOCK_REF) is the most complex work. We need to grab the fs root, do a tree search to locate all its parent nodes, link all needed edges, and put all uncached edges to pending edge list. This is definitely worth a helper function. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
For BTRFS_SHARED_BLOCK_REF_KEY, its processing is straightforward, as we know the parent node bytenr directly. If the parent is already cached, or is a root, call it a day. If the parent is not cached, add it to the pending list. This patch just refactors this part into its own function, handle_direct_tree_backref(), and adds some comment explaining the @ref_key parameter. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
find_reloc_root() searches reloc_control::reloc_root_tree to find the reloc root. This behavior is only useful for the relocation backref cache. For the incoming more generic backref cache, we don't care about who owns the reloc root, only whether it's a reloc root. So this patch makes the following modifications to make the reloc root search more specific to relocation backrefs:

- Add backref_node::is_reloc_root. This is an extra indicator for the generic-purpose backref cache; users don't need to read the root key from backref_node::root to determine whether it's a reloc root. Also, for a reloc tree root, it's useless and will be queued to the useless list.
- Add backref_cache::is_reloc. This allows the backref cache code to behave differently for the generic-purpose and the relocation backref cache.
- Pass fs_info to find_reloc_root().
- Export find_reloc_root() so backref.c can utilize this function.

Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
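As shown in the list above, the change boils down to two new indicators. A rough sketch of what they might look like (member placement and surrounding fields are elided; only the names follow the commit description):

    #include <linux/types.h>

    struct backref_node {
            /* ... other members elided ... */
            unsigned int is_reloc_root:1;   /* node represents a reloc tree root */
    };

    struct backref_cache {
            /* ... other members elided ... */
            bool is_reloc;                  /* cache is used by relocation, not generic backref walking */
    };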
-
Qu Wenruo authored
Add this member so that we can grab fs_info without help from reloc_control. Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
These two new members will act the same as the existing local lists, @useless and @list in build_backref_tree(). Currently build_backref_tree() is only executed serially, thus moving such local lists into backref_cache is still safe. Also, since we're here, use list_first_entry() to replace a lot of list_entry() calls after !list_empty(). Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
These two functions are weirdly named: mark_block_processed() in fact just marks a range dirty unconditionally, while __mark_block_processed() does an extra check before doing the marking. This patch open codes the old mark_block_processed() and renames __mark_block_processed() to remove the "__" prefix. Since we're here, also kill the forward declaration, which also allows replacing in_block_group() with the in_range() macro. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
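For illustration, a hedged sketch of what the open-coded helper might look like after the change. Field names such as rc->block_group and rc->processed_blocks are assumptions based on the description above, not verified against the merged tree:

    /* Sketch only: mark the block dirty if it is a leaf or lives inside the
     * relocated block group, then flag the node as processed. */
    static void mark_block_processed(struct reloc_control *rc,
                                     struct btrfs_backref_node *node)
    {
            u32 blocksize;

            if (node->level == 0 ||
                in_range(node->bytenr, rc->block_group->start,
                         rc->block_group->length)) {
                    blocksize = rc->extent_root->fs_info->nodesize;
                    set_extent_bits(&rc->processed_blocks, node->bytenr,
                                    node->bytenr + blocksize - 1, EXTENT_DIRTY);
            }
            node->processed = 1;
    }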
-
Qu Wenruo authored
In the core function of relocation, build_backref_tree(), we need to iterate all backref items of one tree block. Use the btrfs_backref_iter infrastructure to do the loop and make the code more readable. The backref item loop becomes much easier to read:

    ret = btrfs_backref_iter_start(iter, cur->bytenr);
    for (; ret == 0; ret = btrfs_backref_iter_next(iter)) {
            /* The really important work */
    }

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
This function will go to the next inline/keyed backref for btrfs_backref_iter infrastructure. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Qu Wenruo authored
Due to the complex nature of the btrfs extent tree, iterating all backrefs of one extent involves quite a lot of work: searching the EXTENT_ITEM/METADATA_ITEM and iterating through inline and keyed backrefs. Normally this results in complex code, something like:

    btrfs_search_slot()
    /* Ensure we are at EXTENT_ITEM/METADATA_ITEM */
    while (1) {             /* Loop for extent tree items */
            while (ptr < end) {     /* Loop for inlined items */
                    /* Real work here */
            }
    next:
            ret = btrfs_next_item()
            /* Ensure we're still at keyed item for specified bytenr */
    }

The idea of btrfs_backref_iter is to avoid such a complex and hard to read code structure, and use something like the following instead:

    iter = btrfs_backref_iter_alloc();
    ret = btrfs_backref_iter_start(iter, bytenr);
    if (ret < 0)
            goto out;
    for (; ; ret = btrfs_backref_iter_next(iter)) {
            /* Real work here */
    }
    out:
    btrfs_backref_iter_free(iter);

This patch is just the skeleton + btrfs_backref_iter_start() code. Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
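To make the skeleton slightly more concrete, here is a hedged sketch of roughly what such an iterator structure might track; the exact member list in the merged code may differ:

    /* Sketch of an iterator over all backrefs of one extent (illustrative) */
    struct btrfs_backref_iter {
            u64 bytenr;                     /* bytenr of the extent being iterated */
            struct btrfs_path *path;        /* path holding the extent tree leaf */
            struct btrfs_fs_info *fs_info;
            struct btrfs_key cur_key;       /* key of the current item */
            u32 item_ptr;                   /* offset of the current item in the leaf */
            u32 cur_ptr;                    /* offset of the current (inline) backref */
            u32 end_ptr;                    /* end offset of the current item */
    };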
-
Jules Irenge authored
Sparse reports a warning at btrfs_tree_lock():

    warning: context imbalance in btrfs_tree_lock() - wrong count at exit

The root cause is the missing annotation at btrfs_tree_lock(). Add the missing __acquires(&eb->lock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
-
Jules Irenge authored
Sparse reports a warning at btrfs_lock_cluster():

    warning: context imbalance in btrfs_lock_cluster() - wrong count

The root cause is the missing annotation at btrfs_lock_cluster(). Add the missing __acquires(&cluster->refill_lock) annotation. Signed-off-by: Jules Irenge <jbi.octave@gmail.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
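For readers unfamiliar with sparse context annotations, a generic (non-btrfs) sketch of the pattern these two patches apply:

    #include <linux/spinlock.h>

    struct my_thing {
            spinlock_t lock;
    };

    /* Tell sparse this function exits with t->lock held */
    static void my_thing_lock(struct my_thing *t)
            __acquires(&t->lock)
    {
            spin_lock(&t->lock);
    }

    /* Tell sparse this function releases t->lock */
    static void my_thing_unlock(struct my_thing *t)
            __releases(&t->lock)
    {
            spin_unlock(&t->lock);
    }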
-
- 24 May, 2020 5 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull EFI fixes from Thomas Gleixner:
 "A set of EFI fixes:

  - Don't return a garbage screen info when EFI framebuffer is not available

  - Make the early EFI console work properly with wider fonts instead of drawing garbage

  - Prevent a memory buffer leak in allocate_e820()

  - Print the firmware error record properly so it can be decoded by users

  - Fix a symbol clash in the host tool build which only happens with newer compilers.

  - Add a missing check for the event log version of TPM which caused boot failures on several Dell systems due to an attempt to decode SHA-1 format with the crypto agile algorithm"

* tag 'efi-urgent-2020-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  tpm: check event log version before reading final events
  efi: Pull up arch-specific prototype efi_systab_show_arch()
  x86/boot: Mark global variables as static
  efi: cper: Add support for printing Firmware Error Record Reference
  efi/libstub/x86: Avoid EFI map buffer alloc in allocate_e820()
  efi/earlycon: Fix early printk for wider fonts
  efi/libstub: Avoid returning uninitialized data from setup_graphics()
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull x86 fixes from Thomas Gleixner:
 "Two fixes for x86:

  - Unbreak stack dumps for inactive tasks by interpreting the special first frame left by __switch_to_asm() correctly. The recent change not to skip the first frame so ORC and frame unwinder behave in the same way caused all entries to be unreliable, i.e. prepended with '?'.

  - Use cpumask_available() instead of an implicit NULL check of a cpumask_var_t in mmio trace to prevent a Clang build warning"

* tag 'x86-urgent-2020-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/unwind/orc: Fix unwind_get_return_address_ptr() for inactive tasks
  x86/mmiotrace: Use cpumask_available() for cpumask_var_t variables
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip - Linus Torvalds authored
Pull scheduler fixes from Thomas Gleixner:
 "A set of fixes for the scheduler:

  - Fix handling of throttled parents in enqueue_task_fair() completely. The recent fix overlooked a corner case where the first iteration terminates due to an entity already being on the runqueue, which makes the list management incomplete and later triggers the assertion which checks for completeness.

  - Fix a similar problem in unthrottle_cfs_rq().

  - Show the correct uclamp values in procfs which prints the effective value twice instead of requested and effective"

* tag 'sched-urgent-2020-05-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/fair: Fix unthrottle_cfs_rq() for leaf_cfs_rq list
  sched/debug: Fix requested task uclamp values shown in procfs
  sched/fair: Fix enqueue_task_fair() warning some more
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net - Linus Torvalds authored
Pull networking fixes from David Miller:

 1) Fix RCU warnings in ipv6 multicast router code, from Madhuparna Bhowmik.
 2) Nexthop attributes aren't being checked properly because of mis-initialized iterator, from David Ahern.
 3) Revert iop_idents_reserve() change as it caused performance regressions and was just working around what is really a UBSAN bug in the compiler. From Yuqi Jin.
 4) Read MAC address properly from ROM in bmac driver (double iteration proceeds past end of address array), from Jeremy Kerr.
 5) Add Microsoft Surface device IDs to r8152, from Marc Payne.
 6) Prevent reference to freed SKB in __netif_receive_skb_core(), from Boris Sukholitko.
 7) Fix ACK discard behavior in rxrpc, from David Howells.
 8) Preserve flow hash across packet scrubbing in wireguard, from Jason A. Donenfeld.
 9) Cap option length properly for SO_BINDTODEVICE in AX25, from Eric Dumazet.
10) Fix encryption error checking in kTLS code, from Vadim Fedorenko.
11) Missing BPF prog ref release in flow dissector, from Jakub Sitnicki.
12) dst_cache must be used with BH disabled in tipc, from Eric Dumazet.
13) Fix use after free in mlxsw driver, from Jiri Pirko.
14) Order kTLS key destruction properly in mlx5 driver, from Tariq Toukan.
15) Check devm_platform_ioremap_resource() return value properly in several drivers, from Tiezhu Yang.

* git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (71 commits)
  net: smsc911x: Fix runtime PM imbalance on error
  net/mlx4_core: fix a memory leak bug.
  net: ethernet: ti: cpsw: fix ASSERT_RTNL() warning during suspend
  net: phy: mscc: fix initialization of the MACsec protocol mode
  net: stmmac: don't attach interface until resume finishes
  net: Fix return value about devm_platform_ioremap_resource()
  net/mlx5: Fix error flow in case of function_setup failure
  net/mlx5e: CT: Correctly get flow rule
  net/mlx5e: Update netdev txq on completions during closure
  net/mlx5: Annotate mutex destroy for root ns
  net/mlx5: Don't maintain a case of del_sw_func being null
  net/mlx5: Fix cleaning unmanaged flow tables
  net/mlx5: Fix memory leak in mlx5_events_init
  net/mlx5e: Fix inner tirs handling
  net/mlx5e: kTLS, Destroy key object after destroying the TIS
  net/mlx5e: Fix allowed tc redirect merged eswitch offload cases
  net/mlx5: Avoid processing commands before cmdif is ready
  net/mlx5: Fix a race when moving command interface to events mode
  net/mlx5: Add command entry handling completion
  rxrpc: Fix a memory leak in rxkad_verify_response()
  ...
-
- 23 May, 2020 6 commits
-
-
Dinghao Liu authored
Remove the runtime PM usage counter decrement when the increment function has not been called, to keep the counter balanced. Signed-off-by: Dinghao Liu <dinghao.liu@zju.edu.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux - David S. Miller authored
Saeed Mahameed says:

====================
mlx5 fixes 2020-05-22

This series introduces some fixes to mlx5 driver. Please pull and let me know if there is any problem.

For -stable v4.13
 ('net/mlx5: Add command entry handling completion')
For -stable v5.2
 ('net/mlx5: Fix error flow in case of function_setup failure')
 ('net/mlx5: Fix memory leak in mlx5_events_init')
For -stable v5.3
 ('net/mlx5e: Update netdev txq on completions during closure')
 ('net/mlx5e: kTLS, Destroy key object after destroying the TIS')
 ('net/mlx5e: Fix inner tirs handling')
For -stable v5.6
 ('net/mlx5: Fix cleaning unmanaged flow tables')
 ('net/mlx5: Fix a race when moving command interface to events mode')
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Qiushi Wu authored
In function mlx4_opreq_action(), the pointer "mailbox" is not released when mlx4_cmd_box() returns an error, causing a memory leak. Fix this issue by going to the "out" label, where mlx4_free_cmd_mailbox() can free this pointer. Fixes: fe6f700d ("net/mlx4_core: Respond to operation request by firmware") Signed-off-by: Qiushi Wu <wu000273@umn.edu> Signed-off-by: David S. Miller <davem@davemloft.net>
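A simplified sketch of the cleanup pattern the fix applies (not the actual mlx4_opreq_action() body; the argument values passed to mlx4_cmd_box() here are placeholders):

    static void opreq_action_sketch(struct mlx4_dev *dev)
    {
            struct mlx4_cmd_mailbox *mailbox;
            int err;

            mailbox = mlx4_alloc_cmd_mailbox(dev);
            if (IS_ERR(mailbox))
                    return;

            err = mlx4_cmd_box(dev, 0, mailbox->dma, 0, 0, MLX4_CMD_GET_OP_REQ,
                               MLX4_CMD_TIME_CLASS_A, MLX4_CMD_NATIVE);
            if (err)
                    goto out;       /* previously returned here, leaking the mailbox */

            /* ... handle the operation request ... */
    out:
            mlx4_free_cmd_mailbox(dev, mailbox);
    }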
-
Grygorii Strashko authored
vlan_for_each() is required to be called with rtnl_lock taken, otherwise an ASSERT_RTNL() warning is triggered - which happens now during system resume from suspend:

  cpsw_suspend()
  |- cpsw_ndo_stop()
     |- __hw_addr_ref_unsync_dev()
        |- cpsw_purge_all_mc()
           |- vlan_for_each()
              |- ASSERT_RTNL();

Hence, fix it by surrounding cpsw_ndo_stop() with rtnl_lock/unlock() calls. Fixes: 15180eca ("net: ethernet: ti: cpsw: fix vlan mcast") Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
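A hedged sketch of the shape of the fix; the real cpsw_suspend() iterates over slave ports, so this is simplified, and the drvdata handling is an assumption:

    static int cpsw_suspend_sketch(struct device *dev)
    {
            struct net_device *ndev = dev_get_drvdata(dev);

            /* Take RTNL so the vlan_for_each() call deep inside ndo_stop is legal */
            rtnl_lock();
            if (netif_running(ndev))
                    cpsw_ndo_stop(ndev);
            rtnl_unlock();

            return 0;
    }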
-
Antoine Tenart authored
At the very end of the MACsec block initialization in the MSCC PHY driver, the MACsec "protocol mode" is set. This setting should be set based on the PHY id within the package, as the bank used to access the register depends on it. This was not done correctly, and only the first bank was used, leading to the two upper PHYs being unstable when using the VSC8584. This patch fixes it. Fixes: 1bbe0ecc ("net: phy: mscc: macsec initialization") Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Leon Yu authored
Commit 14b41a29 ("net: stmmac: Delete txtimer in suspend") was the first attempt to fix a race between mod_timer() and setup_timer() during stmmac_resume(). However the issue still exists, as the commit only addressed half of the problem. The same race can still happen because stmmac_resume() re-attaches the interface way too early - even before the hardware is fully initialized. Worse, doing so allows network traffic to restart and stmmac_tx_timer_arm() to be called in the middle of stmmac_resume(), which re-initializes the tx timers in stmmac_init_coalesce(). The timer_list will be corrupted and the system crashes as a result of the race between mod_timer() and setup_timer():

  systemd--1995    2.... 552950018us : stmmac_suspend: 4994
  ksoftirq-9       0..s2 553123133us : stmmac_tx_timer_arm: 2276
  systemd--1995    0.... 553127896us : stmmac_resume: 5101
  systemd--320     7...2 553132752us : stmmac_tx_timer_arm: 2276
  (sd-exec-1999    5...2 553135204us : stmmac_tx_timer_arm: 2276
  ---------------------------------
  pc : run_timer_softirq+0x468/0x5e0
  lr : run_timer_softirq+0x570/0x5e0
  Call trace:
   run_timer_softirq+0x468/0x5e0
   __do_softirq+0x124/0x398
   irq_exit+0xd8/0xe0
   __handle_domain_irq+0x6c/0xc0
   gic_handle_irq+0x60/0xb0
   el1_irq+0xb8/0x180
   arch_cpu_idle+0x38/0x230
   default_idle_call+0x24/0x3c
   do_idle+0x1e0/0x2b8
   cpu_startup_entry+0x28/0x48
   secondary_start_kernel+0x1b4/0x208

Fix this by deferring netif_device_attach() to the end of stmmac_resume(). Signed-off-by: Leon Yu <leoyu@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
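A rough sketch of the intended ordering; the actual stmmac_resume() does considerably more work, and the exact call sequence shown here is an assumption based on the description above:

    static int stmmac_resume_sketch(struct device *dev)
    {
            struct net_device *ndev = dev_get_drvdata(dev);
            struct stmmac_priv *priv = netdev_priv(ndev);

            /* Bring the hardware and coalescing timers fully back up first */
            stmmac_hw_setup(ndev, false);
            stmmac_init_coalesce(priv);
            stmmac_set_rx_mode(ndev);
            stmmac_enable_all_queues(priv);

            /* Only now allow traffic again: attach at the very end */
            netif_device_attach(ndev);

            return 0;
    }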
-