- 25 Feb, 2019 40 commits
-
wenxu authored
A non-tunnel-dst ip tunnel device, e.g. one created with "ip l add dev tun type gretap key 1000", can send packets through lwtunnel. This patch provides tun_info dst_cache support for this mode. Signed-off-by: wenxu <wenxu@ucloud.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Andrew Lunn says: ==================== mv88e6xxx: Avoid false positive Lockdep splats When acquiring the GPIO interrupt line for the switch, it is possible to trigger lockdep splats. These are false positives; the mutex involved is in a different IRQ descriptor. But fix it anyway, since it could mask real locking issues. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
There is no need to hold the register lock while requesting the GPIO interrupt. By not holding it we can also avoid a false positive lockdep splat. Reported-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Lunn authored
The following false positive lockdep splat has been observed.

======================================================
WARNING: possible circular locking dependency detected
4.20.0+ #302 Not tainted
------------------------------------------------------
systemd-udevd/160 is trying to acquire lock:
edea6080 (&chip->reg_lock){+.+.}, at: __setup_irq+0x640/0x704

but task is already holding lock:
edff0340 (&desc->request_mutex){+.+.}, at: __setup_irq+0xa0/0x704

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&desc->request_mutex){+.+.}:
       mutex_lock_nested+0x1c/0x24
       __setup_irq+0xa0/0x704
       request_threaded_irq+0xd0/0x150
       mv88e6xxx_probe+0x41c/0x694 [mv88e6xxx]
       mdio_probe+0x2c/0x54
       really_probe+0x200/0x2c4
       driver_probe_device+0x5c/0x174
       __driver_attach+0xd8/0xdc
       bus_for_each_dev+0x58/0x7c
       bus_add_driver+0xe4/0x1f0
       driver_register+0x7c/0x110
       mdio_driver_register+0x24/0x58
       do_one_initcall+0x74/0x2e8
       do_init_module+0x60/0x1d0
       load_module+0x1968/0x1ff4
       sys_finit_module+0x8c/0x98
       ret_fast_syscall+0x0/0x28
       0xbedf2ae8

-> #0 (&chip->reg_lock){+.+.}:
       __mutex_lock+0x50/0x8b8
       mutex_lock_nested+0x1c/0x24
       __setup_irq+0x640/0x704
       request_threaded_irq+0xd0/0x150
       mv88e6xxx_g2_irq_setup+0xcc/0x1b4 [mv88e6xxx]
       mv88e6xxx_probe+0x44c/0x694 [mv88e6xxx]
       mdio_probe+0x2c/0x54
       really_probe+0x200/0x2c4
       driver_probe_device+0x5c/0x174
       __driver_attach+0xd8/0xdc
       bus_for_each_dev+0x58/0x7c
       bus_add_driver+0xe4/0x1f0
       driver_register+0x7c/0x110
       mdio_driver_register+0x24/0x58
       do_one_initcall+0x74/0x2e8
       do_init_module+0x60/0x1d0
       load_module+0x1968/0x1ff4
       sys_finit_module+0x8c/0x98
       ret_fast_syscall+0x0/0x28
       0xbedf2ae8

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&desc->request_mutex);
                               lock(&chip->reg_lock);
                               lock(&desc->request_mutex);
  lock(&chip->reg_lock);

The two &desc->request_mutex entries refer to two different mutexes. #1 is the GPIO providing the chip interrupt. #2 is the chained interrupt between global 1 and global 2. Add lockdep classes to the GPIO interrupt to avoid this. Reported-by: Russell King <rmk+kernel@armlinux.org.uk> Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
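As a rough illustration of the fix described above (not the driver's actual code: the handler, key and device names below are made up, and it assumes the variant of irq_set_lockdep_class() that also takes a request-mutex key):

    #include <linux/interrupt.h>
    #include <linux/irqdesc.h>

    /* Hypothetical threaded handler, standing in for the real chip handler. */
    static irqreturn_t my_chip_irq_thread(int irq, void *dev_id)
    {
            return IRQ_HANDLED;
    }

    /* Dedicated lockdep classes for the GPIO-backed chip interrupt, so it is
     * no longer lumped together with the chained G1/G2 interrupt descriptors. */
    static struct lock_class_key my_chip_irq_lock_key;
    static struct lock_class_key my_chip_irq_request_key;

    static int my_chip_irq_setup(int irq, void *dev_id)
    {
            irq_set_lockdep_class(irq, &my_chip_irq_lock_key,
                                  &my_chip_irq_request_key);
            return request_threaded_irq(irq, NULL, my_chip_irq_thread,
                                        IRQF_ONESHOT, "my-chip", dev_id);
    }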
-
wenxu authored
The lwtunnel_state does not initialize the dst_cache, which means ip_md_tunnel_xmit can't use the dst_cache and has to look up the route table for every packet. Signed-off-by: wenxu <wenxu@ucloud.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
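A small sketch of the idea, under the assumption that the tunnel build-state path owns a struct dst_cache that simply was never initialized (the struct and function names here are illustrative, not the exact ones the patch touches):

    #include <linux/gfp.h>
    #include <net/dst_cache.h>

    /* Illustrative per-tunnel state carrying a dst_cache. */
    struct my_tun_state {
            struct dst_cache dst_cache;
            /* ... other encap state ... */
    };

    static int my_tun_build_state(struct my_tun_state *st)
    {
            /* Without this init the cache is unusable and the xmit path
             * falls back to a route lookup for every single packet. */
            return dst_cache_init(&st->dst_cache, GFP_KERNEL);
    }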
-
Vakul Garg authored
The patch enables returning 'type' in msghdr for records that are retrieved with MSG_PEEK in recvmsg. Further, it prevents records peeked from the socket from getting merged with any other record of a different type when records are subsequently dequeued from strparser. For each record, we now retain its type in the sk_buff's control buffer cb[]. Inside the control buffer, the record's full length and offset are already stored by strparser in 'struct strp_msg'; we store the record type after 'struct strp_msg' inside 'struct tls_msg'. For TLS 1.2, the type is stored just after record dequeue. For TLS 1.3, the type is stored after the record has been decrypted. Inside process_rx_list(), before processing a non-data record, we check that we are able to return the record type to the user application. If not, the decrypted records in the tls context's rx_list are left there without consuming any data. Fixes: 692d7b5d ("tls: Fix recvmsg() to be able to peek across multiple records") Signed-off-by: Vakul Garg <vakul.garg@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
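A minimal sketch of the cb[] layout the message describes, assuming it only loosely matches the actual definitions (treat the struct contents as illustrative):

    #include <linux/skbuff.h>
    #include <net/strparser.h>

    /* Per-record metadata kept in skb->cb: strparser's offset/length first,
     * then the TLS record type stored right behind it. */
    struct tls_msg {
            struct strp_msg rxm;    /* full_len/offset filled in by strparser */
            u8 control;             /* TLS record type (data, alert, ...) */
    };

    static inline struct tls_msg *tls_msg(struct sk_buff *skb)
    {
            return (struct tls_msg *)strp_msg(skb);
    }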
-
David S. Miller authored
Kefeng Wang says: ==================== ipv4/v6: icmp: small cleanups and updates v2: - Add cover letter and use a proper patch subject-prefix, suggested by Eric Dumazet This patch series contains some small cleanups and updates: 1) use icmp/v6_sk_exit when icmp_sk_init fails instead of open-coding the cleanup 2) use the new percpu allocation interface for ipv6.icmp_sk ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kefeng Wang authored
Use percpu allocation for the ipv6.icmp_sk. Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
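A hedged sketch of what the switch to the percpu interface looks like in general (simplified helpers, not the exact ipv6 init code):

    #include <linux/percpu.h>
    #include <net/sock.h>

    /* Illustrative: one ICMP socket pointer per CPU, taken from the percpu
     * allocator instead of a manually sized pointer array. */
    static struct sock * __percpu *my_icmp_sk_alloc(void)
    {
            return alloc_percpu(struct sock *);
    }

    static void my_icmp_sk_assign(struct sock * __percpu *icmp_sk, int cpu,
                                  struct sock *sk)
    {
            *per_cpu_ptr(icmp_sk, cpu) = sk;
    }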
-
Kefeng Wang authored
Simply use icmpv6_sk_exit() when inet_ctl_sock_create() fails in icmpv6_sk_init(). Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kefeng Wang authored
Simply use icmp_sk_exit() when inet_ctl_sock_create() fails in icmp_sk_init(). Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
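A generic sketch of the error-path pattern shared by the two ICMP patches above, with the per-netns storage reduced to a bare array so the shape stands out (everything prefixed my_ is hypothetical):

    #include <linux/in.h>
    #include <linux/net.h>
    #include <linux/socket.h>
    #include <net/inet_common.h>

    #define MY_NR_ICMP_SK 4
    static struct sock *my_icmp_sk[MY_NR_ICMP_SK];

    static void my_icmp_sk_exit(struct net *net)
    {
            int i;

            for (i = 0; i < MY_NR_ICMP_SK; i++)
                    if (my_icmp_sk[i])
                            inet_ctl_sock_destroy(my_icmp_sk[i]);
    }

    static int my_icmp_sk_init(struct net *net)
    {
            int i, err;

            for (i = 0; i < MY_NR_ICMP_SK; i++) {
                    err = inet_ctl_sock_create(&my_icmp_sk[i], PF_INET,
                                               SOCK_RAW, IPPROTO_ICMP, net);
                    if (err < 0) {
                            /* Reuse the exit helper instead of an
                             * open-coded unwind loop. */
                            my_icmp_sk_exit(net);
                            return err;
                    }
            }
            return 0;
    }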
-
Herbert Xu authored
This patch fixes an uninitialised return value error in ila_xlat_nl_cmd_flush. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Fixes: 6c4128f6 ("rhashtable: Remove obsolete...") Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
wenxu authored
The metadata_dst does not initialize the dst_cache, which means ip_md_tunnel_xmit can't use the dst_cache and has to look up the route table for every packet. Signed-off-by: wenxu <wenxu@ucloud.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Huazhong Tan says: ==================== code optimizations & bugfixes for HNS3 driver This patchset includes bugfixes and code optimizations for the HNS3 ethernet controller driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huazhong Tan authored
If hns3_client_start() fails in hns3_client_init(), the register_dev() call should be undone in its error handling. Fixes: a6d818e3 ("net: hns3: Add vport alive state checking support") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Shiju Jose authored
Presently the hns reset_type for roce errors is set in the hclge_log_and_clear_rocee_ras_error function. This function is also called to detect and clear roce errors while enabling the rdma error interrupts; however, no hns reset is requested in that case. Because the reset_type set in that case is not cleared, a wrong reset_type can be used with a subsequent hns reset. This patch moves setting of the hns reset_type for roce errors from hclge_log_and_clear_rocee_ras_error to hclge_handle_rocee_ras_error. Fixes: 630ba007 ("net: hns3: add handling of RDMA RAS errors") Reported-by: Huazhong Tan <tanhuazhong@huawei.com> Reported-by: Xiaofei Tan <tanxiaofei@huawei.com> Signed-off-by: Shiju Jose <shiju.jose@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
For revision 0x20, a VF shares the same RSS config with the PF. The original code always returned 0 when querying the RSS hash key for a VF. This patch fixes it by returning the hash key obtained from the PF. Fixes: 374ad291 ("net: hns3: net: hns3: Add RSS general configuration support for VF") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
For revision 0x21, the VF VLAN filter switch is per function, so it is necessary to enable the VF VLAN filter for each VF during initialization. Otherwise, a VF will be able to receive broadcast packets with an unknown VLAN when the PF enters promisc mode. Fixes: 64d114f0 ("net: hns3: Add egress/ingress vlan filter for revision 0x21") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Peng Li authored
This patch adds support for configuring the depth of the tx and rx rings separately via the ethtool "-G" command. Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
hnae3_get_bit uses hnae3_get_field, and hnae3_get_field masks the data, which is unnecessary in the data path. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
hnae3_set_bit and hnae3_set_field mask the data before setting the field or bit, which is unnecessary because the data is already zero-initialized. Suggested-by: John Garry <john.garry@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
This patch adds unlikely() hints for error handling in the critical data path. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
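In plain C the pattern is simply the following (illustrative function, not one of the driver's):

    #include <linux/compiler.h>
    #include <linux/errno.h>

    /* Error checks on the hot TX/RX path are annotated with unlikely() so the
     * compiler lays out the common case as the straight-line path. */
    static int my_fill_desc(void *desc, int len)
    {
            if (unlikely(!desc || len <= 0))
                    return -EINVAL;

            /* ... normal, common-case descriptor filling ... */
            return 0;
    }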
-
Yunsheng Lin authored
The fill_desc op has only one implementation, and get_rxd_bnum is unused, so this patch removes them. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
This patch limits the scope of some variables in hns3_fill_desc as much as possible, and only sets l3_type and l4_type when necessary. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
This patch uses a shift offset to avoid doing mult and div operations. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
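A generic sketch of the trick being described, assuming the relevant size is a power of two (the constant and helper names are made up):

    /* Descriptor size of 32 bytes expressed as a shift, so index math can use
     * shifts instead of multiply/divide. Purely illustrative values. */
    #define MY_DESC_SIZE_SHIFT      5

    static inline unsigned long my_desc_offset(unsigned int i)
    {
            return (unsigned long)i << MY_DESC_SIZE_SHIFT;  /* was: i * 32 */
    }

    static inline unsigned int my_desc_index(unsigned long offset)
    {
            return offset >> MY_DESC_SIZE_SHIFT;            /* was: offset / 32 */
    }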
-
Yunsheng Lin authored
This patch adds XPS setting support to the hns3 driver, based on the interrupt affinity info. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
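A hedged sketch of how an XPS mapping can be derived from an interrupt's affinity mask (names are illustrative; the hns3 code will differ in detail):

    #include <linux/cpumask.h>
    #include <linux/netdevice.h>

    /* Map a TX queue's XPS CPUs to the cpumask its interrupt vector is affine
     * to, so packets are queued on the CPU that also handles the completion. */
    static void my_set_queue_xps(struct net_device *ndev, u16 queue,
                                 const struct cpumask *irq_affinity)
    {
            int err;

            err = netif_set_xps_queue(ndev, irq_affinity, queue);
            if (err)
                    netdev_warn(ndev, "failed to set xps for queue %u: %d\n",
                                queue, err);
    }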
-
David S. Miller authored
Ido Schimmel says: ==================== mlxsw: spectrum_acl: Don't take rtnl mutex for region rehash Jiri says: During region rehash, a new region is created with a more optimized set of masks (ERPs). When transitioning to the new region, all the rules from the old region are copied one-by-one to the new region. This transition can be time consuming and is currently done under the RTNL lock. In order to remove the RTNL lock dependency during region rehash, introduce multiple smaller locks guarding dedicated structures or parts of them. That is the vast majority of this patchset. Only patch #1 is a simple cleanup, and patches 12-15 improve or introduce new selftests. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Do insertions and removal of filters during rehash in higher volumes. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Add checking of newly added trace. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Hit the new tracepoint once the vregion migration ends. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Track the basic codepaths of delta rehash handling, using mlxsw tracepoints. Use IPv6 addresses. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Other mutexes now take care of proper locking for this, so it is no longer necessary to take the RTNL mutex here. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
No longer require RTNL lock in this code. Newly introduced mutexes take care of guarding objagg and bloom filter. There is no need to guard gen_pool_alloc()/gen_pool_free() as they are fine to be called lockless. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Relax the dependency on the rtnl mutex during vregion_rehash_intrvl_set(). The vregion list is protected with a newly introduced mutex. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Protect objagg structures by adding a mutex to the ERP code and taking it during structure manipulation. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
For the MR ACL profile it does not make sense to do periodic rehashes, as there is only one mask in use during the whole vregion lifetime. The periodic work is therefore scheduled but the rehash never happens. So allow enabling/disabling rehash for the whole group, which is configured per profile, and disable rehashing for the MR profile. Addition to the vregion list is done only in case rehash is enabled on the particular vregion. Also, the addition is moved after the delayed work init to avoid scheduling uninitialized work from vregion_rehash_intrvl_set(). Symmetrically, deletion from the list is done before canceling the delayed work so the work is not scheduled by vregion_rehash_intrvl_set() again. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
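A generic sketch of the ordering described above: the vregion joins the rehash list only after its delayed work is initialized, and leaves the list before the work is cancelled (all names below are illustrative, not mlxsw's):

    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/workqueue.h>

    /* Illustrative shape only. The list is what the interval-set code walks
     * to (re)schedule rehash work, so membership brackets the work lifetime. */
    struct my_vregion {
            struct list_head list;
            struct delayed_work rehash_dw;
    };

    static LIST_HEAD(my_vregion_list);
    static DEFINE_MUTEX(my_vregion_list_lock);

    static void my_rehash_work(struct work_struct *work) { /* rehash here */ }

    static void my_vregion_setup(struct my_vregion *vregion, bool rehash_enabled)
    {
            INIT_DELAYED_WORK(&vregion->rehash_dw, my_rehash_work);
            if (rehash_enabled) {   /* add only after the work is initialized */
                    mutex_lock(&my_vregion_list_lock);
                    list_add_tail(&vregion->list, &my_vregion_list);
                    mutex_unlock(&my_vregion_list_lock);
            }
    }

    static void my_vregion_teardown(struct my_vregion *vregion, bool rehash_enabled)
    {
            if (rehash_enabled) {   /* remove first, so nobody reschedules it */
                    mutex_lock(&my_vregion_list_lock);
                    list_del(&vregion->list);
                    mutex_unlock(&my_vregion_list_lock);
            }
            cancel_delayed_work_sync(&vregion->rehash_dw);
    }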
-
Jiri Pirko authored
The Bloom filter is shared among multiple regions. For updates, it needs to be guarded by a separate mutex. Do that in order to not rely on the RTNL mutex. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
In order to remove dependency on RTNL, introduce a mutex to guard vregion structure, list of chunks and list of entries in chunks. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
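A minimal sketch of the locking scope described, with one per-vregion mutex covering the chunk list and the entries in each chunk (names are illustrative):

    #include <linux/list.h>
    #include <linux/mutex.h>

    /* Illustrative only: one lock per vregion protects the vregion itself, its
     * chunk list and the entry lists of its chunks, instead of rtnl_lock(). */
    struct my_vregion {
            struct mutex lock;              /* guards chunk_list and entries */
            struct list_head chunk_list;
    };

    struct my_chunk {
            struct list_head list;          /* member of vregion->chunk_list */
            struct list_head entry_list;
    };

    static void my_vregion_init(struct my_vregion *vregion)
    {
            mutex_init(&vregion->lock);
            INIT_LIST_HEAD(&vregion->chunk_list);
    }

    static void my_chunk_attach(struct my_vregion *vregion, struct my_chunk *chunk)
    {
            INIT_LIST_HEAD(&chunk->entry_list);
            mutex_lock(&vregion->lock);
            list_add_tail(&chunk->list, &vregion->chunk_list);
            mutex_unlock(&vregion->lock);
    }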
-
Jiri Pirko authored
Refactor existing _vchunk_assoc/_vchunk_deassoc() functions into _vregion_get()/_vregion_put() to make the code simpler and prepared for vregion locking. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
In order to remove the RTNL lock dependency, the regions list in a group needs to be protected. Introduce a mutex to do the job. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Make the existing group structure contain the fields needed for HW region list manipulation, and move the rest of the fields into a new vgroup struct. This makes the layering cleaner, as the vgroup struct is at a higher level than the low-level group struct. It also makes it possible to introduce fine-grained locking. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-