- 18 Mar, 2018 8 commits
-
-
Thomas Falcon authored
Introduce function that initializes one TX pool. Use that to create each pool entry in both the standard TX pool and TSO pool arrays. Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Falcon authored
Introduce function that frees one TX pool. Use that to release each pool in both the standard TX pool and TSO pool arrays. Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Falcon authored
Update TX pool reset routine to accommodate new TSO pool array. Introduce a function that resets one TX pool, and use that function to initialize each pool in both pool arrays. Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Thomas Falcon authored
Remove some unused fields in the structure and include values describing the individual buffer size and number of buffers in a TX pool. This allows us to use these fields for TX pool buffer accounting as opposed to using hard coded values. Include a new pool array for TSO transmissions. Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Al Viro authored
use proc_remove_subtree() for subtree removal, both on setup failure halfway through and on teardown. No need to make simple things complex... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Stephen Hemminger says: ==================== hv_netvsc: minor enhancements A couple of small things for net-next ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stephen Hemminger authored
This adds tracepoints to the driver, which have proved useful in debugging startup and shutdown race conditions. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Stephen Hemminger authored
The caller has a valid pointer; pass it to rndis_filter_halt_device() and avoid any possible RCU races here. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 17 Mar, 2018 16 commits
-
-
Grygorii Strashko authored
In VLAN_AWARE mode, CPSW can insert a VLAN header encapsulation word on Host port 0 egress (RX), before the packet data, if the RX_VLAN_ENCAP bit is set in the CPSW_CONTROL register. The VLAN header encapsulation word has the following format:
HDR_PKT_Priority, bits 29-31 - Header Packet VLAN priority (highest priority: 7)
HDR_PKT_CFI, bit 28 - Header Packet VLAN CFI bit
HDR_PKT_Vid, bits 27-16 - Header Packet VLAN ID
PKT_Type, bits 8-9 - Packet Type, indicating whether the packet is VLAN-tagged, priority-tagged, or non-tagged:
00: VLAN-tagged packet
01: Reserved
10: Priority-tagged packet
11: Non-tagged packet
This feature can be used to implement TX VLAN offload for VLAN-tagged packets and to insert the VLAN tag when a non-tagged packet was received on a port with PVID set. As per the documentation, CPSW never modifies packet data on Host egress (RX); as a result, without this feature enabled, the Host port cannot properly receive packets which entered the switch non-tagged through an external port with PVID set (when a non-tagged packet is forwarded from an external port with PVID set to another external port, the packet is VLAN-tagged properly).
Implementation details:
- on RX, the driver checks the CPDMA status bit RX_VLAN_ENCAP BIT(19) in the CPPI descriptor to identify when the VLAN header encapsulation word is present;
- if PKT_Type = 0x01 or 0x02, ignore the VLAN header encapsulation word and pass the packet as is;
- if HDR_PKT_Vid = 0, ignore the VLAN header encapsulation word and pass the packet as is;
- in dual MAC mode, traffic is separated between ports using default port VLANs, which are not visible to the Host and so should not be reported; hence, check for default port VLANs in dual MAC mode and ignore the VLAN header encapsulation word;
- otherwise, fill the SKB with VLAN info using __vlan_hwaccel_put_tag();
- if PKT_Type = 0x00 (VLAN-tagged), strip the VLAN header from the SKB.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
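Purely as an illustration of the word layout and filtering rules described above, here is a minimal standalone C sketch that decodes the encapsulation word. The macro and function names are ad hoc, not the driver's own identifiers, and the dual-MAC default-port-VLAN check is omitted.

    #include <stdbool.h>
    #include <stdint.h>

    /* Ad hoc field extractors for the RX VLAN encapsulation word layout
     * described above (names are illustrative, not taken from the driver).
     */
    #define ENCAP_HDR_PKT_PRIO(w)  (((w) >> 29) & 0x7)    /* bits 31-29 */
    #define ENCAP_HDR_PKT_CFI(w)   (((w) >> 28) & 0x1)    /* bit 28 */
    #define ENCAP_HDR_PKT_VID(w)   (((w) >> 16) & 0xfff)  /* bits 27-16 */
    #define ENCAP_PKT_TYPE(w)      (((w) >> 8)  & 0x3)    /* bits 9-8 */

    #define ENCAP_PKT_VLAN_TAGGED  0x0
    #define ENCAP_PKT_RESERVED     0x1
    #define ENCAP_PKT_PRIO_TAGGED  0x2
    #define ENCAP_PKT_NON_TAGGED   0x3

    /* Apply the filtering rules from the implementation notes: report VLAN
     * info only for words that actually carry a usable VLAN ID.
     */
    static bool rx_encap_word_has_vlan(uint32_t w)
    {
            uint32_t type = ENCAP_PKT_TYPE(w);

            if (type == ENCAP_PKT_RESERVED || type == ENCAP_PKT_PRIO_TAGGED)
                    return false;   /* no VLAN ID to report */
            if (ENCAP_HDR_PKT_VID(w) == 0)
                    return false;   /* VID 0: pass packet as is */
            return true;            /* tag SKB, e.g. via __vlan_hwaccel_put_tag() */
    }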
-
Arjun Vynipadath authored
Set sge_uld_rxq_info to NULL in free_queues_uld(), since sge_uld_rxq_info is referenced in cxgb_up(). This fixes a panic when the interface is brought up after a ULD queue creation failure. Fixes: 94cdb8bb (cxgb4: Add support for dynamic allocation of resources for ULD) Signed-off-by: Arjun Vynipadath <arjun@chelsio.com> Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
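As a rough sketch of the pattern this fix applies (the identifiers below are simplified stand-ins, not the cxgb4 structures), the point is simply to clear the pointer after freeing so that a later bring-up can detect the earlier failure instead of dereferencing freed memory:

    #include <stdlib.h>

    /* Simplified stand-in types for illustration only. */
    struct sge_uld_rxq_info;

    struct sge {
            struct sge_uld_rxq_info **uld_rxq_info;
    };

    static void free_queues_uld(struct sge *s, unsigned int uld_type)
    {
            free(s->uld_rxq_info[uld_type]);
            s->uld_rxq_info[uld_type] = NULL;  /* guard against reuse after failure */
    }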
-
Sowmini Varadhan authored
rds_tcp_connection allocation/free management has the potential to be called from __rds_conn_create after IRQs have been disabled, so spin_[un]lock_bh cannot be used with rds_tcp_conn_lock. Bottom-halves that need to synchronize for critical sections protected by rds_tcp_conn_lock should instead use rds_destroy_pending() correctly. Reported-by: syzbot+c68e51bb5e699d3f8d91@syzkaller.appspotmail.com Fixes: ebeeb1ad ("rds: tcp: use rds_destroy_pending() to synchronize netns/module teardown and rds connection/workq management") Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jon Maloy says: ==================== tipc: obsolete zone concept Functionality related to the 'zone' concept was never implemented in TIPC. In this series we eliminate the remaining traces of it in the code, and can hence take a first step in reducing the footprint and complexity of the binding table. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Maloy authored
We rename some lists and fields in struct publication, both to make the naming more consistent and to better reflect their roles. We also update the descriptions of those lists.
node_list -> local_publ
cluster_list -> all_publ
pport_list -> binding_sock
ref -> port
There are no functional changes in this commit. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Maloy authored
The size of struct publication can be reduced further. Membership in the lists 'nodesub_list' and 'local_list' is mutually exclusive, in that remote publications use the former and local publications the latter. We replace the two lists with a single one named 'binding_node', which reflects what it really is. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Maloy authored
As a further consequence of the previous commits, we can also remove the member 'zone_list' in struct name_info and struct publication. Instead, we now let the member cluster_list take over the role as container of all publications of a given <type, lower, upper>. We also remove the counters for the size of those lists, since they don't serve any purpose. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Maloy authored
As a consequence of the previous commit, we can now eliminate the zone scope related lists in the name table. We start with name_table::publ_list[3], which can now be replaced with two lists, one for node scope publications and one for cluster scope publications. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Maloy authored
Publications for TIPC_CLUSTER_SCOPE and TIPC_ZONE_SCOPE are in all aspects handled the same way, both on the publishing node and on the receiving nodes. Despite previous ambitions to the contrary, this is never going to change, so we accept the consequence of this and obsolete TIPC_ZONE_SCOPE and the related macros/functions. Whenever a user does a bind() or sendmsg() attempt using ZONE_SCOPE, we translate this internally to CLUSTER_SCOPE, while remaining compatible with users and remote nodes still using ZONE_SCOPE. Furthermore, the non-formalized scope value 0 has always been permitted for use during lookup, with the same meaning as ZONE_SCOPE/CLUSTER_SCOPE. We now permit it even as a binding scope, but for compatibility reasons we choose not to change the value of TIPC_CLUSTER_SCOPE. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
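For context, a minimal userspace sketch of a cluster-scope bind on an AF_TIPC socket is shown below; after this change, a legacy application passing TIPC_ZONE_SCOPE here receives the same internal treatment. The service type 18888 and the instance range are arbitrary examples, and <linux/tipc.h> is assumed to be available.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <linux/tipc.h>

    int main(void)
    {
            int sd = socket(AF_TIPC, SOCK_RDM, 0);
            struct sockaddr_tipc addr;

            if (sd < 0) {
                    perror("socket(AF_TIPC)");
                    return 1;
            }

            memset(&addr, 0, sizeof(addr));
            addr.family = AF_TIPC;
            addr.addrtype = TIPC_ADDR_NAMESEQ;
            addr.scope = TIPC_CLUSTER_SCOPE;   /* ZONE_SCOPE now maps to this */
            addr.addr.nameseq.type = 18888;    /* arbitrary example service type */
            addr.addr.nameseq.lower = 0;
            addr.addr.nameseq.upper = 99;

            if (bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                    perror("bind");
            return 0;
    }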
-
David S. Miller authored
Kirill Tkhai says: ==================== Converting pernet_operations (part #8) This series continues reviewing and converting pernet_operations so that they can be executed in parallel for several net namespaces at the same time. The converted operations are spread over the tree; most of them are ipvs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
These pernet_operations register and unregister ipvs app. register_ip_vs_app(), unregister_ip_vs_app() and register_ip_vs_app_inc() modify per-net structures, and there are no global structures touched. So, this looks safe to be marked as async. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
Exit method stops two per-net threads and cancels delayed work. Everything looks nicely per-net divided. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
These pernet_operations register and unregister nf hooks, /proc entries, sysctl, and percpu statistics. There are several global lists, and the only list modified without exclusive locks is ip_vs_conn_tab in ip_vs_conn_flush(). We iterate the list and force the timers to expire immediately. Since several timer expirations were already possible before this patch, and since they are safe, the patch does not introduce new parallelism in their destruction. These pernet_operations look safe to be converted. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
These pernet_operations initialize and destroy the net_generic() data pointed to by ovs_net_id. The exit method destroys vports from alive nets to the exiting net. Since these are the only pernet_operations interested in this data, and the exit method is executed under the exclusive global lock (ovs_mutex), they are safe to be executed in parallel. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
These pernet_operations register and unregister a sysctl table. The exit method frees platform_labels from net::mpls::platform_label. Everything is per-net, and they look safe to be marked async. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
The init method is rather simple. The exit method queues del_work for every tunnel in the per-net list. This seems safe to be marked async. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 Mar, 2018 16 commits
-
-
Yousuk Seung authored
Set tp->snd_ssthresh to BDP upon STARTUP exit. This allows us to check, via SCM_TIMESTAMPING_OPT_STATS, whether a BBR flow has exited STARTUP and what the BDP was at the time of STARTUP exit. Since BBR does not use snd_ssthresh, this fix has no impact on BBR's behavior. Signed-off-by: Yousuk Seung <ysseung@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yousuk Seung authored
This patch adds TCP_NLA_SND_SSTHRESH stat into SCM_TIMESTAMPING_OPT_STATS that reports tcp_sock.snd_ssthresh. Signed-off-by: Yousuk Seung <ysseung@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
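A hedged userspace sketch of consuming the new attribute follows, assuming the caller has already enabled SOF_TIMESTAMPING_OPT_STATS and extracted the SCM_TIMESTAMPING_OPT_STATS payload from the error-queue control message. The payload is a block of struct nlattr entries and is walked by hand here rather than with a netlink library; TCP_NLA_SND_SSTHRESH requires kernel headers that include this patch.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <linux/netlink.h>
    #include <linux/tcp.h>

    /* Walk the nlattr block from an SCM_TIMESTAMPING_OPT_STATS cmsg and
     * print snd_ssthresh. 'buf' and 'len' are assumed to be CMSG_DATA()
     * and the payload length of that cmsg.
     */
    static void print_snd_ssthresh(const void *buf, size_t len)
    {
            const struct nlattr *nla = buf;

            while (len >= NLA_HDRLEN) {
                    uint16_t alen = nla->nla_len;

                    if (alen < NLA_HDRLEN || alen > len)
                            break;                  /* malformed attribute */

                    if (nla->nla_type == TCP_NLA_SND_SSTHRESH) {
                            uint32_t ssthresh;

                            memcpy(&ssthresh, (const char *)nla + NLA_HDRLEN,
                                   sizeof(ssthresh));
                            printf("snd_ssthresh: %u\n", ssthresh);
                    }

                    if (NLA_ALIGN(alen) >= len)
                            break;                  /* last attribute */
                    len -= NLA_ALIGN(alen);
                    nla = (const struct nlattr *)((const char *)nla +
                                                  NLA_ALIGN(alen));
            }
    }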
-
Vinicius Costa Gomes authored
Add a way to configure whether poll() should wait forever for an event, the number of packets that should be sent for each test, and whether there should be any delay between packets. Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
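The poll() convention behind the "wait forever" knob is simply the standard negative timeout; a tiny hedged sketch is below (the variable names are illustrative, not the selftest's own).

    #include <poll.h>
    #include <stdbool.h>

    /* Wait for readability or an error-queue event on fd: a negative
     * timeout makes poll() block indefinitely, a bounded timeout lets
     * the test give up.
     */
    static int wait_for_event(int fd, bool wait_forever, int timeout_ms)
    {
            struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLERR };

            return poll(&pfd, 1, wait_forever ? -1 : timeout_ms);
    }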
-
Intiyaz Basha authored
1) Moved interrupt enable related code from octeon_process_droq_poll_cmd() to a separate function, octeon_enable_irq().
2) Removed the wrapper function octeon_process_droq_poll_cmd(), directly using octeon_droq_process_poll_pkts().
3) Removed unused POLL_EVENT_XXX macros.
Signed-off-by: Intiyaz Basha <intiyaz.basha@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Ursula Braun says: ==================== net/smc: IPv6 support these smc patches for the net-next tree add IPv6 support. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Karsten Graul authored
Add ipv6 support to the smc socket layer functions. Make use of the updated clc layer functions to retrieve and match ipv6 information. The indicator for ipv4 or ipv6 is the protocol constant that is provided in the socket() call with address family AF_SMC. Based-on-patch-by: Takanori Ueda <tkueda@jp.ibm.com> Signed-off-by: Karsten Graul <kgraul@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
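A minimal userspace sketch of what the new protocol constant enables is shown below. It assumes recent kernel headers that define AF_SMC and SMCPROTO_SMC6 (the fallback #defines here are for illustration only), and the port number is an arbitrary example.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef AF_SMC
    #define AF_SMC 43               /* address family number used by SMC */
    #endif
    #ifndef SMCPROTO_SMC6
    #define SMCPROTO_SMC6 1         /* SMC protocol variant for IPv6 peers */
    #endif

    int main(void)
    {
            struct sockaddr_in6 a = {
                    .sin6_family = AF_INET6,
                    .sin6_addr   = IN6ADDR_ANY_INIT,
                    .sin6_port   = htons(12345),   /* arbitrary example port */
            };
            /* The protocol argument selects IPv4 vs IPv6 CLC handling. */
            int sd = socket(AF_SMC, SOCK_STREAM, SMCPROTO_SMC6);

            if (sd < 0 || bind(sd, (struct sockaddr *)&a, sizeof(a)) < 0) {
                    perror("AF_SMC over IPv6");
                    return 1;
            }
            return 0;
    }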
-
Karsten Graul authored
The CLC layer is updated to support ipv6 proposal messages from peers and to match incoming proposal messages against the ipv6 addresses of the net device. struct smc_clc_ipv6_prefix is updated to provide the space for an ipv6 address (the struct was not used before). SMC_CLC_MAX_LEN is updated to include the size of the proposal prefix. Existing code in net is not affected; the previous SMC_CLC_MAX_LEN value is large enough to hold ipv4 proposal messages. Signed-off-by: Karsten Graul <kgraul@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Karsten Graul authored
Introduce functions smc_clc_prfx_set to retrieve IP information for the CLC proposal msg and smc_clc_prfx_match to match the contents of a proposal message against the IP addresses of the net device. The new functions replace the functionality provided by smc_clc_netinfo_by_tcpsk, which is removed by this patch. The match functionality is extended to scan all ipv4 addresses of the net device for a match against the ipv4 subnet from the proposal msg. Signed-off-by: Karsten Graul <kgraul@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ganesh Goudar authored
Notify ULD drivers if the adapter encounters a fatal error. Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Kirill Tkhai says: ==================== Introduce rtnl_lock_killable() rtnl_lock() is a widely used mutex in the kernel. Some kernel code does memory allocations under it. In case of memory deficit this may invoke the OOM killer, but the problem is that a killed task can't exit if it's waiting for the mutex. This can lead to deadlock and panic. This patchset adds a new primitive, which responds to SIGKILL, allowing it to be used in places where we don't want to sleep forever. The first such place is converted to use it. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
This patch converts one of the hot paths using rtnl_lock() over to rtnl_lock_killable(). Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Kirill Tkhai authored
rtnl_lock() is a widely used mutex in the kernel. Some kernel code does memory allocations under it. In case of memory deficit this may invoke the OOM killer, but the problem is that a killed task can't exit if it's waiting for the mutex. This can lead to deadlock and panic. This patch adds a new primitive, which responds to SIGKILL, allowing it to be used in places where we don't want to sleep forever. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
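A kernel-side usage sketch of the new primitive follows; the surrounding function is hypothetical, and only rtnl_lock_killable()/rtnl_unlock() are the real API.

    #include <linux/rtnetlink.h>

    /* Hypothetical caller showing the intended pattern: bail out with the
     * error from rtnl_lock_killable() instead of sleeping uninterruptibly
     * when the task has already been killed (e.g. by the OOM killer).
     */
    static int example_rtnl_operation(void)
    {
            int err;

            err = rtnl_lock_killable();     /* 0 on success, -EINTR if killed */
            if (err)
                    return err;

            /* ... work that must run under RTNL ... */

            rtnl_unlock();
            return 0;
    }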
-
Tonghao Zhang authored
The SK_MEM_QUANTUM was changed from PAGE_SIZE to 4096. Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tonghao Zhang authored
This patch moves udp_rmem_min and udp_wmem_min into the namespace and initializes udp_l3mdev_accept explicitly. udp_rmem_min/udp_wmem_min affect the UDP rx/tx queues; with this patch, namespaces can set them differently. Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
David Ahern says: ==================== net/ipv6: Address checks need to consider the L3 domain
IPv6 prohibits a local address from being used as a gateway for a route. However, it is ok for the gateway to be a local address in a different L3 domain (e.g., VRF). This allows, for example, veth pairs to connect VRFs. ip6_route_info_create calls ipv6_chk_addr_and_flags for gateway addresses to determine if the address is a local one, but ipv6_chk_addr_and_flags does not currently consider L3 domains. As a result routes can not be added in one VRF with a nexthop that points to a local address in a second VRF.
Resolve by comparing the l3mdev for the passed in device and requiring an l3mdev match with the device containing an address. The intent of checking for an address on the specified device versus any device in the domain is maintained by a new argument to skip the check between the passed in device and the device with the address.
Patch 1 moves the gateway validation from ip6_route_info_create into a helper; the function is long enough and refactoring drops the indent level.
Patch 2 adds a skip_dev_check argument to ipv6_chk_addr_and_flags to allow a device to always be passed yet skip the device check when looking at addresses, and fixes up a few ipv6_chk_addr callers that pass a NULL device.
Patch 3 adds l3mdev checks to ipv6_chk_addr_and_flags.
Patches 4 and 5 do some refactoring to the fib_tests script, and then patch 6 adds nexthop validation tests.
v4
- separated l3mdev check into a separate patch (patch 3 of this set) as suggested by Kirill
- consolidated dev and ipv6_chk_addr_and_flags call into 1 if (Kirill)
- added a temp variable for gw type (Kirill)
v3
- set skip_dev_check in ipv6_chk_addr based on dev == NULL (per comment from Ido)
v2
- handle 2 variations of route spec with sane error path
- add test cases
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David Ahern authored
Add series of tests for valid and invalid nexthop specs for IPv6.
$ TEST=fib_nexthop_test ./fib_tests.sh
...
IPv6 nexthop tests
TEST: Directly connected nexthop, unicast address [ OK ]
TEST: Directly connected nexthop, unicast address with device [ OK ]
TEST: Gateway is linklocal address [ OK ]
TEST: Gateway is linklocal address, no device [ OK ]
TEST: Gateway can not be local unicast address [ OK ]
TEST: Gateway can not be local unicast address, with device [ OK ]
TEST: Gateway can not be a local linklocal address [ OK ]
TEST: Gateway can be local address in a VRF [ OK ]
TEST: Gateway can be local address in a VRF, with device [ OK ]
TEST: Gateway can be local linklocal address in a VRF [ OK ]
TEST: Redirect to VRF lookup [ OK ]
TEST: VRF route, gateway can be local address in default VRF [ OK ]
TEST: VRF route, gateway can not be a local address [ OK ]
TEST: VRF route, gateway can not be a local addr with device [ OK ]
Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-