- 17 Jun, 2016 1 commit
-
-
Philippe Reynes authored
The private structure contains a pointer to phydev, but the structure net_device already contains such a pointer. So we can remove the phydev pointer from the private structure and update the driver to use the one contained in struct net_device. Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
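A minimal sketch of the pattern (illustrative only, not the actual driver diff): drop the cached pointer from the driver-private state and read ndev->phydev instead.

    #include <linux/netdevice.h>
    #include <linux/phy.h>

    struct drv_priv_sketch {
        /* other driver state; the old 'struct phy_device *phydev' field is gone */
        int placeholder;
    };

    static void drv_adjust_link_sketch(struct net_device *ndev)
    {
        struct phy_device *phydev = ndev->phydev;    /* was: priv->phydev */

        if (phydev && phydev->link)
            netif_carrier_on(ndev);
        else
            netif_carrier_off(ndev);
    }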
-
- 16 Jun, 2016 39 commits
-
-
David S. Miller authored
Vincent Palatin says: ==================== net: stmmac: dwmac-rk: fixes for Wake-on-Lan on RK3288 In order to support Wake-on-Lan when using the RK3288 integrated MAC (with an external RGMII PHY), we need to avoid shutting down the regulator of the external PHY when the MAC is suspended, as is currently done in the MAC platform code. As a first step, create independent callbacks for suspend/resume rather than re-using the exit/init callbacks, so the dwmac platform driver can behave differently on suspend, where it might skip shutting down the PHY, and at module unloading. Then update the dwmac-rk driver to switch off the PHY regulator only if we are not planning to wake up from the LAN. Finally, add the PMT interrupt to the MAC device tree configuration, so we can wake up the core from it when the PHY has received the magic packet. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vincent Palatin authored
In order to use Wake-on-Lan on the RK3288 integrated MAC, we need to wake up the CPU on the PMT interrupt when the MAC and the PHY are in low power mode. Add the interrupt declaration. Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vincent Palatin authored
When suspending the machine, do not shut down the external PHY by cutting its regulator in the MAC platform driver suspend code if Wake-on-Lan is enabled, else it cannot wake us up. In order to do this, split the suspend/resume callbacks from the init/exit callbacks, so we can condition the power-down on the lack of need to wake up from the LAN but do it unconditionally when unloading the module. Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
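A hedged sketch of the suspend-side condition (names are illustrative, not the exact dwmac-rk code): the PHY regulator is cut only when Wake-on-Lan is not requested.

    #include <linux/device.h>
    #include <linux/pm_wakeup.h>
    #include <linux/regulator/consumer.h>

    static void rk_gmac_suspend_sketch(struct device *dev, struct regulator *phy_reg)
    {
        /* WoL requested: keep the PHY powered so the PMT wake event can fire */
        if (device_may_wakeup(dev))
            return;

        /* no wake-up needed from the LAN: safe to power down the external PHY */
        regulator_disable(phy_reg);
    }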
-
Vincent Palatin authored
Let the stmmac platform drivers provide dedicated suspend and resume callbacks rather than always re-using the init and exit callbacks. If the driver does not provide the suspend or resume callback, we fall back to the old behavior of trying to use exit or init. This allows a specific platform to perform only a partial power-down on suspend if Wake-on-Lan is enabled, but always perform the full shutdown sequence if the module is unloaded. Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
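A sketch of the fallback dispatch; the struct below merely stands in for the relevant callback fields of plat_stmmacenet_data (it is an assumption for illustration, not the kernel definition).

    #include <linux/platform_device.h>

    struct plat_hooks_sketch {
        void *bsp_priv;
        void (*exit)(struct platform_device *pdev, void *priv);
        void (*suspend)(struct platform_device *pdev, void *priv);
    };

    static void pltfr_suspend_sketch(struct platform_device *pdev,
                                     struct plat_hooks_sketch *plat)
    {
        if (plat->suspend)
            plat->suspend(pdev, plat->bsp_priv);    /* dedicated hook */
        else if (plat->exit)
            plat->exit(pdev, plat->bsp_priv);       /* old fallback behavior */
    }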
-
Xin Long authored
Commit d46e416c ("sctp: sctp should change socket state when shutdown is received") may set sk_state to CLOSING in sctp_sock_migrate, but inet_accept doesn't allow an sk_state other than ESTABLISHED/CLOSED for SCTP. So we change sk_state to CLOSED instead of CLOSING, as the sk is actually already closed there. Fixes: d46e416c ("sctp: sctp should change socket state when shutdown is received") Reported-by: Ye Xiaolong <xiaolong.ye@intel.com> Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Fabien Siron authored
Signed-off-by: Fabien Siron <fabien.siron@epita.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Daniel Borkmann says: ==================== bpf: improve fd array release This set improves BPF perf fd array map release with respect to purging entries; the first two patches extend the API as needed. Please see the individual patches for more details. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
The behavior of perf event arrays is quite different from all other map types, as they are tightly coupled to perf event fds, e.g. as shown recently by commit e03e7ee3 ("perf/bpf: Convert perf_event_array to use struct file") to make refcounting on perf events more robust.

A remaining issue in the current code is that additions to the perf event array take a reference on the struct file via perf_event_get(), and that reference is only released via fput() (which eventually cleans up the perf event via perf_event_release_kernel()) when the element is either manually removed from the map from user space or automatically when the last reference on the perf event map is dropped. However, this leaves us with dangling struct files when the map gets pinned after the application owning the perf event descriptor exits; since the struct file reference will in that case only be dropped manually or via pinned file removal, the perf event lives longer than necessary, needlessly consuming resources for that time.

Relations between perf event fds and bpf perf event map fds can be rather complex. For example, maps can act as demuxers among different perf event fds that can possibly be owned by different threads, and based on the index selection from the program, events get dispatched to one of the per-cpu fd endpoints. One perf event fd (or, rather, a per-cpu set of them) can also live in multiple perf event maps at the same time, listening for events. Also, another requirement is that perf event fds can be closed from the application side after they have been attached to the perf event map, so that on exit the perf event map will take care of dropping their references eventually. Likewise, when such maps are pinned, the intended behavior is that a user application does bpf_obj_get(), puts its fds in there and, on exit when the fd is released, they are dropped from the map again, so the map acts rather as a connector endpoint. This also makes perf event maps inherently different from program arrays, as described in more detail in commit c9da161c ("bpf: fix clearing on persistent program array maps").

To tackle this, map entries are marked by the map struct file that added the element to the map. And when the last reference to that map struct file is released from user space, the tracked entries are purged from the map. This is okay, because new map struct file instances, resp. frontends to the anon inode, are provided via bpf_map_new_fd(), which is called when we invoke bpf_obj_get_user() for retrieving a pinned map, but also when an initial instance is created via map_create(). The rest is resolved by the vfs layer automatically for us by keeping a reference count on the map's struct file. Any concurrent updates on the map slot are fine as well; it just means that perf_event_fd_array_release() needs to delete fewer of its own entries.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
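An illustrative sketch of the ownership tracking described above (struct and function names here are assumptions, not the kernel's): each array slot remembers the map struct file that installed it, and the release path purges only the entries owned by the file being released.

    #include <linux/fs.h>

    struct event_entry_sketch {
        struct file *perf_file;    /* holds the reference on the perf event fd */
        struct file *map_file;     /* map struct file that added this entry */
    };

    static void fd_array_release_sketch(struct event_entry_sketch **slots,
                                        unsigned int nr_slots,
                                        struct file *map_file)
    {
        unsigned int i;

        for (i = 0; i < nr_slots; i++) {
            if (slots[i] && slots[i]->map_file == map_file) {
                /* the real code also drops the perf_file reference here */
                slots[i] = NULL;
            }
        }
    }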
-
Daniel Borkmann authored
This patch extends the map_fd_get_ptr() callback that is used by fd array maps, so that the struct file pointer of the related map can be passed in. It's safe to remove the map_update_elem() callback for the two maps, since updates are only allowed from the syscall side, but not from eBPF programs, for these two map types. As in the per-cpu map case, bpf_fd_array_map_update_elem() needs to be called directly here due to the extra argument. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
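The shape of the extended hook, reconstructed from the text above (the declaration is an assumption, not copied from the kernel header):

    struct bpf_map;
    struct file;

    struct fd_map_ops_sketch {
        /* the old form took only (map, fd); map_file is the new argument */
        void *(*map_fd_get_ptr)(struct bpf_map *map, struct file *map_file,
                                int fd);
    };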
-
Daniel Borkmann authored
Add a release callback for maps that is invoked when the last reference to its struct file is gone and the struct file is about to be released by the vfs. The handler will be used by fd array maps. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
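A hedged sketch of how such a hook could be wired up (the callback signature and the .map_release field name are inferred from the description; the body is a placeholder):

    struct bpf_map;
    struct file;

    /* invoked from the map's file release path, once per struct file */
    static void perf_array_release_sketch(struct bpf_map *map, struct file *map_file)
    {
        /* purge the entries that were added via this map_file */
    }

    /* hooked into the map's ops table, roughly:
     *     .map_release = perf_array_release_sketch,
     */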
-
David S. Miller authored
Edward Cree says: ==================== sfc: RX VLAN filtering Adds support for VLAN-qualified receive filters on EF10 hardware. This is needed when running as a guest if the hypervisor has enabled vfs-vlan-restrict, in which case the firmware rejects filters not qualified with VLAN 0. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
If the vPort has the VLAN_RESTRICT flag, VLAN-tagged traffic will not be delivered without corresponding RX filters, which may be proxied to and moderated by the hypervisor. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
It should be done after net_dev->hw_features initialization, to keep the feature there so that it can be enabled later using ethtool. VLAN filtering is enforced and fixed if the vPort requires the usage of VLAN filters to receive tagged traffic. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
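A minimal sketch of the ordering described above (the flag handling is illustrative; 'required' stands in for the vPort VLAN_RESTRICT check):

    #include <linux/netdevice.h>

    static void setup_vlan_filter_sketch(struct net_device *net_dev, bool required)
    {
        /* advertise in hw_features first so ethtool can toggle it later */
        net_dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;

        /* the vPort needs VLAN filters for tagged traffic: force it on */
        if (required)
            net_dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
    }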
-
Martin Habets authored
If the needed firmware filter support is not present, we simply disable the feature. For the feature to work we need firmware filter support for OUTER_VID + LOC_MAC and for OUTER_VID + LOC_MAC_IG. The low-latency firmware can match on OUTER_VID + LOC_MAC but not on OUTER_VID + LOC_MAC_IG. For the capture packet firmware it is the other way around. Only the full-feature variant can match on both combinations. Incorporates a fix by Andrew Rybchenko <Andrew.Rybchenko@oktetlabs.ru> in the net_dev->[hw_]features handling. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
Filter match flags are not a unique criterion to map to a priority, because both unknown unicast and unknown multicast are mapped to LOC_MAC_IG. So the local MAC is also required to map a filter to a priority. MCDI filter flags are a unique criterion for finding the filter priority. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Nearly every time we call efx_ef10_filter_remove_unsafe, we first check for EFX_EF10_FILTER_ID_INVALID, in which case we do nothing. So move that check into the function, simplifying all the call sites. Also, change the return type to void, since none of the callers check it. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
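A sketch of the simplification (the constant's value and the body are placeholders, not the sfc code):

    #define FILTER_ID_INVALID_SKETCH 0xffff    /* stands in for EFX_EF10_FILTER_ID_INVALID */

    static void filter_remove_unsafe_sketch(unsigned int filter_id)
    {
        /* the check callers used to repeat now lives inside the helper */
        if (filter_id == FILTER_ID_INVALID_SKETCH)
            return;

        /* ... perform the actual filter removal ... */
    }

    /* callers shrink from
     *     if (id != EFX_EF10_FILTER_ID_INVALID)
     *             efx_ef10_filter_remove_unsafe(...);
     * to a plain efx_ef10_filter_remove_unsafe(...) call, and the return
     * value is no longer checked by anyone.
     */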
-
Martin Habets authored
When trying to enslave an SFC interface to a bond, the following BUG_ON was hit:

  kernel BUG [in ef10.c]!
  CPU: 0 PID: 4383 Comm: ifenslave Tainted: G ...
  Call Trace:
   efx_ef10_filter_add_vlan+0x121/0x180 [sfc]
   efx_ef10_filter_table_probe+0x2a2/0x4f0 [sfc]
   efx_ef10_set_mac_address+0x370/0x6d0 [sfc]
   efx_set_mac_address+0x7d/0x120 [sfc]
   dev_set_mac_address+0x43/0xa0
   bond_enslave+0x337/0xea0 [bonding]

This comes from the function efx_ef10_filter_vlan_sync_rx_mode. To solve the bug we ensure the mac_lock is taken before calling efx_ef10_filter_add_vlan. But to avoid a priority inversion, mac_lock must be taken before filter_sem. To satisfy these requirements we end up taking mac_lock in efx_ef10_vport_set_mac_address, efx_ef10_set_mac_address, efx_ef10_sriov_set_vf_vlan and efx_probe_filters. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
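A hedged sketch of the resulting lock ordering (the locks are passed in explicitly here; in the driver they live in the NIC state):

    #include <linux/mutex.h>
    #include <linux/rwsem.h>

    static void take_locks_in_order_sketch(struct mutex *mac_lock,
                                           struct rw_semaphore *filter_sem)
    {
        mutex_lock(mac_lock);        /* always taken first ... */
        down_write(filter_sem);      /* ... then the filter table lock */

        /* ... efx_ef10_filter_add_vlan() and friends run here ... */

        up_write(filter_sem);
        mutex_unlock(mac_lock);
    }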
-
Andrew Rybchenko authored
Support HW VLAN filtering, which can be enabled/disabled using ethtool. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
Right now it contains only a dummy VLAN entry with an unspecified VID. The entry is used for the case when HW VLAN filtering is not used. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
It is a step towards supporting VLAN filtering in HW. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
These flags are built when the address cache is updated. The information will be required when VLAN filtering is added and the address cache is used without a re-sync. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
It is a step towards supporting VLAN filtering in HW. Until then, there is only one struct efx_ef10_filter_vlan per struct efx_ef10_filter_table, with no VLAN information yet. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
This is required in order to remove the setting of filter IDs to invalid from the multicast and unicast address caching functions. Add initialization to invalid when the filter table is created. Add paranoid checks to track consistency. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Edward Cree authored
Based on a patch by Andrew Rybchenko <Andrew.Rybchenko@oktetlabs.ru> Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
It allows the set of fixed features to be changed on a datapath reset. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
It is used for EF10 only and logically belongs to EF10 filter table state. It is OK that it is reset to false on filter table recreation since all filters are removed on destruction. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Rybchenko authored
It is useful for simplifying the addition of features. Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge tag 'rxrpc-rewrite-20160615' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

David Howells says: ====================
rxrpc: Rework endpoint record handling

Here's the next part of the AF_RXRPC rewrite. In this set I rework endpoint record handling. There are two types of endpoint record, local and peer. The local endpoint record is used as an anchor for the transport socket that AF_RXRPC uses (at the moment a UDP socket). Local endpoints can be shared between AF_RXRPC sockets under certain restricted circumstances. The peer endpoint is a record of the remote end. It is (or will be) used to keep track of MTU and RTT values and, with these changes, is used to find the call(s) to abort when a network error occurs.

The following significant changes are made:

 (1) The local endpoint event handling code is split out into its own file.

 (2) The local endpoint list bottom-half-excluding spinlock is removed, as things are arranged such that sk_user_data will not change whilst the transport socket callbacks are in progress.

 (3) Local endpoints can now only be shared if they have the same transport address (as before) and have a local service ID of 0 (ie. they're not listening for incoming calls). This prevents callbacks from a server to one process being picked up by another process.

 (4) Local endpoint destruction is now accomplished by the same work item that processes events, meaning that the destructor doesn't need to wait for the event processor.

 (5) Peer endpoints are now held in a hash table rather than a flat list.

 (6) Peer endpoints are now destroyed by RCU rather than by work item.

 (7) Peer endpoints are now differentiated by local endpoint and remote transport port in addition to remote transport address and transport type and family (see the keying sketch below). This means that a firewall that excludes access between a particular local port and remote port won't cause calls to be aborted that use a different port pair.

 (8) Error report handling no longer assumes that the source is always an IPv4 ICMP message from a UDP port, and the assumptions that an ICMP message comes from an IPv4 socket have been removed. At some point IPv6 support will be added.

 (9) Peer endpoints rather than local endpoints are now the anchor point for distributing network error reports.

(10) Both types of endpoint record are now disposed of as soon as all references to them are gone. There is less hanging around, and once their usage counts hit zero, records can no longer be resurrected.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
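A keying sketch for point (7), with entirely assumed names (this is not the rxrpc code): the peer lookup key includes the owning local endpoint and the remote port alongside the remote address, so peers behind different port pairs hash to different records.

    #include <linux/hash.h>
    #include <linux/types.h>

    struct peer_key_sketch {
        const void *local;    /* owning local endpoint */
        u32 addr;             /* remote IPv4 transport address (host order) */
        u16 port;             /* remote transport port (host order) */
    };

    static u32 peer_hash_sketch(const struct peer_key_sketch *key)
    {
        u32 seed = (u32)(unsigned long)key->local;

        return hash_32(seed ^ key->addr ^ key->port, 10);    /* 10-bit bucket index */
    }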
-
David S. Miller authored
Raghu Vatsavayi says: ==================== liquidio: Updates and Bug fixes Following are updates for liquidio bug fixes and driver support for the new firmware interface. These updates are divided into smaller logical patches as requested. This set of nine patches should be applied in the following order, as some of them depend on earlier patches in the list. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
Added support for the new instruction header (ih) for octeon2/octeon3 and the corresponding changes. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch decouples the firmware-side ifidx from the host-side interface number. It also includes a minor name change for a linkinfo struct field. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch is for the new driver/firmware control command structures (octnic_packet_params and octnic_cmd_setup) and the resultant code changes. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch refactors packet size calculations to support PTP-enabled 66xx and 68xx cards as well as other cards that do not support PTP. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch adds page-based buffers for the receive-side descriptors of the driver and separate free routines for RX and TX buffers. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch allocates RX queue memory based on the NUMA node and also uses page-based buffers for RX traffic improvements. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
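A minimal, generic sketch of NUMA-aware allocation (not the liquidio code; in a PCI driver the node would typically come from dev_to_node()):

    #include <linux/slab.h>

    static void *alloc_queue_mem_sketch(size_t size, int numa_node)
    {
        /* allocate zeroed memory on the node closest to the device */
        return kzalloc_node(size, GFP_KERNEL, numa_node);
    }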
-
Raghu Vatsavayi authored
This patch allocates and manages scatter-gather lists per input queue (IQ) and removes the interdependence between queues. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch allocates the input queues based on the NUMA node in the TX path, and changes the queue mapping based on the mapping info provided by the firmware. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Raghu Vatsavayi authored
This patch resolves a double-free issue by checking the proper return values from the soft command. Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-