- 15 May, 2015 28 commits
-
-
David S. Miller authored
Tom Lendacky says:

====================
amd-xgbe: AMD XGBE driver updates 2015-05-12

The following series of patches includes functional updates and changes to the driver:

- Add additional statistics to be collected and reported
- Use the netif_* functions for issuing some debug and informational driver messages
- Rx path SKB allocation cleanup/simplification
- Remove the stand-alone phylib driver and incorporate its functionality into the NIC driver
- Simplify device tree support while maintaining backwards compatibility
- Fix the flow control negotiation logic to properly configure flow control
- Remove the checking and setting of the device dma_mask field

This patch series is based on net-next.

Changes in v2:
- Change from using the netif_msg_*/netdev_* combination for issuing messages to the more concise netif_*
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lendacky, Thomas authored
The underlying device support will set the device dma_mask pointer if DMA is set up properly for the device. Remove the check for and assignment of dma_mask when it is null. Instead, just error out if the dma_set_mask_and_coherent function fails because dma_mask is null. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
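A minimal sketch of the resulting probe-time pattern, assuming a hypothetical helper name and an illustrative 40-bit mask; dma_set_mask_and_coherent() already returns -EIO on its own when dev->dma_mask is NULL, so no manual check is needed:

#include <linux/device.h>
#include <linux/dma-mapping.h>

static int xgbe_set_dma_mask(struct device *dev)
{
        int ret;

        /* Fails with -EIO if dev->dma_mask was never set up by the
         * underlying bus/platform code */
        ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
        if (ret)
                dev_err(dev, "dma_set_mask_and_coherent failed\n");

        return ret;
}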
-
Lendacky, Thomas authored
The flow control negotiation logic is flawed and does not properly advertise and process auto-negotiation of the flow control settings. Update the flow control support to properly set the flow control auto-negotiation settings and process the results appropriately. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lendacky, Thomas authored
Simplify the device tree support of the amd-xgbe driver by defining the PHY-related resources within the ethernet device node. The support maintains backwards compatibility with the original representation. Update the driver version to 1.0.2. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lendacky, Thomas authored
The AMD XGBE device is intended to work with a specific integrated PHY, and that PHY is not meant to be a standalone PHY for use by other devices. As such, this patch removes the phylib driver and implements the PHY support in the amd-xgbe driver (the majority of the logic from the phylib driver is moved into the amd-xgbe driver). Update the driver version to 1.0.1. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lendacky, Thomas authored
Rework the SKB allocation so that all of the buffers of the first descriptor are handled in the SKB allocation routine. After copying the data in the header buffer (which can be just the header if split header processing succeeded, or the header plus data if split header processing did not succeed) into the SKB, check for remaining data in the receive buffer. If there is data remaining in the receive buffer, add that as a frag to the SKB. Once an SKB has been allocated, all other descriptors are added as frags to the SKB. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
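A hedged sketch of the reworked receive path; the helper name and parameters are illustrative, not the driver's actual API:

#include <linux/skbuff.h>
#include <linux/string.h>

static struct sk_buff *rx_build_skb(struct napi_struct *napi,
                                    const void *hdr, unsigned int hdr_len,
                                    struct page *page, unsigned int offset,
                                    unsigned int remaining)
{
        struct sk_buff *skb;

        skb = napi_alloc_skb(napi, hdr_len);
        if (!skb)
                return NULL;

        /* Copy the header buffer: just the header if split-header
         * processing succeeded, header plus data if it did not. */
        memcpy(skb_put(skb, hdr_len), hdr, hdr_len);

        /* Data remaining in the receive buffer becomes a frag; all
         * buffers of later descriptors are added the same way. */
        if (remaining)
                skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
                                offset, remaining, remaining);

        return skb;
}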
-
Lendacky, Thomas authored
Add support for the network interface message level settings for determining whether to issue some of the driver messages. Make use of the netif_* interface where appropriate. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
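The pattern in miniature, with a hypothetical private structure; netif_dbg() and friends test a msg_enable bitmap carried in the driver's private data:

#include <linux/netdevice.h>

struct xyz_priv {                       /* hypothetical private data */
        struct net_device *netdev;
        u32 msg_enable;                 /* NETIF_MSG_* bitmap */
};

static void xyz_init_messaging(struct xyz_priv *priv, int debug)
{
        /* Seed msg_enable from a module parameter, with defaults */
        priv->msg_enable = netif_msg_init(debug, NETIF_MSG_DRV |
                                          NETIF_MSG_PROBE | NETIF_MSG_LINK);

        /* Emitted only when the NETIF_MSG_DRV bit is set */
        netif_dbg(priv, drv, priv->netdev,
                  "message level set to %#x\n", priv->msg_enable);
}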
-
Lendacky, Thomas authored
Add additional/extended statistics, beyond what the hardware provides, to be reported via ethtool. The new stats focus on the calls into ndo_start_xmit and the napi_poll routine. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
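The usual ethtool plumbing such stats hang off, sketched with hypothetical counters; the real patch's counter names and structures differ:

#include <linux/ethtool.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <linux/string.h>

static const char xyz_stats_strings[][ETH_GSTRING_LEN] = {
        "tx_start_xmit",                /* illustrative counter names */
        "rx_napi_poll",
};

static int xyz_get_sset_count(struct net_device *netdev, int stringset)
{
        if (stringset != ETH_SS_STATS)
                return -EOPNOTSUPP;
        return ARRAY_SIZE(xyz_stats_strings);
}

static void xyz_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
{
        if (stringset == ETH_SS_STATS)
                memcpy(data, xyz_stats_strings, sizeof(xyz_stats_strings));
}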
-
David S. Miller authored
Joachim Eastwood says:

====================
convert stmmac glue layers into platform drivers

This patch set aims to convert the current dwmac glue layers into proper platform drivers as requested by Arnd[1]. These changes start from patch 3 and onwards.

Overview: Platform driver functions like probe and remove are exported from the stmmac platform code and then used in the subsequent glue layer conversions. Each conversion involves adding the platform driver boilerplate code and adding it to the build system. The last patch removes the driver from the stmmac platform code, turning it into a library of common platform driver functions.

The first two patches add a glue layer for my platform. I chose to first add an old-style glue layer and then convert it; the churn this creates is just 3 lines.

It would be very nice if people could test this patch set on their respective platforms. My testing has been limited to compiling and testing on my (LPC18xx) platform. Thanks!

Next I will look into cleaning up the stmmac platform code.

[1] http://marc.info/?l=linux-arm-kernel&m=143059524606459&w=2
====================

Tested-by: Chen-Yu Tsai <wens@csie.org> Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
The dwmac-generic driver replaces the driver inside the stmmac platform code. This turns the stmmac platform code into a library of common platform driver functions used by the glue drivers. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
Convert platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
Convert platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
Convert platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
Convert platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
Convert platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
Convert platform glue layer into a proper platform driver and add it to the build system. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
Create a new driver around the generic device tree match strings in the stmmac platform code. This driver is intended to be used by all platforms that don't require any platform-specific code to function, or that use platform data. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
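The shape of such a driver, as a sketch: probe/remove come from the stmmac platform library exported in the next patch of the series, and the match table here is abridged and illustrative:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include "stmmac_platform.h"            /* stmmac_pltfr_probe/_remove */

static const struct of_device_id dwmac_generic_match[] = {
        { .compatible = "st,spear600-gmac" },
        { .compatible = "snps,dwmac" },
        { }
};
MODULE_DEVICE_TABLE(of, dwmac_generic_match);

static struct platform_driver dwmac_generic_driver = {
        .probe  = stmmac_pltfr_probe,
        .remove = stmmac_pltfr_remove,
        .driver = {
                .name           = "dwmac-generic",
                .of_match_table = of_match_ptr(dwmac_generic_match),
        },
};
module_platform_driver(dwmac_generic_driver);

MODULE_LICENSE("GPL");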
-
Joachim Eastwood authored
Prepare the stmmac platform code to support standalone drivers by exporting the needed functions and having of_match_device use the match table reference already present in the driver struct. This will allow the platform driver functions in this code to be easily reused by other standalone platform drivers. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
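A sketch of the of_match_device change, with an illustrative helper name: the table is taken from the driver struct rather than a file-local symbol, so the same probe code serves any glue driver that reuses it:

#include <linux/of_device.h>
#include <linux/platform_device.h>

static const void *pltfr_get_match_data(struct platform_device *pdev)
{
        const struct of_device_id *match;

        match = of_match_device(pdev->dev.driver->of_match_table,
                                &pdev->dev);
        return match ? match->data : NULL;
}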
-
Joachim Eastwood authored
Add device tree binding documentation for nxp,lpc1850-dwmac. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joachim Eastwood authored
Add support for Ethernet on NXP LPC18xx and LPC43xx using the dwmac driver. This glue is required to set up the PHY interface mode (MII or RMII) on the SoC. Signed-off-by: Joachim Eastwood <manabian@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
sixiao@microsoft.com authored
Current code does not lock anything when calculating the TX and RX stats. As a result, the RX and TX data reported by ifconfig are not accurate in a system with high network throughput and multiple CPUs (in my test, RX/TX = 83% between 2 Hyper-V VM nodes which have 8 vCPUs and 40G Ethernet). This patch fixes the above issue by using per_cpu stats: netvsc_get_stats64() summarizes TX and RX data by iterating over all CPUs to get their respective stats. This v2 patch addresses David's comments on the cleanup path when netdev_alloc_pcpu_stats() fails. Signed-off-by: Simon Xiao <sixiao@microsoft.com> Reviewed-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
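A sketch of the per-cpu pattern using the generic pcpu_sw_netstats/tstats fields; netvsc defines its own per-cpu structures, so treat the names here as illustrative:

#include <linux/netdevice.h>
#include <linux/percpu.h>
#include <linux/u64_stats_sync.h>

static struct rtnl_link_stats64 *xyz_get_stats64(struct net_device *dev,
                                                 struct rtnl_link_stats64 *t)
{
        int cpu;

        /* dev->tstats would be allocated at probe time with
         * netdev_alloc_pcpu_stats(struct pcpu_sw_netstats) */
        for_each_possible_cpu(cpu) {
                const struct pcpu_sw_netstats *s =
                        per_cpu_ptr(dev->tstats, cpu);
                u64 rx_p, rx_b, tx_p, tx_b;
                unsigned int start;

                do {    /* re-read if a writer raced with us */
                        start = u64_stats_fetch_begin_irq(&s->syncp);
                        rx_p = s->rx_packets;
                        rx_b = s->rx_bytes;
                        tx_p = s->tx_packets;
                        tx_b = s->tx_bytes;
                } while (u64_stats_fetch_retry_irq(&s->syncp, start));

                t->rx_packets += rx_p;
                t->rx_bytes   += rx_b;
                t->tx_packets += tx_p;
                t->tx_bytes   += tx_b;
        }

        return t;
}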
-
Michael Holzheu authored
Fix several sparse warnings like:

lib/test_bpf.c:1824:25: sparse: constant 4294967295 is so big it is long
lib/test_bpf.c:1878:25: sparse: constant 0x0000ffffffff0000 is so big it is long

Fixes: cffc642d ("test_bpf: add 173 new testcases for eBPF") Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
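The class of fix, in miniature (variable names are illustrative): constants wider than 32 bits get an explicit suffix so sparse does not flag the implicit promotion:

#include <linux/types.h>

static const u64 mask_warns = 0x0000ffffffff0000;    /* sparse: "is so big it is long" */
static const u64 mask_clean = 0x0000ffffffff0000ULL; /* explicit width, no warning */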
-
Florian Westphal authored
commit d2788d34 ("net: sched: further simplify handle_ing") removed the call to qdisc_enqueue_root(). However, after this removal we no longer set the qdisc pkt length. This breaks traffic policing on ingress.

This is the minimum fix: set the qdisc pkt length before tc_classify. Only setting the length does remove support for 'stab' on ingress, but as Alexei pointed out: "Though it was allowed to add qdisc_size_table to ingress, it's useless. Nothing takes advantage of recomputed qdisc_pkt_len."

Jamal suggested using qdisc_pkt_len_init(), but as Eric mentioned, that would result in qdisc_pkt_len_init no longer being inlined due to the additional 2nd call site.

Ingress policing is rare, and GRO doesn't really work well with police on ingress anyway: we see packets > MTU and drop skbs that -- without aggregation -- would still have fit the policer budget. Thus, to have reliable/smooth ingress policing, GRO has to be turned off.

Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Fixes: d2788d34 ("net: sched: further simplify handle_ing") Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
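The core of the fix fits in one statement; a sketch of it as a helper (the real change is inline in handle_ing):

#include <net/sch_generic.h>

static inline void ingress_set_pkt_len(struct sk_buff *skb)
{
        /* Record the packet length in the qdisc control block before
         * tc_classify() so an ingress policer accounts it correctly. */
        qdisc_skb_cb(skb)->pkt_len = skb->len;
}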
-
Nicolas Dichtel authored
Unlock was missing on error path. Fixes: 95f38411 ("netns: use a spin_lock to protect nsid management") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Bert Vermeulen authored
This also changes mii_bus.phy_mask to u32 for consistency. Signed-off-by: Bert Vermeulen <bert@biot.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Daniel Borkmann authored
Couple of torture test cases related to the bug fixed in 0b59d880 ("ARM: net: delegate filter to kernel interpreter when imm_offset() return value can't fit into 12bits."). I've added a helper to allocate and fill the insn space.

Output on x86_64 from my laptop:

test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:0 7 PASS
test_bpf: #234 BPF_MAXINSNS: Single literal jited:0 8 PASS
test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:0 11553 PASS
test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
test_bpf: #237 BPF_MAXINSNS: Very long jump jited:0 9 PASS
test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:0 20329 20398 PASS
test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:0 32178 32475 PASS
test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:0 10518 PASS
test_bpf: #233 BPF_MAXINSNS: Maximum possible literals jited:1 4 PASS
test_bpf: #234 BPF_MAXINSNS: Single literal jited:1 4 PASS
test_bpf: #235 BPF_MAXINSNS: Run/add until end jited:1 1625 PASS
test_bpf: #236 BPF_MAXINSNS: Too many instructions PASS
test_bpf: #237 BPF_MAXINSNS: Very long jump jited:1 8 PASS
test_bpf: #238 BPF_MAXINSNS: Ctx heavy transformations jited:1 3301 3174 PASS
test_bpf: #239 BPF_MAXINSNS: Call heavy transformations jited:1 24107 23491 PASS
test_bpf: #240 BPF_MAXINSNS: Jump heavy test jited:1 8651 PASS

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Alexei Starovoitov <ast@plumgrid.com> Cc: Nicolas Schichan <nschichan@freebox.fr> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Now that we allow storing more request socks per listener, we might hit syncookie mode less often and hit the following bug in our stack: when we send a burst of syncookies and then exit this mode, tcp_synq_no_recent_overflow() can return false if the ACK packets coming from clients arrive three seconds after the end of the syncookie episode. This is way too strong a requirement and conflicts with the rest of the syncookie code, which allows ACKs to be aged up to 2 minutes. Perfectly valid ACK packets are dropped just because clients might be in a crowded wifi environment or on another planet. So let's fix this, and also change tcp_synq_overflow() to not dirty a cache line for every syncookie we send while we are under attack. Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Florian Westphal <fw@strlen.de> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
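A sketch of both ideas with illustrative names and an assumed 2-minute validity window, matching the aging the rest of the syncookie code allows; this is not the actual tcp_synq_* implementation:

#include <linux/jiffies.h>
#include <linux/types.h>

#define SYNQ_OVERFLOW_VALID     (2 * 60 * HZ)   /* assumed window */

/* Avoid dirtying the cache line once per cookie while under attack */
static void synq_mark_overflow(unsigned long *last_overflow)
{
        if (*last_overflow != jiffies)
                *last_overflow = jiffies;
}

/* "Recent" now spans the full window in which a cookie ACK may
 * legitimately arrive, not just a few seconds after the episode */
static bool synq_no_recent_overflow(unsigned long last_overflow)
{
        return time_after(jiffies, last_overflow + SYNQ_OVERFLOW_VALID);
}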
-
Alexander Duyck authored
The rx_dropped stat wasn't being reported when ip_tunnel_get_stats64 was called. This was leading to some confusing results in my debugging, as I was seeing rx_errors increment but no other value that pointed me toward the type of error being seen. This change corrects that by using netdev_stats_to_stats64 to copy all available dev stats instead of just the few that were hand-picked. Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
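The shape of the change (helper name illustrative); netdev_stats_to_stats64() copies the whole net_device_stats block, rx_dropped included:

#include <linux/netdevice.h>

static struct rtnl_link_stats64 *tunnel_get_stats64(struct net_device *dev,
                                                    struct rtnl_link_stats64 *tot)
{
        /* Copy every dev->stats field instead of hand-picking a few */
        netdev_stats_to_stats64(tot, &dev->stats);

        /* ...the tunnel's per-cpu rx/tx counters are then added on top */
        return tot;
}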
-
- 14 May, 2015 12 commits
-
-
Willem de Bruijn authored
Avoid two xchg calls whose return values were unused, causing a warning on some architectures. The relevant variable is a hint and read without mutual exclusion. This fix makes all writers hold the receive_queue lock. Suggested-by: David S. Miller <davem@davemloft.net> Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
françois romieu authored
None of those drivers uses last_rx for its own needs. See 4dc89133 ("net: add a comment on netdev->last_rx") for reference. Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Zhangfei Gao <zhangfei.gao@linaro.org> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Cc: Wingman Kwok <w-kwok2@ti.com> Cc: Murali Karicheri <m-karicheri2@ti.com> Cc: Chris Metcalf <cmetcalf@tilera.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Florian Fainelli says:

====================
net: phy: broken turn-around support

This is an attempt at solving the broken turn-around problem in a way that is not specific to the mdio-gpio driver, since it affects different kinds of platforms. We cannot make this local to PHY device drivers, because probing a PHY device which has a broken turn-around can fail as early as get_phy_id(); therefore we need a bit of help from Device Tree/platform_data.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
Update mdiobb_read() to check whether the PHY has a broken turn-around and, if so, ignore it to make the read succeed. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
Some Ethernet PHY devices/switches may not properly release the MDIO bus during turn-around time, and fail to drive it low, which can be seen by some controllers as a read failure while the data clocked in is still correct. Add a boolean property "broken-turn-around" which is parsed by the generic MDIO bus probing code and sets the corresponding bit in the MDIO bus phy_ignore_ta_mask bitmask, so MDIO bus drivers can utilize that information. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
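A sketch of the parsing step, assuming it runs per PHY child node during MDIO bus registration; the helper name is illustrative:

#include <linux/of.h>
#include <linux/phy.h>

static void mdio_check_broken_ta(struct mii_bus *bus,
                                 struct device_node *child, u32 addr)
{
        /* A PHY flagged in DT never has its turn-around checked */
        if (of_property_read_bool(child, "broken-turn-around"))
                bus->phy_ignore_ta_mask |= 1 << addr;
}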
-
Florian Fainelli authored
Some PHY devices/switches will not release the turn-around line as they should at the end of an MDIO transaction. To help with such situations, allow MDIO bus drivers to be made aware of such restrictions. Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ying Xue authored
After commit eeb1bd5c ("net: Add a struct net parameter to sock_create_kern"), we should use sock_create_kern() to create kernel sockets, as the interface doesn't reference-count struct net any more. Signed-off-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
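Usage after the interface change, sketched with an illustrative wrapper; the namespace is passed straight in and not reference-counted:

#include <linux/in.h>
#include <linux/net.h>
#include <net/net_namespace.h>

static int xyz_create_kern_sock(struct net *net, struct socket **sockp)
{
        /* The caller's netns is used directly; no get_net()/put_net() */
        return sock_create_kern(net, AF_INET, SOCK_DGRAM, IPPROTO_UDP, sockp);
}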
-
Brian Haley authored
Fix compile error in net/sched/cls_flower.c:

net/sched/cls_flower.c: In function ‘fl_set_key’:
net/sched/cls_flower.c:240:3: error: implicit declaration of function ‘tcf_change_indev’ [-Werror=implicit-function-declaration]
   err = tcf_change_indev(net, tb[TCA_FLOWER_INDEV]);

Introduced in 77b9900e. Fixes: 77b9900e ("tc: introduce Flower classifier") Signed-off-by: Brian Haley <brian.haley@hp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Jon Maloy says:

====================
tipc: some link layer improvements

We continue eliminating redundant complexity at the link layer, and add a couple of improvements to the packet sending functionality.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jon Paul Maloy authored
Currently, the packet sequence number is updated and added to each packet at the moment a packet is added to the link backlog queue. This is wasteful, since it forces the code to traverse the send packet list packet by packet when adding them to the backlog queue. It would be better to just splice the whole packet list into the backlog queue when that is the right action to do. In this commit, we make this change. Also, since the sequence numbers can no longer be assigned to the packets at the moment they are added to the backlog queue, we instead calculate and add them at the moment of transmission, when the backlog queue has to be traversed anyway. We do this in the function tipc_link_push_packet(). Reviewed-by: Erik Hugne <erik.hugne@ericsson.com> Reviewed-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
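The splice itself, sketched with illustrative names; sequence numbers are stamped later in tipc_link_push_packet(), where the backlog has to be walked anyway:

#include <linux/skbuff.h>

static void link_backlog_splice(struct sk_buff_head *backlog,
                                struct sk_buff_head *send_list)
{
        /* Move the whole send list in one operation instead of
         * traversing it packet by packet; seqnos are assigned later,
         * at transmission time. */
        skb_queue_splice_tail_init(send_list, backlog);
}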
-
Jon Paul Maloy authored
The link congestion algorithm used until now implies two problems.

- It is too generous towards lower-level messages in situations of high load by giving "absolute" bandwidth guarantees to the different priority levels. LOW traffic is guaranteed 10%, MEDIUM is guaranteed 20%, HIGH is guaranteed 30%, and CRITICAL is guaranteed 40% of the available bandwidth. But, in the absence of higher-level traffic, the ratio between two distinct levels becomes unreasonable. E.g. if there is only LOW and MEDIUM traffic on a system, the former is guaranteed 1/3 of the bandwidth, and the latter 2/3. This again means that if there is e.g. one LOW user and 10 MEDIUM users, the former will have 33.3% of the bandwidth, and the others will have to compete for the remainder, i.e. each will end up with 6.7% of the capacity.

- Packets of type MSG_BUNDLER are created at SYSTEM importance level, but only after the packets bundled into it have passed the congestion test for their own respective levels. Since bundled packets don't result in incrementing the level counter for their own importance, only occasionally for the SYSTEM level counter, they do in practice obtain SYSTEM level importance. Hence, the current implementation provides a gap in the congestion algorithm that in the worst case may lead to a link reset.

We now refine the congestion algorithm as follows (a small sketch follows this message):

- A message is accepted to the link backlog only if its own level counter, and all superior level counters, permit it.

- The importance of a created bundle packet is set according to its contents. A bundle packet created from messages at levels LOW to CRITICAL is given importance level CRITICAL, while a bundle created from a SYSTEM level message is given importance SYSTEM. In the latter case only subsequent SYSTEM level messages are allowed to be bundled into it.

This solves the first problem described above, by making the bandwidth guarantee relative to the total number of users at all levels; only the upper limit for each level remains absolute. In the example described above, the single LOW user would use 1/11th of the bandwidth, the same as each of the ten MEDIUM users, but he still has the same guarantee against starvation as the latter ones.

The fix also solves the second problem. If the CRITICAL level is filled up by bundle packets of that level, no lower level packets will be accepted any more.

Suggested-by: Gergely Kiss <gergely.kiss@ericsson.com> Reviewed-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
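A sketch of the refined admission rule with illustrative fields (not TIPC's actual structures): a message of importance imp is accepted only while its own backlog level and every superior level are under their limits:

#include <linux/types.h>

static bool link_may_enqueue(const unsigned int *backlog_len,
                             const unsigned int *backlog_limit,
                             int imp, int levels)
{
        int lv;

        for (lv = imp; lv < levels; lv++)
                if (backlog_len[lv] >= backlog_limit[lv])
                        return false;   /* congested at or above imp */
        return true;
}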
-
Jon Paul Maloy authored
We change the sequence number checkpointing that is performed by the timer in order to discover if the peer is active. Currently, we store a checkpoint of the next expected sequence number "rcv_nxt" at each timer expiration, and compare it to the current expected number at the next timeout expiration. Instead, we now use the already existing field "silent_intv_cnt" for this task. We step the counter at each timeout expiration, and zero it at each valid received packet. If no valid packet has been received from the peer after "abort_limit" number of silent timer intervals, the link is declared faulty and reset. We also remove the multiple instances of timer activation from inside the FSM function "link_state_event()", and now do it in only one place: at the end of the timer function itself. Reviewed-by: Erik Hugne <erik.hugne@ericsson.com> Reviewed-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
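The supervision scheme in miniature, with illustrative names; the real fields live in TIPC's link structure:

#include <linux/types.h>

struct link_sketch {
        unsigned int silent_intv_cnt;   /* stepped at each timeout */
        unsigned int abort_limit;       /* silent intervals tolerated */
};

/* Timer path: returns false once the peer has been silent for
 * abort_limit consecutive intervals, i.e. the link must reset. */
static bool link_peer_alive(struct link_sketch *l)
{
        return l->silent_intv_cnt++ < l->abort_limit;
}

/* Receive path: any valid packet from the peer zeroes the counter. */
static void link_note_activity(struct link_sketch *l)
{
        l->silent_intv_cnt = 0;
}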
-