1. 04 Aug, 2017 2 commits
    • mlxsw: spectrum_switchdev: Release multicast groups during fini · 852cfeed
      Ido Schimmel authored
      Each multicast group (MID) stores a bitmap of the ports to which a packet
      should be forwarded when an MDB entry associated with the MID is hit.
      
      Since the initial introduction of IGMP snooping in commit 3a49b4fd
      ("mlxsw: Adding layer 2 multicast support"), the driver did not correctly
      free these multicast groups in ungraceful situations such as the
      removal of the upper bridge device or module removal.
      
      The correct way to fix this is to associate each MID with the bridge
      ports that are members of it and drop the reference when a bridge port
      is destroyed, but that requires considerably more code and will be done
      in net-next.
      
      For now, upon module removal, traverse the MID list and release each
      one.
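
      A minimal sketch of that teardown loop, assuming a MID list hanging off
      the main driver structure (struct and field names are approximations of
      the driver's internals, not necessarily the exact patch):

          /* Sketch: on fini, walk any remaining MIDs and release them. */
          static void mlxsw_sp_mids_fini(struct mlxsw_sp *mlxsw_sp)
          {
                  struct mlxsw_sp_mid *mid, *tmp;

                  list_for_each_entry_safe(mid, tmp, &mlxsw_sp->br_mids.list, list) {
                          list_del(&mid->list);
                          clear_bit(mid->mid, mlxsw_sp->br_mids.mapped);
                          kfree(mid);
                  }
          }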
      
      Fixes: 3a49b4fd ("mlxsw: Adding layer 2 multicast support")
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mlxsw: spectrum_switchdev: Don't warn about valid situations · 17b334a8
      Ido Schimmel authored
      Some operations in the bridge driver, such as MDB deletion, are performed
      in an atomic context and thus deferred to process context by the
      switchdev infrastructure.
      
      Therefore, by the time the operation is performed by the underlying
      device driver, it's possible that the bridge port context is already gone.
      This is especially true for removal flows, but it can theoretically also
      happen during addition.
      
      Remove the warnings in such situations and return normally.
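
      In rough terms the change looks like this (the lookup helper and its
      arguments are assumptions used for illustration):

          bridge_port = mlxsw_sp_bridge_port_find(bridge, brport_dev);
          if (!bridge_port)
                  return 0;   /* deferred op raced with removal: not an error */
          /* previously: if (WARN_ON(!bridge_port)) return -EINVAL; */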
      
      Fixes: c57529e1 ("mlxsw: spectrum: Replace vPorts with Port-VLAN")
      Fixes: 3922285d ("net: bridge: Add support for offloading port attributes")
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 03 Aug, 2017 7 commits
    • Merge branch 'tcp-xmit-timer-rearming' · 337f1b07
      David S. Miller authored
      Neal Cardwell says:
      
      ====================
      tcp: fix xmit timer rearming to avoid stalls
      
      This patch series is a bug fix for a TCP loss recovery performance bug
      reported independently in recent netdev threads:
      
       (i)  July 26, 2017: netdev thread "TCP fast retransmit issues"
       (ii) July 26, 2017: netdev thread:
             "[PATCH V2 net-next] TLP: Don't reschedule PTO when there's one
             outstanding TLP retransmission"
      
      Many thanks to Klavs Klavsen and Mao Wenan for the detailed reports,
      traces, and packetdrill test cases, which enabled us to root-cause
      this issue and verify the fix.
      
      - v1 -> v2:
       - In patch 2/3, changed an unclear comment in the pre-existing code
         in tcp_schedule_loss_probe() to be more clear (thanks to Eric Dumazet
         for suggesting we improve this).
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: fix xmit timer to only be reset if data ACKed/SACKed · df92c839
      Neal Cardwell authored
      Fix a TCP loss recovery performance bug raised recently on the netdev
      list, in two threads:
      
      (i)  July 26, 2017: netdev thread "TCP fast retransmit issues"
      (ii) July 26, 2017: netdev thread:
           "[PATCH V2 net-next] TLP: Don't reschedule PTO when there's one
           outstanding TLP retransmission"
      
      The basic problem is that incoming TCP packets that did not indicate
      forward progress could cause the xmit timer (TLP or RTO) to be rearmed
      and pushed back in time. In certain corner cases this could result in
      the following problems noted in these threads:
      
       - Repeated ACKs coming in with bogus SACKs corrupted by middleboxes
         could cause TCP to repeatedly schedule TLPs forever. We kept
         sending TLPs after every ~200ms, which elicited bogus SACKs, which
         caused more TLPs, ad infinitum; we never fired an RTO to fill in
         the holes.
      
       - Incoming data segments could, in some cases, cause us to reschedule
         our RTO or TLP timer further out in time, for no good reason. This
         could cause repeated inbound data to result in stalls in outbound
         data, in the presence of packet loss.
      
      This commit fixes these bugs by changing the TLP and RTO ACK
      processing to:
      
       (a) Only reschedule the xmit timer once per ACK.
      
       (b) Only reschedule the xmit timer if tcp_clean_rtx_queue() deems the
           ACK indicates sufficient forward progress (a packet was
           cumulatively ACKed, or we got a SACK for a packet that was sent
           before the most recent retransmit of the write queue head).
      
      This brings us back into closer compliance with the RFCs, since, as
      the comment for tcp_rearm_rto() notes, we should only restart the RTO
      timer after forward progress on the connection. Previously we were
      restarting the xmit timer even in these cases where there was no
      forward progress.
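
      Roughly, the rearming collapses into a single helper along these lines
      (a sketch based on the description above, not necessarily the exact
      patch):

          /* Called at most once per ACK, and only when tcp_clean_rtx_queue()
           * reported forward progress (e.g. FLAG_DATA_ACKED/FLAG_ORIG_SACK_ACKED).
           */
          static void tcp_set_xmit_timer(struct sock *sk)
          {
                  if (!tcp_schedule_loss_probe(sk))
                          tcp_rearm_rto(sk);
          }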
      
      As a side benefit, this commit simplifies and speeds up the TCP timer
      arming logic. We had been calling inet_csk_reset_xmit_timer() three
      times on normal ACKs that cumulatively acknowledged some data:
      
      1) Once near the top of tcp_ack() to switch from TLP timer to RTO:
              if (icsk->icsk_pending == ICSK_TIME_LOSS_PROBE)
                     tcp_rearm_rto(sk);
      
      2) Once in tcp_clean_rtx_queue(), to update the RTO:
              if (flag & FLAG_ACKED) {
                     tcp_rearm_rto(sk);
      
      3) Once in tcp_ack() after tcp_fastretrans_alert() to switch from RTO
         to TLP:
              if (icsk->icsk_pending == ICSK_TIME_RETRANS)
                     tcp_schedule_loss_probe(sk);
      
      This commit, by only rescheduling the xmit timer once per ACK,
      simplifies the code and reduces CPU overhead.
      
      This commit was tested in an A/B test with Google web server
      traffic. SNMP stats and request latency metrics were within noise
      levels, substantiating that for normal web traffic patterns this is a
      rare issue. This commit was also tested with packetdrill tests to
      verify that it fixes the timer behavior in the corner cases discussed
      in the netdev threads mentioned above.
      
      This patch is a bug fix intended to be queued for -stable releases.
      
      Fixes: 6ba8a3b1 ("tcp: Tail loss probe (TLP)")
      Reported-by: Klavs Klavsen <kl@vsen.dk>
      Reported-by: Mao Wenan <maowenan@huawei.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: enable xmit timer fix by having TLP use time when RTO should fire · a2815817
      Neal Cardwell authored
      Have tcp_schedule_loss_probe() base the TLP scheduling decision on when
      the RTO *should* fire. This is to enable the upcoming xmit
      timer fix in this series, where tcp_schedule_loss_probe() cannot
      assume that the last timer installed was an RTO timer (because we are
      no longer doing the "rearm RTO, rearm RTO, rearm TLP" dance on every
      ACK). So tcp_schedule_loss_probe() must independently figure out when
      an RTO would want to fire.
      
      In the new TLP implementation following in this series, we cannot
      assume that icsk_timeout was set based on an RTO; after processing a
      cumulative ACK the icsk_timeout we see can be from a previous TLP or
      RTO. So we need to independently recalculate the RTO time (instead of
      reading it out of icsk_timeout). Removing this dependency on the
      nature of icsk_timeout makes things a little easier to reason about
      anyway.
      
      Note that the old and new code should be equivalent, since they are
      both saying: "if the RTO is in the future, but at an earlier time than
      the normal TLP time, then set the TLP timer to fire when the RTO would
      have fired".
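
      The comparison in tcp_schedule_loss_probe() then looks roughly like the
      following sketch, using the tcp_rto_delta_us() helper introduced earlier
      in the series (listed below):

          /* If the RTO would fire sooner than the normal TLP time, fire the
           * TLP when the RTO would have fired.
           */
          rto_delta_us = tcp_rto_delta_us(sk);  /* how far in the future the RTO is */
          if (rto_delta_us > 0)
                  timeout = min_t(u32, timeout, usecs_to_jiffies(rto_delta_us));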
      
      Fixes: 6ba8a3b1 ("tcp: Tail loss probe (TLP)")
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: introduce tcp_rto_delta_us() helper for xmit timer fix · e1a10ef7
      Neal Cardwell authored
      Pure refactor. This helper will be required in the xmit timer fix
      later in the patch series. (Because the TLP logic will want to make
      this calculation.)
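
      A rough sketch of what such a helper computes (field names reflect the
      TCP stack of that era and are approximate):

          /* Microseconds from "now" until the RTO of the write queue head
           * would expire; negative if it is already overdue.
           */
          static inline s64 tcp_rto_delta_us(const struct sock *sk)
          {
                  const struct sk_buff *skb = tcp_write_queue_head(sk);
                  u32 rto = inet_csk(sk)->icsk_rto;
                  u64 rto_time_stamp_us = skb->skb_mstamp + jiffies_to_usecs(rto);

                  return rto_time_stamp_us - tcp_sk(sk)->tcp_mstamp;
          }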
      
      Fixes: 6ba8a3b1 ("tcp: Tail loss probe (TLP)")
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: set rt6i_protocol properly in the route when it is installed · b91d5329
      Xin Long authored
      After commit c2ed1880 ("net: ipv6: check route protocol when
      deleting routes"), the ipv6 code checks the route protocol when trying
      to remove a route entry.
      
      This introduced a side effect that causes 'ip -6 route flush cache' not
      to work well. When flushing caches with iproute, all route caches are
      dumped from the kernel and then removed one by one by sending a DELROUTE
      request to the kernel for each cache.
      
      The problem is that iproute sends the request with the cache's proto
      set to RTPROT_REDIRECT by rt6_fill_node() when the kernel dumps it,
      while in the kernel the rt_cache protocol is still 0, so the cache is
      not matched and not removed.
      
      So the real reason is that rt6i_protocol is not set in the route when
      it is allocated. Per David Ahern's suggestion, this patch sets
      rt6i_protocol properly in the route when it is installed and removes
      the code that sets rtm_protocol according to rt6i_flags in
      rt6_fill_node.
      
      This is also an improvement to keep rt6i_protocol consistent with
      rtm_protocol.
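
      The core of the change, sketched:

          /* ip6_route_info_create(): record the protocol up front ... */
          rt->rt6i_protocol = cfg->fc_protocol;

          /* ... so rt6_fill_node() can report rt->rt6i_protocol directly
           * instead of rewriting rtm_protocol based on rt6i_flags.
           */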
      
      Fixes: c2ed1880 ("net: ipv6: check route protocol when deleting routes")
      Reported-by: Jianlin Shi <jishi@redhat.com>
      Suggested-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: fix keepalive code vs TCP_FASTOPEN_CONNECT · 2dda6400
      Eric Dumazet authored
      syzkaller was able to trigger a divide by 0 in TCP stack [1]
      
      The issue here is that the keepalive timer needs to be updated so that it
      does not attempt to send a probe if the connection setup was deferred
      using the TCP_FASTOPEN_CONNECT socket option added in linux-4.11.
      
      [1]
       divide error: 0000 [#1] SMP
       CPU: 18 PID: 0 Comm: swapper/18 Not tainted
       task: ffff986f62f4b040 ti: ffff986f62fa2000 task.ti: ffff986f62fa2000
       RIP: 0010:[<ffffffff8409cc0d>]  [<ffffffff8409cc0d>] __tcp_select_window+0x8d/0x160
       Call Trace:
        <IRQ>
        [<ffffffff8409d951>] tcp_transmit_skb+0x11/0x20
        [<ffffffff8409da21>] tcp_xmit_probe_skb+0xc1/0xe0
        [<ffffffff840a0ee8>] tcp_write_wakeup+0x68/0x160
        [<ffffffff840a151b>] tcp_keepalive_timer+0x17b/0x230
        [<ffffffff83b3f799>] call_timer_fn+0x39/0xf0
        [<ffffffff83b40797>] run_timer_softirq+0x1d7/0x280
        [<ffffffff83a04ddb>] __do_softirq+0xcb/0x257
        [<ffffffff83ae03ac>] irq_exit+0x9c/0xb0
        [<ffffffff83a04c1a>] smp_apic_timer_interrupt+0x6a/0x80
        [<ffffffff83a03eaf>] apic_timer_interrupt+0x7f/0x90
        <EOI>
        [<ffffffff83fed2ea>] ? cpuidle_enter_state+0x13a/0x3b0
        [<ffffffff83fed2cd>] ? cpuidle_enter_state+0x11d/0x3b0
      
      Tested:
      
      The following packetdrill script no longer crashes the kernel:
      
      `echo 0 >/proc/sys/net/ipv4/tcp_timestamps`
      
      // Cache warmup: send a Fast Open cookie request
          0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
         +0 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
         +0 setsockopt(3, SOL_TCP, TCP_FASTOPEN_CONNECT, [1], 4) = 0
         +0 connect(3, ..., ...) = -1 EINPROGRESS (Operation is now in progress)
         +0 > S 0:0(0) <mss 1460,nop,nop,sackOK,nop,wscale 8,FO,nop,nop>
       +.01 < S. 123:123(0) ack 1 win 14600 <mss 1460,nop,nop,sackOK,nop,wscale 6,FO abcd1234,nop,nop>
         +0 > . 1:1(0) ack 1
         +0 close(3) = 0
         +0 > F. 1:1(0) ack 1
         +0 < F. 1:1(0) ack 2 win 92
         +0 > .  2:2(0) ack 2
      
         +0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 4
         +0 fcntl(4, F_SETFL, O_RDWR|O_NONBLOCK) = 0
         +0 setsockopt(4, SOL_TCP, TCP_FASTOPEN_CONNECT, [1], 4) = 0
         +0 setsockopt(4, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
       +.01 connect(4, ..., ...) = 0
         +0 setsockopt(4, SOL_TCP, TCP_KEEPIDLE, [5], 4) = 0
         +10 close(4) = 0
      
      `echo 1 >/proc/sys/net/ipv4/tcp_timestamps`
      
      Fixes: 19f6d3f3 ("net/tcp-fastopen: Add new API support")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Cc: Wei Wang <weiwan@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge tag 'batadv-net-for-davem-20170802' of git://git.open-mesh.org/linux-merge · 4d2bbb0e
      David S. Miller authored
      Simon Wunderlich says:
      
      ====================
      Here is a batman-adv bugfix:
      
       - fix TT sync flag inconsistency problems, which can lead to excess packets,
         by Linus Luessing
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 02 Aug, 2017 19 commits
  4. 01 Aug, 2017 12 commits
    • gue: fix remcsum when GRO on and CHECKSUM_PARTIAL boundary is outer UDP · 1bff8a0c
      K. Den authored
      In the case that GRO is turned on and the original received packet is
      CHECKSUM_PARTIAL, if the outer UDP header is exactly at the last
      csum-unnecessary point (which, for instance, can occur if the packet
      comes from another Linux guest on the same Linux host), we have to
      either do remcsum_adjust or set up CHECKSUM_PARTIAL again with its
      csum_start properly reset, taking RCO into account.
      
      However, since b7fe10e5 ("gro: Fix remcsum offload to deal with frags
      in GRO"), that barrier can be skipped in this case when GRO is turned
      on, so we pass over it and the inner L4 validation mistakenly treats it
      as a bad csum.
      
      This patch resets remcsum_offload at the same time as the GRO remcsum
      cleanup, so that this case works as it did before.
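
      The shape of the fix, sketched (placement within the GRO receive path is
      approximate):

          /* Where the GRO remcsum state is cleaned up, also clear the
           * skb-level marker so later checksum validation does not assume
           * remcsum was already applied.
           */
          skb_gro_remcsum_cleanup(skb, &grc);
          skb->remcsum_offload = 0;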
      
      Fixes: b7fe10e5 ("gro: Fix remcsum offload to deal with frags in GRO")
      Signed-off-by: Koichiro Den <den@klaipeden.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • vxlan: fix remcsum when GRO on and CHECKSUM_PARTIAL boundary is outer UDP · be73b304
      K. Den authored
      In the case that GRO is turned on and the original received packet is
      CHECKSUM_PARTIAL, if the outer UDP header is exactly at the last
      csum-unnecessary point (which, for instance, can occur if the packet
      comes from another Linux guest on the same Linux host), we have to
      either do remcsum_adjust or set up CHECKSUM_PARTIAL again with its
      csum_start properly reset, taking RCO into account.
      
      However, since b7fe10e5 ("gro: Fix remcsum offload to deal with frags
      in GRO"), that barrier can be skipped in this case when GRO is turned
      on, so we pass over it and the inner L4 validation mistakenly treats it
      as a bad csum.
      
      This patch resets remcsum_offload at the same time as the GRO remcsum
      cleanup, so that this case works as it did before.
      
      Fixes: b7fe10e5 ("gro: Fix remcsum offload to deal with frags in GRO")
      Signed-off-by: Koichiro Den <den@klaipeden.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Cipso: cipso_v4_optptr enter infinite loop · 40413955
      yujuan.qi authored
      In the for() loop, if (optlen > 0) && (optptr[1] == 0), the loop never
      terminates, since neither optptr nor optlen is advanced.
      
      Test: receive a packet whose IP header length is > 20 and whose first IP
      option byte is 0; this produces the issue.
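
      One plausible shape of the fix, as a sketch of a defensive check in the
      option-walking loop (not necessarily the exact patch):

          /* Inside the switch over IP option types, before advancing: */
          default:
                  taglen = optptr[1];
                  if (taglen == 0 || taglen > optlen)
                          return NULL;  /* malformed length byte: bail instead of spinning */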
      Signed-off-by: yujuan.qi <yujuan.qi@mediatek.com>
      Acked-by: Paul Moore <paul@paul-moore.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'ethernet-ti-cpts-fix-tx-timestamping-timeout' · fdaa419b
      David S. Miller authored
      Grygorii Strashko says:
      
      ====================
      net: ethernet: ti: cpts: fix tx timestamping timeout
      
      With a low Ethernet connection speed, the cpdma notification about packet
      processing can be received before the CPTS TX timestamp event: the
      timestamp event is raised when the packet actually leaves the CPSW, while
      the cpdma notification is sent when the packet is pushed into the CPSW
      FIFO. As a result, when the connection is slow and the CPU is fast
      enough, TX timestamping does not work properly.
      The issue was discovered using a timestamping tool on am57x boards with
      the Ethernet link speed forced to 100M and on am335x-evm with the link
      speed forced to 10M.
      
      Patch 3 - This series fixes it by introducing a TX SKB queue to store PTP
      SKBs for which the Ethernet Transmit Event hasn't been received yet, and
      then re-checking this queue against new Ethernet Transmit Events by
      scheduling the CPTS overflow work more often until the TX SKB queue is
      empty.
      
      Patches 1,2 - As the CPTS overflow work is a time-critical task, it is
      important to ensure that its scheduling is not delayed. Unfortunately,
      there can be a significant delay in scheduling the CPTS work under high
      system load and on -RT, which can cause CPTS misbehavior due to internal
      counter overflow, and there is no way to tune the CPTS overflow work's
      execution policy and priority manually. A kthread_worker can be used
      instead of workqueues, as it creates a separate named kthread for each
      worker, and its execution policy and priority can be configured using
      the chrt tool. Instead of modifying the CPTS driver itself, it was
      proposed to add a PTP auxiliary worker to the PHC subsystem [1], so
      other drivers can benefit from this feature as well.
      
      [1] https://www.spinics.net/lists/netdev/msg445392.html
      
      changes in v4:
      - fixed memleak in ptp_clock_register()
      - undocumented change in cpts_find_ts() moved to separate patch (minor fix)
      
      changes in v3:
      - patch 1: added proper error handling in ptp_clock_register.
        minor comments applied.
      
      changes in v2:
      - added PTP auxiliary worker to the PHC subsystem
      
      Links
      v3: https://www.spinics.net/lists/netdev/msg446058.html
      v2: https://www.spinics.net/lists/netdev/msg445859.html
      v1: https://www.spinics.net/lists/netdev/msg445387.html
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ethernet: ti: cpts: fix fifo read in cpts_find_ts · a93439cc
      Grygorii Strashko authored
      Now the call chain
       cpts_find_ts()
        |- cpts_fifo_read(cpts, CPTS_EV_PUSH)
      
      will stop reading the CPTS FIFO if a PUSH event is found. But this is not
      expected, and the CPTS FIFO should be completely drained here. This is
      most probably a copy-paste error, and it has no negative impact, as
      CPTS_EV_PUSH should not be present in the FIFO without a TS_PUSH request,
      and cpts_systim_read() and cpts_find_ts() are synchronized by a spinlock.
      
      Correct the above by calling cpts_fifo_read() with a -1 parameter, so it
      will read all CPTS events from the FIFO.
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ethernet: ti: cpts: fix tx timestamping timeout · 0d5f54fe
      Grygorii Strashko authored
      With a low speed Ethernet connection, the CPDMA notification about packet
      processing can be received before the CPTS TX timestamp event: the
      timestamp event is raised when the packet actually leaves the CPSW, while
      the CPDMA notification is sent when the packet is pushed into the CPSW
      FIFO. As a result, when the connection is slow and the CPU is fast
      enough, TX timestamping does not work properly.
      
      Fix it by introducing a TX SKB queue to store PTP SKBs for which the
      Ethernet Transmit Event hasn't been received yet, and then re-checking
      this queue against new Ethernet Transmit Events by scheduling the CPTS
      overflow work more often (every jiffy) until the TX SKB queue is empty.
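
      A sketch of the mechanism (struct members and the exact hook point are
      assumptions for illustration):

          /* Queue PTP skbs until their CPTS TX event shows up, and poke the
           * PTP auxiliary worker so the queue is re-checked soon.
           */
          static void cpts_tx_timestamp(struct cpts *cpts, struct sk_buff *skb)
          {
                  skb_queue_tail(&cpts->txq, skb);
                  ptp_schedule_worker(cpts->clock, 0);
          }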
      
      A side effect of this change is:
       - User space tools need to take into account a possible delay in TX
      timestamp processing (for example, ptp4l works with tx_timestamp_timeout=400
      under net traffic and tx_timestamp_timeout=25 when idle).
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: ethernet: ti: cpts: convert to use ptp auxiliary worker · 999f1292
      Grygorii Strashko authored
      There can be a significant delay in scheduling the CPTS work under high
      system load and on -RT, which can cause CPTS misbehavior due to internal
      counter overflow. Using its own kthread_worker avoids such issues and
      makes it possible to tune the priority of the CPTS kthread_worker thread
      on -RT (thread name "cpts").
      
      Hence, the CPTS driver is converted to use the PTP auxiliary worker, as
      the PHC subsystem now implements this functionality in a generic way.
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ptp: introduce ptp auxiliary worker · d9535cb7
      Grygorii Strashko authored
      Many PTP drivers need to perform some asynchronous or periodic work,
      like periodically handling PHC counter overflow or handling delayed
      timestamps for RX/TX network packets. In most cases, such work is
      implemented using workqueues. Unfortunately, kernel workqueues might
      introduce a significant delay in work scheduling under high system load
      and on -RT, which could cause misbehavior of PTP drivers due to internal
      counter overflow, for example, and there is no way to tune their
      execution policy and priority manually.
      
      Hence, a kthread_worker can be used instead of workqueues, as it creates
      a separate named kthread for each worker, and its execution policy and
      priority can be configured using the chrt tool.
      
      This problem was reported for two drivers, TI CPSW CPTS and dp83640, so
      instead of modifying each of these drivers it was proposed to add a PTP
      auxiliary worker to the PHC subsystem.
      
      The patch adds a PTP auxiliary worker to the PHC subsystem using
      kthread_worker and kthread_delayed_work, and introduces two new PHC
      subsystem APIs:
      
      - a long (*do_aux_work)(struct ptp_clock_info *ptp) callback in the
      ptp_clock_info structure, which a driver should assign if it needs to
      perform asynchronous or periodic work. The driver should return the
      delay until the next auxiliary work should be scheduled (>= 0), or a
      negative value if further scheduling is not required.
      
      - int ptp_schedule_worker(struct ptp_clock *ptp, unsigned long delay),
      which schedules the PTP auxiliary work.
      
      The name of the kthread_worker thread corresponds to the PTP PHC device
      name, "ptp%d".
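
      A hedged usage sketch of the new API from a driver's point of view (the
      driver structure and overflow helper are hypothetical):

          static long my_phc_do_aux_work(struct ptp_clock_info *info)
          {
                  struct my_phc *phc = container_of(info, struct my_phc, caps);

                  my_phc_check_overflow(phc);     /* hypothetical helper */
                  return msecs_to_jiffies(500);   /* run again in ~500 ms */
          }

          static const struct ptp_clock_info my_phc_caps = {
                  .owner       = THIS_MODULE,
                  .name        = "my_phc",
                  .do_aux_work = my_phc_do_aux_work,
                  /* ... other callbacks ... */
          };

          /* Elsewhere, e.g. when a delayed TX timestamp is pending: */
          ptp_schedule_worker(phc->clock, 0);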
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · bc78d646
      Linus Torvalds authored
      Pull networking fixes from David Miller:
      
       1) Handle notifier registry failures properly in tun/tap driver, from
          Tonghao Zhang.
      
       2) Fix bpf verifier handling of subtraction bounds and add a testcase
          for this, from Edward Cree.
      
       3) Increase reset timeout in ftgmac100 driver, from Ben Herrenschmidt.
      
       4) Fix use after free in prb_retire_rx_blk_timer_expired() in AF_PACKET,
          from Cong Wang.
      
       5) Fix SELinux regression due to recent UDP optimizations, from Paolo
          Abeni.
      
       6) We accidentally increment IPSTATS_MIB_FRAGFAILS in the ipv6 code
          paths, fix from Stefano Brivio.
      
       7) Fix some mem leaks in dccp, from Xin Long.
      
       8) Adjust MDIO_BUS kconfig deps to avoid build errors, from Arnd
          Bergmann.
      
       9) Mac address length check and buffer size fixes from Cong Wang.
      
      10) Don't leak sockets in ipv6 udp early demux, from Paolo Abeni.
      
      11) Fix return value when copy_from_user() fails in
          bpf_prog_get_info_by_fd(), from Daniel Borkmann.
      
      12) Handle PHY_HALTED properly in phy library state machine, from
          Florian Fainelli.
      
      13) Fix OOPS in fib_sync_down_dev(), from Ido Schimmel.
      
      14) Fix truesize calculation in virtio_net which led to performance
          regressions, from Michael S Tsirkin.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (76 commits)
        samples/bpf: fix bpf tunnel cleanup
        udp6: fix jumbogram reception
        ppp: Fix a scheduling-while-atomic bug in del_chan
        Revert "net: bcmgenet: Remove init parameter from bcmgenet_mii_config"
        virtio_net: fix truesize for mergeable buffers
        mv643xx_eth: fix of_irq_to_resource() error check
        MAINTAINERS: Add more files to the PHY LIBRARY section
        ipv4: fib: Fix NULL pointer deref during fib_sync_down_dev()
        net: phy: Correctly process PHY_HALTED in phy_stop_machine()
        sunhme: fix up GREG_STAT and GREG_IMASK register offsets
        bpf: fix bpf_prog_get_info_by_fd to dump correct xlated_prog_len
        tcp: avoid bogus gcc-7 array-bounds warning
        net: tc35815: fix spelling mistake: "Intterrupt" -> "Interrupt"
        bpf: don't indicate success when copy_from_user fails
        udp6: fix socket leak on early demux
        net: thunderx: Fix BGX transmit stall due to underflow
        Revert "vhost: cache used event for better performance"
        team: use a larger struct for mac address
        net: check dev->addr_len for dev_set_mac_address()
        phy: bcm-ns-usb3: fix MDIO_BUS dependency
        ...
    • samples/bpf: fix bpf tunnel cleanup · cc75f851
      William Tu authored
      test_tunnel_bpf.sh fails to remove the vxlan11 tunnel device, causing the
      next geneve tunnelling test case to fail.  In addition, the geneve reserved bit
      in tcbpf2_kern.c should be zero, according to the RFC.
      Signed-off-by: William Tu <u9012063@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • udp6: fix jumbogram reception · cb891fa6
      Paolo Abeni authored
      Since commit 67a51780 ("ipv6: udp: leverage scratch area
      helpers"), udp6_recvmsg() reads the skb len from the scratch area,
      to avoid a cache miss.
      But the UDP6 rx path supports RFC 2675 UDPv6 jumbograms, and their
      length exceeds the 16 bits available in the scratch area. As a side
      effect, the length returned by recvmsg() is:
      <ingress datagram len> % (1<<16)
      
      This commit addresses the issue by allocating one more bit in the
      IP6CB flags field and setting it for incoming jumbograms.
      That field is still in the first cacheline, so at recvmsg()
      time we can check it and fall back to accessing skb->len if
      required, without measurable overhead.
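
      A sketch of the resulting check at recvmsg() time (the flag and accessor
      names are assumptions for illustration):

          /* On input, jumbograms get marked in the IP6CB, e.g.:
           *      IP6CB(skb)->flags |= IP6SKB_JUMBOGRAM;
           * At recvmsg() time, don't trust the 16-bit scratch length for them:
           */
          static int udp6_skb_len(struct sk_buff *skb)
          {
                  if (unlikely(IP6CB(skb)->flags & IP6SKB_JUMBOGRAM))
                          return skb->len;
                  return udp_skb_scratch_len(skb);  /* hypothetical accessor */
          }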
      
      Fixes: 67a51780 ("ipv6: udp: leverage scratch area helpers")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ppp: Fix a scheduling-while-atomic bug in del_chan · ddab8282
      Gao Feng authored
      PPTP sets pptp_sock_destruct as the sock's sk_destruct, which triggers
      this bug when __sk_free is invoked in atomic context, because of
      the call path pptp_sock_destruct->del_chan->synchronize_rcu.
      
      Now move the synchronize_rcu from del_chan to pptp_release. This is the
      only case that frees the sock and needs the synchronize_rcu.
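
      Roughly, with approximate pptp internals:

          static void del_chan(struct pppox_sock *sock)
          {
                  spin_lock(&chan_lock);
                  clear_bit(sock->proto.pptp.src_addr.call_id, callid_bitmap);
                  RCU_INIT_POINTER(callid_sock[sock->proto.pptp.src_addr.call_id], NULL);
                  spin_unlock(&chan_lock);
                  /* no synchronize_rcu() here anymore: callers may be atomic */
          }

          static int pptp_release(struct socket *sock)
          {
                  /* ... */
                  del_chan(po);
                  synchronize_rcu();      /* process context: safe to sleep */
                  /* ... */
          }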
      
      The following is the panic I met with kernel 3.3.8, but this issue should
      exist in the current kernel too, according to the code.
      
      BUG: scheduling while atomic
      __schedule_bug+0x5e/0x64
      __schedule+0x55/0x580
      ? ppp_unregister_channel+0x1cd5/0x1de0 [ppp_generic]
      ? dev_hard_start_xmit+0x423/0x530
      ? sch_direct_xmit+0x73/0x170
      __cond_resched+0x16/0x30
      _cond_resched+0x22/0x30
      wait_for_common+0x18/0x110
      ? call_rcu_bh+0x10/0x10
      wait_for_completion+0x12/0x20
      wait_rcu_gp+0x34/0x40
      ? wait_rcu_gp+0x40/0x40
      synchronize_sched+0x1e/0x20
      0xf8417298
      0xf8417484
      ? sock_queue_rcv_skb+0x109/0x130
      __sk_free+0x16/0x110
      ? udp_queue_rcv_skb+0x1f2/0x290
      sk_free+0x16/0x20
      __udp4_lib_rcv+0x3b8/0x650
      Signed-off-by: Gao Feng <gfree.wind@vip.163.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>