1. 07 Mar, 2024 17 commits
  2. 06 Mar, 2024 23 commits
    • inet: Add getsockopt support for IP_ROUTER_ALERT and IPV6_ROUTER_ALERT · eeb78df4
      Juntong Deng authored
      Currently getsockopt() does not support IP_ROUTER_ALERT and
      IPV6_ROUTER_ALERT, so the values of these two socket options cannot
      be read back through getsockopt().
      
      This patch adds getsockopt support for IP_ROUTER_ALERT and
      IPV6_ROUTER_ALERT.
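
      As a rough illustration (not part of the patch), user space can
      exercise the new path like this; IP_ROUTER_ALERT is meant for raw
      sockets used by routing daemons, so a raw IGMP socket is used here:

        #include <stdio.h>
        #include <sys/socket.h>
        #include <netinet/in.h>

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_RAW, IPPROTO_IGMP); /* needs CAP_NET_RAW */
            int on = 1, val = -1;
            socklen_t len = sizeof(val);

            if (fd < 0) {
                perror("socket");
                return 1;
            }
            if (setsockopt(fd, IPPROTO_IP, IP_ROUTER_ALERT, &on, sizeof(on)) < 0)
                perror("setsockopt(IP_ROUTER_ALERT)");
            /* Without this patch the call below typically fails with ENOPROTOOPT. */
            if (getsockopt(fd, IPPROTO_IP, IP_ROUTER_ALERT, &val, &len) < 0)
                perror("getsockopt(IP_ROUTER_ALERT)");
            else
                printf("IP_ROUTER_ALERT = %d\n", val);
            return 0;
        }
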
      Signed-off-by: Juntong Deng <juntong.deng@outlook.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'ynl-small-recv' · edf7468d
      David S. Miller authored
      Jakub Kicinski says:
      
      ====================
      tools: ynl: add --dbg-small-recv for easier kernel testing
      
      When testing netlink dumps I usually hack up some user space tool
      (iproute2, ethtool or ynl) to constrain its receive buffer size.
      Netlink will try to fill the messages up, and since these apps use
      large buffers by default, the dumps are rarely fragmented.
      
      I was hoping to figure out a way to create a selftest for dump
      testing, but so far I have no idea how to do that in a useful
      and generic way.
      
      Until someone does that, make manual dump testing easier with YNL.
      Create a special option for limiting the buffer size, so I don't
      have to make the same edits each time, and maybe others will benefit,
      too :)
      
      Example:
      
        $ ./cli.py [...] --dbg-small-recv >/dev/null
        Recv: read 3712 bytes, 29 messages
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
          [...]
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
        Recv: read 3968 bytes, 31 messages
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
          [...]
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
        Recv: read 532 bytes, 5 messages
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
          [...]
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
           nl_len = 20 (4) nl_flags = 0x2 nl_type = 3
      
      Now let's make the DONE not fit in the last message:
      
        $ ./cli.py [...] --dbg-small-recv 4499 >/dev/null
        Recv: read 3712 bytes, 29 messages
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
          [...]
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
        Recv: read 4480 bytes, 35 messages
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
          [...]
           nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
        Recv: read 20 bytes, 1 messages
           nl_len = 20 (4) nl_flags = 0x2 nl_type = 3
      
      A real test would also have to check the messages are complete
      and not duplicated. That part has to be done manually right now.
      
      Note that the first message is always conservatively sized by the kernel.
      Still, I think this is good enough to be useful.
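
      For comparison, the same fragmentation can be observed from plain C
      with a deliberately small recv() buffer. The sketch below is
      illustrative only (not part of this series); it issues an
      RTM_GETLINK dump and counts messages per recv() until NLMSG_DONE:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <linux/netlink.h>
        #include <linux/rtnetlink.h>

        int main(void)
        {
            struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
            struct {
                struct nlmsghdr nlh;
                struct ifinfomsg ifm;
            } req = {
                .nlh = {
                    .nlmsg_len   = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
                    .nlmsg_type  = RTM_GETLINK,
                    .nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP,
                },
                .ifm = { .ifi_family = AF_UNSPEC },
            };
            char buf[4000];    /* deliberately small recv() buffer */
            int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
            int done = 0;

            if (fd < 0 || sendto(fd, &req, req.nlh.nlmsg_len, 0,
                                 (struct sockaddr *)&kernel, sizeof(kernel)) < 0)
                return 1;

            while (!done) {
                int rd = recv(fd, buf, sizeof(buf), 0);
                int len = rd;
                struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
                int msgs = 0;

                if (rd <= 0)
                    break;
                /* NLMSG_NEXT() consumes 'len' as it walks the messages */
                for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
                    msgs++;
                    if (nlh->nlmsg_type == NLMSG_DONE)
                        done = 1;
                }
                printf("recv: read %d bytes, %d messages\n", rd, msgs);
            }
            close(fd);
            return 0;
        }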
      
      v2:
       - patch 2:
         - move the recv_size setting up
         - change the default to 0 so that cli.py doesn't have to worry
           what the "unset" value is
      v1: https://lore.kernel.org/all/20240301230542.116823-1-kuba@kernel.org/
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tools: ynl: add --dbg-small-recv for easier kernel testing · c0111878
      Jakub Kicinski authored
      Most "production" netlink clients use large buffers to make dumps
      efficient, which means that handling of dump continuation in the
      kernel is not very well tested.

      Add an option for debugging / testing the handling of dumps.
      It enables printing of extra netlink-level debug output and lowers
      the recv() buffer size in one go. When used without an argument
      (--dbg-small-recv) it picks a very small default (4000); an explicit
      size can also be set (--dbg-small-recv 5000).
      
      Example:
      
      $ ./cli.py [...] --dbg-small-recv
      Recv: read 3712 bytes, 29 messages
         nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
       [...]
         nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
      Recv: read 3968 bytes, 31 messages
         nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
       [...]
         nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
      Recv: read 532 bytes, 5 messages
         nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
       [...]
         nl_len = 128 (112) nl_flags = 0x0 nl_type = 19
         nl_len = 20 (4) nl_flags = 0x2 nl_type = 3
      
      (the [...] are edits to shorten the commit message).
      
      Note that the first message of the dump is sized conservatively
      by the kernel.
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tools: ynl: support debug printing messages · a6a41521
      Jakub Kicinski authored
      For manual debug, allow printing the netlink level messages
      to stderr.
      Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tools: ynl: allow setting recv() size · 7c93a887
      Jakub Kicinski authored
      Make the size of the buffer we use for recv() configurable.
      The details of buffer sizing in netlink are somewhat arcane, and we
      could spend a lot of time polishing this API.
      Let's just leave some hopefully helpful comments for now.
      This is a for-developers-only feature, anyway.
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tools: ynl: move the new line in NlMsg __repr__ · 7df7231d
      Jakub Kicinski authored
      We add the new line even if the message has no error or extack,
      which leads to print(nl_msg) ending with two new lines.
      Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'tools-ynl-make-clean' · b206acf1
      David S. Miller authored
      Jakub Kicinski says:
      
      ====================
      tools: ynl: clean up make clean
      
      The first change renames the clean target which removes build
      results to a more common name. The second one adds missing .PHONY
      targets. The third one ensures that clean deletes __pycache__.
      
      v2: add patch 2
      v1: https://lore.kernel.org/all/20240301235609.147572-1-kuba@kernel.org/
      
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tools: ynl: remove __pycache__ during clean · 72fa191b
      Jakub Kicinski authored
      The build process uses Python to generate the user space code.
      Remove __pycache__ on make clean.
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tools: ynl: add distclean to .PHONY in all makefiles · 1d8617b2
      Jakub Kicinski authored
      Donald points out that most YNL makefiles are missing distclean
      in .PHONY, even though generated/Makefile does list it.
      Suggested-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tools: ynl: rename make hardclean -> distclean · 4e887471
      Jakub Kicinski authored
      The make target to remove all generated files used to be called
      "hardclean" because it deleted files which were tracked by git.
      We no longer track generated user space files, so use the more
      common "distclean" name.
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Donald Hunter <donald.hunter@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue · db72b6fc
      David S. Miller authored
      Tony Nguyen says:
      
      ====================
      Intel Wired LAN Driver Updates 2024-03-04 (ice)
      
      This series contains updates to ice driver only.
      
      Jake changes the driver to use relative VSI index for VF VSIs as the VF
      driver has no direct use of the VSI number on ice hardware. He also
      reworks some Tx/Rx functions to clarify their uses, cleans up some style
      issues, and utilizes kernel helper functions.
      
      Maciej removes a redundant call to disable Tx queues on ifdown and
      removes some unnecessary devm usages.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'ravb-cleanups' · 39a096d6
      David S. Miller authored
      Niklas Söderlund says:
      
      ====================
      ravb: Align Rx descriptor setup and maintenance
      
      When RZ/G2L support was added the Rx code path was split in two, one to
      support R-Car and one to support RZ/G2L. One reason for this is that
      R-Car uses the extended Rx descriptor format, while RZ/G2L uses the
      normal descriptor format.
      
      In many aspects this is not needed, as the extended descriptor format
      is just a normal descriptor with extra metadata (timestamps) appended,
      and the R-Car SoCs can also use normal descriptors if hardware
      timestamps are not desired. This split has led to RZ/G2L gaining
      support for split descriptors in the Rx path while R-Car still lacks
      this.
      
      This series is the first step in trying to merge the R-Car and RZ/G2L Rx
      paths so features and bugs corrected in one will benefit the other.
      
      The first patch in the series clarifies that the driver supports
      either normal or extended descriptors, not both at the same time, by
      grouping them in a union. This is the foundation that later patches
      will build on when aligning the two Rx paths.
      
      Patches 2-5 deal with correcting small issues in the Rx frame and
      descriptor sizes that either were incorrect at the time they were
      added in 2017 (my bad) or are concepts built on top of this initial
      incorrect design.

      Finally, patch 6 merges the R-Car and RZ/G2L code paths for Rx
      descriptor setup and maintenance.
      
      When this work has landed I plan to follow up with more work aligning
      the rest of the Rx code paths and hopefully bring split descriptor
      support to the R-Car SoCs.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ravb: Unify Rx ring maintenance code paths · 644d037b
      Niklas Söderlund authored
      The R-Car and RZ/G2L Rx code paths were split in two separate
      implementations when support for RZ/G2L was added due to the fact that
      R-Car uses the extended descriptor format while RZ/G2L uses normal
      descriptors. This has led to a duplication of Rx logic with the only
      difference being the different Rx descriptors types used. The
      implementation however neglects to take into account that extended
      descriptors are normal descriptors with additional metadata at the end
      to carry hardware timestamp information.
      
      The hardware timestamp information is only consumed in the R-Car Rx
      loop and all the maintenance code around the Rx ring can be shared
      between the two implementations if the difference in descriptor length
      is carefully considered.
      
      This change merges the two implementations of the Rx ring maintenance
      by adding a method to access both types of descriptors as normal
      descriptors. As the normal descriptor covers all the fields needed for
      Rx ring maintenance, the only differences between using normal and
      extended descriptors are the size of the memory region to
      allocate/free and the step size between descriptors in the ring.
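
      Conceptually the shared maintenance path boils down to stepping
      through the ring with the real per-descriptor size while only ever
      looking at the normal-descriptor fields. The sketch below uses
      simplified stand-in structs, not the driver's exact definitions:

        #include <stddef.h>
        #include <stdint.h>

        struct rx_desc {            /* "normal" descriptor (stand-in) */
            uint16_t ds_cc;         /* data size / status */
            uint8_t  msc;
            uint8_t  die_dt;
            uint32_t dptr;          /* buffer address */
        };

        struct ex_rx_desc {         /* "extended": normal fields + timestamp */
            struct rx_desc base;
            uint32_t ts_lo;
            uint32_t ts_hi;
        };

        /* Ring maintenance can treat both layouts as normal descriptors as
         * long as it steps through the ring using the real entry size. */
        static inline struct rx_desc *rx_ring_entry(void *ring,
                                                    size_t desc_size,
                                                    unsigned int i)
        {
            return (struct rx_desc *)((uint8_t *)ring + i * desc_size);
        }
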
      Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
      Reviewed-by: Paul Barker <paul.barker.ct@bp.renesas.com>
      Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ravb: Move maximum Rx descriptor data usage to info struct · 555419b2
      Niklas Söderlund authored
      To make it possible to merge the R-Car and RZ/G2L code paths, move
      the maximum usable size of a single Rx descriptor data slice into the
      hardware information struct instead of using two different defines in
      the two different code paths.
      Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
      Reviewed-by: Paul Barker <paul.barker.ct@bp.renesas.com>
      Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ravb: Use the max frame size from hardware info for RZ/G2L · 49686338
      Niklas Söderlund authored
      Remove the define describing the RZ/G2L maximum frame size and only use
      the information in the hardware information struct. This will make it
      easier to merge the R-Car and RZ/G2L code paths.
      
      There is no functional change, as both the define and the maximum
      frame length in the hardware information struct are set to 8K.
      Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
      Reviewed-by: Paul Barker <paul.barker.ct@bp.renesas.com>
      Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ravb: Create helper to allocate skb and align it · cfbad647
      Niklas Söderlund authored
      The EtherAVB device requires the SKB data to be aligned to 128 bytes.
      The alignment is done by allocating an skb 128 bytes larger than the
      maximum frame size supported by the device and adjusting the headroom to
      fit the requirement.
      
      This code has been refactored a few times and small issues have been
      added along the way. The issues are not harmful but prevent merging
      parts of the Rx code which have been split in two implementations with
      the addition of RZ/G2L support, a device that supports larger frame
      sizes.
      
      This change removes the need for the duplicated and somewhat
      inaccurate hardware alignment constraints stored in the hardware
      information struct by creating a helper that handles the allocation
      of an skb and the alignment of its data.
      
      For the R-Car class of devices the maximum frame size is 4K and each
      descriptor is limited to 2K of data. The current implementation does
      not support split descriptors, which limits the frame size to 2K. The
      current hardware information however records the descriptor size as
      just under 2K, due to a misunderstanding of the device when support
      for larger MTUs was added.

      For the RZ/G2L device the maximum frame size is 8K and each descriptor
      is limited to 4K of data. The current hardware information records
      this correctly, but it gets the alignment constraints wrong: it only
      aligns the buffer to 128 bytes and does not extend it by 128 bytes to
      allow the full frame to be stored. This works because the RZ/G2L
      device supports split descriptors, allocates each skb at 8K and aligns
      each 4K descriptor within this space.
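
      A sketch of what such a helper can look like, using the standard
      netdev_alloc_skb()/skb_reserve() APIs; the helper name and the use of
      a 128-byte RAVB_ALIGN follow the description above rather than the
      patch's exact code:

        #include <linux/netdevice.h>
        #include <linux/skbuff.h>

        #define RAVB_ALIGN 128    /* skb->data must be 128-byte aligned */

        static struct sk_buff *ravb_alloc_rx_skb(struct net_device *ndev,
                                                 unsigned int max_frame_len)
        {
            struct sk_buff *skb;
            unsigned long misalign;

            /* Over-allocate so the data pointer can be pushed forward to
             * the next 128-byte boundary without shrinking the usable area
             * below the maximum frame size. */
            skb = netdev_alloc_skb(ndev, max_frame_len + RAVB_ALIGN - 1);
            if (!skb)
                return NULL;

            misalign = (unsigned long)skb->data & (RAVB_ALIGN - 1);
            if (misalign)
                skb_reserve(skb, RAVB_ALIGN - misalign);

            return skb;
        }
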
      Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
      Reviewed-by: Paul Barker <paul.barker.ct@bp.renesas.com>
      Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ravb: Make it clear the information relates to maximum frame size · e82700b8
      Niklas Söderlund authored
      The struct member rx_max_buf_size was added before split descriptor
      support was added. It is unclear whether the value describes the full
      skb frame buffer or a single data descriptor buffer, several of which
      can be combined into one skb.

      Rename it to make it clear it refers to the maximum frame size and can
      cover multiple descriptors.
      Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
      Reviewed-by: Paul Barker <paul.barker.ct@bp.renesas.com>
      Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ravb: Group descriptor types used in Rx ring · 4123c3fb
      Niklas Söderlund authored
      The Rx ring can either be made up of normal or extended descriptors, not
      a mix of the two at the same time. Make this explicit by grouping the
      two variables in a rx_ring union.
      
      Extending the storage for normal descriptors from a single queue to
      NUM_RX_QUEUE queues has no practical effect, but it aids readability,
      as the code that uses it already piggybacks on other members of
      struct ravb_private that are arrays of max length NUM_RX_QUEUE, e.g.
      rx_desc_dma. This will also make further refactoring easier.

      While at it, rename the normal descriptor Rx ring to make it clear
      it's not strictly related to the GbEthernet E-MAC IP found in RZ/G2L;
      normal descriptors could be used on R-Car SoCs too.
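
      The grouping has roughly the following shape; the type and field
      names below are approximations rather than a verbatim copy of the
      patch, and NUM_RX_QUEUE is assumed to cover the driver's two Rx
      queues:

        #include <linux/types.h>

        #define NUM_RX_QUEUE 2              /* assumed: best effort + network control */

        struct ravb_rx_desc;                /* normal descriptor */
        struct ravb_ex_rx_desc;             /* extended descriptor (adds timestamp) */

        struct ravb_rx_rings_example {
            union {                         /* one ring type at a time */
                struct ravb_rx_desc *desc;
                struct ravb_ex_rx_desc *ex_desc;
            } rx_ring[NUM_RX_QUEUE];        /* one entry per Rx queue */
            dma_addr_t rx_desc_dma[NUM_RX_QUEUE];
        };
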
      Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
      Reviewed-by: Paul Barker <paul.barker.ct@bp.renesas.com>
      Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch '200GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue · dbb0b6ca
      David S. Miller authored
      Tony Nguyen says:
      
      ====================
      idpf: refactor virtchnl messages
      
      Alan Brady says:
      
      This series has two primary goals: to enable support for multiple
      simultaneous messages and to make the channel more robust. The way it
      works right now, the driver can only send and receive a single message
      at a time, and if something goes really wrong, it can lead to data
      corruption and strange bugs.
      
      To start the series, we introduce an idpf_virtchnl.h file. This reduces
      the burden on idpf.h which is overloaded with struct and function
      declarations.
      
      The conversion works by conceptualizing a send and receive as a
      "virtchnl transaction" (idpf_vc_xn) and introducing a "transaction
      manager" (idpf_vc_xn_manager). The vcxn_mngr initializes a ring of
      transactions, from which the driver pops free transactions (tracked in
      a bitmap) to keep track of in-flight messages. Instead of needing to
      handle a complicated send/recv for every message, the driver now just
      needs to fill out an xn_params struct and hand it over to
      idpf_vc_xn_exec, which will take care of all the messy bits. Once a
      message is sent and receives a reply, we leverage the completion API
      to signal that the received buffer is ready to be used (assuming
      success, or an error code otherwise).
      
      At a low level, this is enabled by implementing the "sw cookie" field
      of the virtchnl message descriptor. We have 16 bits into which we can
      put whatever we want, and the recipient is required to apply the same
      cookie to the reply for that message. We use the first 8 bits as an
      index into the array of transactions to enable fast lookups, and we
      use the second 8 bits as a salt to make sure each cookie is unique for
      that message. As replies are received in arbitrary order, it's
      possible to reuse a transaction index, and the salt guards against
      index conflicts to make certain the lookup is correct. As a primitive
      example, say index 1 is used with salt 1. The message times out
      without receiving a reply, so index 1 is renewed to be ready for a new
      transaction, we report the timeout, and send the message again. Since
      index 1 is now free, it is used again, but this time with salt 2. This
      time we do get a reply, however it could be that the reply is
      _actually_ for the previous send, index 1 with salt 1. Without the
      salt we would have no way of knowing for sure whether it's the correct
      reply, but with it we will know for certain.
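
      A minimal sketch of the index+salt cookie scheme described above; the
      8/8 bit split follows the description, while which half holds the
      index and the helper names are illustrative assumptions:

        #include <stdbool.h>
        #include <stdint.h>

        /* Low byte: index into the transaction array; high byte: salt
         * bumped every time the slot is reused. */
        static inline uint16_t vc_xn_cookie(uint8_t idx, uint8_t salt)
        {
            return (uint16_t)((uint16_t)salt << 8 | idx);
        }

        static inline uint8_t vc_xn_cookie_idx(uint16_t cookie)
        {
            return cookie & 0xff;
        }

        static inline uint8_t vc_xn_cookie_salt(uint16_t cookie)
        {
            return cookie >> 8;
        }

        /* A reply is only matched if both parts agree, so a late reply
         * carrying an old salt for a reused index is rejected instead of
         * being confused with the new transaction. */
        static inline bool vc_xn_reply_matches(uint16_t reply_cookie,
                                               uint8_t idx, uint8_t cur_salt)
        {
            return vc_xn_cookie_idx(reply_cookie) == idx &&
                   vc_xn_cookie_salt(reply_cookie) == cur_salt;
        }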
      
      Through this conversion we also get several other benefits. We can now
      more appropriately handle asynchronously sent messages by providing
      space for a callback to be defined. This notably allows us to handle MAC
      filter failures better; previously we could potentially have stale,
      failed filters in our list, which shouldn't really have a major impact
      but is obviously not correct. I also managed to remove significantly
      more lines than I added, which is a win in my book.
      
      Additionally, this converts some variables to use auto-variables where
      appropriate. This makes the alloc paths much cleaner and less prone to
      memory leaks. We also fix a few virtchnl related bugs while we're here.
      
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Merge branch 'netlink-emsgsize' · 784ee615
      David S. Miller authored
      Jakub Kicinski says:
      
      ====================
      netlink: handle EMSGSIZE errors in the core
      
      Ido discovered some time back that we usually force NLMSG_DONE
      to be delivered in a separate recv() syscall, even if it would
      fit into the same skb as data messages. He made nexthop try
      to fit DONE with data in commit 8743aeff ("nexthop: Fix
      infinite nexthop bucket dump when using maximum nexthop ID"),
      and nobody has complained so far.
      
      We have since also tried to follow the same pattern in new genetlink
      families, but explaining it to people, or even remembering the correct
      handling ourselves, is tedious.
      
      Let the netlink socket layer consume -EMSGSIZE errors.
      Practically speaking most families use this error code
      as "dump needs more space", anyway.
      
      v2:
       - init err to 0 in last patch
      v1: https://lore.kernel.org/all/20240301012845.2951053-1-kuba@kernel.org/
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • genetlink: fit NLMSG_DONE into same read() as families · 87d38197
      Jakub Kicinski authored
      Make sure ctrl_fill_info() returns sensible error codes and propagate
      them out to the netlink core. Let the netlink core decide when to
      return skb->len and when to treat the exit as an error. The netlink
      core does a better job at it; if we always return skb->len the core
      doesn't know when we're done dumping and NLMSG_DONE ends up in a
      separate read().
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Ido Schimmel <idosch@nvidia.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netdev: let netlink core handle -EMSGSIZE errors · 0b11b1c5
      Jakub Kicinski authored
      The previous change added -EMSGSIZE handling to af_netlink, so we
      don't have to hide these errors any longer.
      
      Theoretically the error handling changes from:
       if (err == -EMSGSIZE)
      to
       if (err == -EMSGSIZE && skb->len)
      
      everywhere, but in practice it doesn't matter.
      All messages fit into NLMSG_GOODSIZE, so overflow of an empty
      skb cannot happen.
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Ido Schimmel <idosch@nvidia.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • netlink: handle EMSGSIZE errors in the core · b5a89915
      Jakub Kicinski authored
      Eric points out that our current suggested way of handling EMSGSIZE
      errors ((err == -EMSGSIZE) ? skb->len : err) will break if we didn't
      fit even a single object into the buffer provided by the user. This
      should not happen for well-behaved applications, but we can fix it,
      and free netlink families from dealing with it completely, by moving
      the error handling into the core.
      
      Let's assume from now on that all EMSGSIZE errors in dumps are
      because we ran out of skb space. Families can now propagate the error
      that nla_put_*() etc. generated and not worry about any return value
      magic. If some family really wants to send EMSGSIZE to user space
      then, assuming it generates the same error on the next dump iteration,
      skb->len will be 0 and user space will still see the EMSGSIZE.
      
      This should simplify families and prevent mistakes in return
      values which lead to DONE being forced into a separate recv()
      call as discovered by Ido some time ago.
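
      A before/after sketch of a hypothetical dump callback under the old
      and new conventions; the foo_*() helpers are made up and only the
      return-value handling is the point:

        #include <linux/types.h>
        #include <linux/errno.h>
        #include <linux/netlink.h>
        #include <linux/skbuff.h>

        /* Hypothetical per-family helpers: foo_fill_one() returns the raw
         * nla_put_*() error (-EMSGSIZE when the skb is full). */
        bool foo_more_objects(struct netlink_callback *cb);
        int foo_fill_one(struct sk_buff *skb, struct netlink_callback *cb);

        /* Before: the family translated -EMSGSIZE into skb->len itself so
         * the core would treat the skb as full and resume the dump later. */
        static int foo_dump_old(struct sk_buff *skb,
                                struct netlink_callback *cb)
        {
            int err = 0;

            while (foo_more_objects(cb)) {
                err = foo_fill_one(skb, cb);
                if (err)
                    break;
            }
            return err == -EMSGSIZE ? skb->len : err;
        }

        /* After: -EMSGSIZE is simply propagated; the netlink core treats it
         * as "out of skb space" and handles the dump continuation (and the
         * final NLMSG_DONE) by itself. */
        static int foo_dump_new(struct sk_buff *skb,
                                struct netlink_callback *cb)
        {
            int err = 0;

            while (foo_more_objects(cb)) {
                err = foo_fill_one(skb, cb);
                if (err)
                    break;
            }
            return err;
        }
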
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      Reviewed-by: Ido Schimmel <idosch@nvidia.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>