1. 21 Feb, 2024 18 commits
  2. 20 Feb, 2024 9 commits
    • Merge tag 'linux-can-next-for-6.9-20240220' of... · 49344462
      Paolo Abeni authored
      Merge tag 'linux-can-next-for-6.9-20240220' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next
      
      Marc Kleine-Budde says:
      
      ====================
      pull-request: can-next 2024-02-20
      
      This is a pull request of 9 patches for net-next/master.
      
      The first patch is by Francesco Dolcini and removes a redundant check
      for pm_clock_support from the m_can driver.
      
      Martin Hundebøll contributes 3 patches to the m_can/tcan4x5x driver to
      allow resume upon RX of a CAN frame.
      
      3 patches by Srinivas Goud add support for ECC statistics to the
      xilinx_can driver.
      
      The last 2 patches, by Oliver Hartkopp and me, target the CAN RAW
      protocol and fix an error in the getsockopt() for CAN-XL introduced
      in the previous pull request to net-next (linux-can-next-for-6.9-20240213).
      
      linux-can-next-for-6.9-20240220
      
      * tag 'linux-can-next-for-6.9-20240220' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next:
        can: raw: raw_getsockopt(): reduce scope of err
        can: raw: fix getsockopt() for new CAN_RAW_XL_VCID_OPTS
        can: xilinx_can: Add ethtool stats interface for ECC errors
        can: xilinx_can: Add ECC support
        dt-bindings: can: xilinx_can: Add 'xlnx,has-ecc' optional property
        can: tcan4x5x: support resuming from rx interrupt signal
        can: m_can: allow keeping the transceiver running in suspend
        dt-bindings: can: tcan4x5x: Document the wakeup-source flag
        can: m_can: remove redundant check for pm_clock_support
      ====================
      
      Link: https://lore.kernel.org/r/20240220085130.2936533-1-mkl@pengutronix.de
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    • net: skbuff: add overflow debug check to pull/push helpers · 219eee9c
      Florian Westphal authored
      syzbot managed to trigger the following splat:
      BUG: KASAN: use-after-free in __skb_flow_dissect+0x4a3b/0x5e50
      Read of size 1 at addr ffff888208a4000e by task a.out/2313
      [..]
        __skb_flow_dissect+0x4a3b/0x5e50
        __skb_get_hash+0xb4/0x400
        ip_tunnel_xmit+0x77e/0x26f0
        ipip_tunnel_xmit+0x298/0x410
        ..
      
      Analysis shows that the skb has a valid ->head, but a bogus ->data
      pointer.
      
      skb->data gets its bogus value via the neigh layer, which does:
      
      1556    __skb_pull(skb, skb_network_offset(skb));
      
      ... and the skb was already dodgy at this point:
      
      skb_network_offset(skb) returns a negative value due to an
      earlier overflow of skb->network_header (u16).  __skb_pull thus
      "adjusts" skb->data by a huge offset, pointing outside the
      skb->head area.
      
      Allow debug builds to splat when we try to pull/push more than
      INT_MAX bytes.
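
      A minimal sketch of the idea, using the pull helper as an example
      (the patch touches several push/pull helpers; exact placement and
      surrounding code may differ):
      
      	static inline void *__skb_pull(struct sk_buff *skb, unsigned int len)
      	{
      		/* A negative offset cast to unsigned int becomes a huge
      		 * length; splat on debug builds before skb->data is
      		 * adjusted out of the skb->head area.
      		 */
      		DEBUG_NET_WARN_ON_ONCE(len > INT_MAX);
      		skb->len -= len;
      		BUG_ON(skb->len < skb->data_len);
      		return skb->data += len;
      	}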
      
      After this, the syzkaller reproducer yields a more precise splat
      before the flow dissector attempts to read off skb->data memory:
      
      WARNING: CPU: 5 PID: 2313 at include/linux/skbuff.h:2653 neigh_connected_output+0x28e/0x400
        ip_finish_output2+0xb25/0xed0
        iptunnel_xmit+0x4ff/0x870
        ipgre_xmit+0x78e/0xbb0
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Reviewed-by: Simon Horman <horms@kernel.org>
      Link: https://lore.kernel.org/r/20240216113700.23013-1-fw@strlen.de
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    • net: reorganize "struct sock" fields · 5d4cc874
      Eric Dumazet authored
      The last major reorg happened in commit 9115e8cd ("net: reorganize
      struct sock for better data locality").
      
      Since then, many changes have been done.
      
      Before SO_PEEK_OFF support is added to TCP, we need
      to move sk_peek_off to a better location.
      
      It is time to make another pass and add six groups, without
      explicit alignment (see the sketch after the list below).
      
      - sock_write_rx (following sk_refcnt) read-write fields in rx path.
      - sock_read_rx read-mostly fields in rx path.
      - sock_read_rxtx read-mostly fields in both rx and tx paths.
      - sock_write_rxtx read-write fields in both rx and tx paths.
      - sock_write_tx read-write fields in tx paths.
      - sock_read_tx read-mostly fields in tx paths.
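
      In code, each group is delimited with the zero-size cacheline-group
      markers from <linux/cache.h>; a rough sketch (the real field list is
      of course much longer):
      
      	struct sock {
      		struct sock_common	__sk_common;
      		__cacheline_group_begin(sock_write_rx);
      		atomic_t		sk_drops;
      		__s32			sk_peek_off;
      		struct sk_buff_head	sk_error_queue;
      		struct sk_buff_head	sk_receive_queue;
      		/* ... */
      		__cacheline_group_end(sock_write_rx);
      		__cacheline_group_begin(sock_read_rx);
      		/* ... read-mostly rx fields ... */
      		__cacheline_group_end(sock_read_rx);
      		/* the four remaining groups follow the same pattern */
      	};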
      
      Results on TCP_RR benchmarks seem to show a gain (4 to 5%).
      
      It is possible UDP needs a change, because sk_peek_off
      shares a cache line with sk_receive_queue.
      If this is the case, we can exchange the roles of the
      sk->sk_receive_queue and up->reader_queue queues.
      
      After this change, we have the following layout:
      
      struct sock {
      	struct sock_common         __sk_common;          /*     0  0x88 */
      	/* --- cacheline 2 boundary (128 bytes) was 8 bytes ago --- */
      	__u8                       __cacheline_group_begin__sock_write_rx[0]; /*  0x88     0 */
      	atomic_t                   sk_drops;             /*  0x88   0x4 */
      	__s32                      sk_peek_off;          /*  0x8c   0x4 */
      	struct sk_buff_head        sk_error_queue;       /*  0x90  0x18 */
      	struct sk_buff_head        sk_receive_queue;     /*  0xa8  0x18 */
      	/* --- cacheline 3 boundary (192 bytes) --- */
      	struct {
      		atomic_t           rmem_alloc;           /*  0xc0   0x4 */
      		int                len;                  /*  0xc4   0x4 */
      		struct sk_buff *   head;                 /*  0xc8   0x8 */
      		struct sk_buff *   tail;                 /*  0xd0   0x8 */
      	} sk_backlog;                                    /*  0xc0  0x18 */
      	__u8                       __cacheline_group_end__sock_write_rx[0]; /*  0xd8     0 */
      	__u8                       __cacheline_group_begin__sock_read_rx[0]; /*  0xd8     0 */
      	rcu *                      sk_rx_dst;            /*  0xd8   0x8 */
      	int                        sk_rx_dst_ifindex;    /*  0xe0   0x4 */
      	u32                        sk_rx_dst_cookie;     /*  0xe4   0x4 */
      	unsigned int               sk_ll_usec;           /*  0xe8   0x4 */
      	unsigned int               sk_napi_id;           /*  0xec   0x4 */
      	u16                        sk_busy_poll_budget;  /*  0xf0   0x2 */
      	u8                         sk_prefer_busy_poll;  /*  0xf2   0x1 */
      	u8                         sk_userlocks;         /*  0xf3   0x1 */
      	int                        sk_rcvbuf;            /*  0xf4   0x4 */
      	rcu *                      sk_filter;            /*  0xf8   0x8 */
      	/* --- cacheline 4 boundary (256 bytes) --- */
      	union {
      		rcu *              sk_wq;                /* 0x100   0x8 */
      		struct socket_wq * sk_wq_raw;            /* 0x100   0x8 */
      	};                                               /* 0x100   0x8 */
      	void                       (*sk_data_ready)(struct sock *); /* 0x108   0x8 */
      	long                       sk_rcvtimeo;          /* 0x110   0x8 */
      	int                        sk_rcvlowat;          /* 0x118   0x4 */
      	__u8                       __cacheline_group_end__sock_read_rx[0]; /* 0x11c     0 */
      	__u8                       __cacheline_group_begin__sock_read_rxtx[0]; /* 0x11c     0 */
      	int                        sk_err;               /* 0x11c   0x4 */
      	struct socket *            sk_socket;            /* 0x120   0x8 */
      	struct mem_cgroup *        sk_memcg;             /* 0x128   0x8 */
      	rcu *                      sk_policy[2];         /* 0x130  0x10 */
      	/* --- cacheline 5 boundary (320 bytes) --- */
      	__u8                       __cacheline_group_end__sock_read_rxtx[0]; /* 0x140     0 */
      	__u8                       __cacheline_group_begin__sock_write_rxtx[0]; /* 0x140     0 */
      	socket_lock_t              sk_lock;              /* 0x140  0x20 */
      	u32                        sk_reserved_mem;      /* 0x160   0x4 */
      	int                        sk_forward_alloc;     /* 0x164   0x4 */
      	u32                        sk_tsflags;           /* 0x168   0x4 */
      	__u8                       __cacheline_group_end__sock_write_rxtx[0]; /* 0x16c     0 */
      	__u8                       __cacheline_group_begin__sock_write_tx[0]; /* 0x16c     0 */
      	int                        sk_write_pending;     /* 0x16c   0x4 */
      	atomic_t                   sk_omem_alloc;        /* 0x170   0x4 */
      	int                        sk_sndbuf;            /* 0x174   0x4 */
      	int                        sk_wmem_queued;       /* 0x178   0x4 */
      	refcount_t                 sk_wmem_alloc;        /* 0x17c   0x4 */
      	/* --- cacheline 6 boundary (384 bytes) --- */
      	unsigned long              sk_tsq_flags;         /* 0x180   0x8 */
      	union {
      		struct sk_buff *   sk_send_head;         /* 0x188   0x8 */
      		struct rb_root     tcp_rtx_queue;        /* 0x188   0x8 */
      	};                                               /* 0x188   0x8 */
      	struct sk_buff_head        sk_write_queue;       /* 0x190  0x18 */
      	u32                        sk_dst_pending_confirm; /* 0x1a8   0x4 */
      	u32                        sk_pacing_status;     /* 0x1ac   0x4 */
      	struct page_frag           sk_frag;              /* 0x1b0  0x10 */
      	/* --- cacheline 7 boundary (448 bytes) --- */
      	struct timer_list          sk_timer;             /* 0x1c0  0x28 */
      
      	/* XXX last struct has 4 bytes of padding */
      
      	unsigned long              sk_pacing_rate;       /* 0x1e8   0x8 */
      	atomic_t                   sk_zckey;             /* 0x1f0   0x4 */
      	atomic_t                   sk_tskey;             /* 0x1f4   0x4 */
      	__u8                       __cacheline_group_end__sock_write_tx[0]; /* 0x1f8     0 */
      	__u8                       __cacheline_group_begin__sock_read_tx[0]; /* 0x1f8     0 */
      	unsigned long              sk_max_pacing_rate;   /* 0x1f8   0x8 */
      	/* --- cacheline 8 boundary (512 bytes) --- */
      	long                       sk_sndtimeo;          /* 0x200   0x8 */
      	u32                        sk_priority;          /* 0x208   0x4 */
      	u32                        sk_mark;              /* 0x20c   0x4 */
      	rcu *                      sk_dst_cache;         /* 0x210   0x8 */
      	netdev_features_t          sk_route_caps;        /* 0x218   0x8 */
      	u16                        sk_gso_type;          /* 0x220   0x2 */
      	u16                        sk_gso_max_segs;      /* 0x222   0x2 */
      	unsigned int               sk_gso_max_size;      /* 0x224   0x4 */
      	gfp_t                      sk_allocation;        /* 0x228   0x4 */
      	u32                        sk_txhash;            /* 0x22c   0x4 */
      	u8                         sk_pacing_shift;      /* 0x230   0x1 */
      	bool                       sk_use_task_frag;     /* 0x231   0x1 */
      	__u8                       __cacheline_group_end__sock_read_tx[0]; /* 0x232     0 */
      	u8                         sk_gso_disabled:1;    /* 0x232: 0 0x1 */
      	u8                         sk_kern_sock:1;       /* 0x232:0x1 0x1 */
      	u8                         sk_no_check_tx:1;     /* 0x232:0x2 0x1 */
      	u8                         sk_no_check_rx:1;     /* 0x232:0x3 0x1 */
      
      	/* XXX 4 bits hole, try to pack */
      
      	u8                         sk_shutdown;          /* 0x233   0x1 */
      	u16                        sk_type;              /* 0x234   0x2 */
      	u16                        sk_protocol;          /* 0x236   0x2 */
      	unsigned long              sk_lingertime;        /* 0x238   0x8 */
      	/* --- cacheline 9 boundary (576 bytes) --- */
      	struct proto *             sk_prot_creator;      /* 0x240   0x8 */
      	rwlock_t                   sk_callback_lock;     /* 0x248   0x8 */
      	int                        sk_err_soft;          /* 0x250   0x4 */
      	u32                        sk_ack_backlog;       /* 0x254   0x4 */
      	u32                        sk_max_ack_backlog;   /* 0x258   0x4 */
      	kuid_t                     sk_uid;               /* 0x25c   0x4 */
      	spinlock_t                 sk_peer_lock;         /* 0x260   0x4 */
      	int                        sk_bind_phc;          /* 0x264   0x4 */
      	struct pid *               sk_peer_pid;          /* 0x268   0x8 */
      	const struct cred  *       sk_peer_cred;         /* 0x270   0x8 */
      	ktime_t                    sk_stamp;             /* 0x278   0x8 */
      	/* --- cacheline 10 boundary (640 bytes) --- */
      	int                        sk_disconnects;       /* 0x280   0x4 */
      	u8                         sk_txrehash;          /* 0x284   0x1 */
      	u8                         sk_clockid;           /* 0x285   0x1 */
      	u8                         sk_txtime_deadline_mode:1; /* 0x286: 0 0x1 */
      	u8                         sk_txtime_report_errors:1; /* 0x286:0x1 0x1 */
      	u8                         sk_txtime_unused:6;   /* 0x286:0x2 0x1 */
      
      	/* XXX 1 byte hole, try to pack */
      
      	void *                     sk_user_data;         /* 0x288   0x8 */
      	void *                     sk_security;          /* 0x290   0x8 */
      	struct sock_cgroup_data    sk_cgrp_data;         /* 0x298   0x8 */
      	void                       (*sk_state_change)(struct sock *); /* 0x2a0   0x8 */
      	void                       (*sk_write_space)(struct sock *); /* 0x2a8   0x8 */
      	void                       (*sk_error_report)(struct sock *); /* 0x2b0   0x8 */
      	int                        (*sk_backlog_rcv)(struct sock *, struct sk_buff *); /* 0x2b8   0x8 */
      	/* --- cacheline 11 boundary (704 bytes) --- */
      	void                       (*sk_destruct)(struct sock *); /* 0x2c0   0x8 */
      	rcu *                      sk_reuseport_cb;      /* 0x2c8   0x8 */
      	rcu *                      sk_bpf_storage;       /* 0x2d0   0x8 */
      	struct callback_head       sk_rcu __attribute__((__aligned__(8))); /* 0x2d8  0x10 */
      	netns_tracker              ns_tracker;           /* 0x2e8   0x8 */
      
      	/* size: 752, cachelines: 12, members: 105 */
      	/* sum members: 749, holes: 1, sum holes: 1 */
      	/* sum bitfield members: 12 bits, bit holes: 1, sum bit holes: 4 bits */
      	/* paddings: 1, sum paddings: 4 */
      	/* forced alignments: 1 */
      	/* last cacheline: 48 bytes */
      };
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Paolo Abeni <pabeni@redhat.com>
      Link: https://lore.kernel.org/r/20240216162006.2342759-1-edumazet@google.com
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    • net: tcp: Remove redundant initialization of variable len · 465c1abc
      Colin Ian King authored
      The variable len is initialized with a value that is never read:
      an if statement assigns it in both of its branches. The
      initialization is therefore redundant and can be removed.
      
      Cleans up clang scan build warning:
      net/ipv4/tcp_ao.c:512:11: warning: Value stored to 'len' during its
      initialization is never read [deadcode.DeadStores]
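
      The pattern being removed looks roughly like this (illustrative
      only; the struct names are placeholders, not the actual tcp_ao.c
      code):
      
      	/* before: the initializer is a dead store, since both
      	 * branches overwrite len before it is ever read
      	 */
      	int len = sizeof(struct foo_opt);
      
      	if (get_info)
      		len = sizeof(struct foo_info);
      	else
      		len = sizeof(struct foo_opt);
      
      	/* after: a plain declaration is enough */
      	int len;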
      Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
      Reviewed-by: Dmitry Safonov <0x7f454c46@gmail.com>
      Link: https://lore.kernel.org/r/20240216125443.2107244-1-colin.i.king@gmail.com
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    • can: raw: raw_getsockopt(): reduce scope of err · 00bf80c4
      Marc Kleine-Budde authored
      Reduce the scope of the variable "err" to the individual cases. This
      avoids setting "err" in the mistaken belief that it will be
      evaluated later.
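
      An illustrative before/after of the pattern (simplified, not the
      exact raw_getsockopt() code):
      
      	/* before: one function-scope err, easy to set and forget */
      	int err = 0;
      
      	switch (optname) {
      	case CAN_RAW_FILTER:
      		/* ... may set err and rely on the final return ... */
      		break;
      	}
      	return err;
      
      	/* after: err exists only where it is consumed */
      	switch (optname) {
      	case CAN_RAW_FILTER: {
      		int err = 0;
      
      		/* ... */
      		return err;
      	}
      	}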
      Reviewed-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
      Link: https://lore.kernel.org/all/20240220-raw-setsockopt-v1-1-7d34cb1377fc@pengutronix.de
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
    • Merge branch 'abstract-page-from-net-stack' · bb18fc7a
      Paolo Abeni authored
      Mina Almasry says:
      
      ====================
      Abstract page from net stack
      
      This series is a prerequisite to the devmem TCP series. For a full
      snapshot of the code which includes these changes, feel free to check:
      
      https://github.com/mina/linux/commits/tcpdevmem-rfcv5/
      
      Currently these components in the net stack use the struct page
      directly:
      
      1. Drivers.
      2. Page pool.
      3. skb_frag_t.
      
      To add support for new (non-struct-page) memory types to the net
      stack, we must first abstract the current memory type.
      
      Originally the plan was to reuse struct page* for the new memory types,
      and to set the LSB on the page* to indicate it's not really a page.
      However, for safe compiler type checking we need to introduce a new type.
      
      struct netmem is introduced to abstract the underlying memory type.
      Currently it's a no-op abstraction that is always a struct page underneath.
      In parallel there is an ongoing effort to add support for devmem to
      the net stack:
      
      https://lore.kernel.org/netdev/20231208005250.2910004-1-almasrymina@google.com/
      
      Cc: Jason Gunthorpe <jgg@nvidia.com>
      Cc: Christian König <christian.koenig@amd.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Yunsheng Lin <linyunsheng@huawei.com>
      Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
      ====================
      
      Link: https://lore.kernel.org/r/20240214223405.1972973-1-almasrymina@google.com
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    • net: add netmem to skb_frag_t · 21d2e673
      Mina Almasry authored
      Use struct netmem* instead of page in skb_frag_t. Currently struct
      netmem* is always a struct page underneath, but the abstraction
      allows efforts to add support for skb frags not backed by pages.
      
      There is unfortunately one instance in kcm where the skb_frag_t is
      assumed to be exactly a bio_vec. For this case, WARN_ON_ONCE and
      return an error before doing a cast.
      
      Add skb[_frag]_fill_netmem_*() and skb_add_rx_frag_netmem() helpers so
      that the API can be used to create netmem skbs.
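
      Roughly, skb_frag_t stops being a typedef of struct bio_vec and
      carries a netmem_ref instead (a sketch; the helper details may
      differ from the actual patch):
      
      	typedef struct skb_frag {
      		netmem_ref	netmem;
      		unsigned int	len;
      		unsigned int	offset;
      	} skb_frag_t;
      
      	static inline void skb_frag_fill_netmem_desc(skb_frag_t *frag,
      						     netmem_ref netmem,
      						     int off, int size)
      	{
      		frag->netmem = netmem;
      		frag->offset = off;
      		skb_frag_size_set(frag, size);
      	}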
      Signed-off-by: Mina Almasry <almasrymina@google.com>
      Acked-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    • net: introduce abstraction for network memory · 18ddbf5c
      Mina Almasry authored
      Add the netmem_ref type, an abstraction for network memory.
      
      To add support for new memory types to the net stack, we must first
      abstract the current memory type. Currently parts of the net stack
      use struct page directly:
      
      - page_pool
      - drivers
      - skb_frag_t
      
      Originally the plan was to reuse struct page* for the new memory types,
      and to set the LSB on the page* to indicate it's not really a page.
      However, for compiler type checking we need to introduce a new type.
      
      netmem_ref is introduced to abstract the underlying memory type.
      Currently it's a no-op abstraction that is always a struct page
      underneath. In parallel there is an ongoing effort to add support
      for devmem to the net stack:
      
      https://lore.kernel.org/netdev/20231208005250.2910004-1-almasrymina@google.com/
      
      A netmem_ref can point to different underlying memory types, with
      the low bits set to indicate the memory type. Helpers are provided
      to convert a netmem_ref to the underlying memory type (currently
      only struct page). In the devmem series, further helpers are
      provided so that calling code can use netmem without worrying about
      the underlying memory type unless absolutely necessary.
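
      The abstraction itself is tiny; the type and its struct page
      converters look roughly like this (a sketch of the idea):
      
      	/* Opaque handle; currently always a struct page underneath,
      	 * with the low bits reserved to tag other memory types later.
      	 */
      	typedef unsigned long __bitwise netmem_ref;
      
      	static inline struct page *netmem_to_page(netmem_ref netmem)
      	{
      		return (__force struct page *)netmem;
      	}
      
      	static inline netmem_ref page_to_netmem(struct page *page)
      	{
      		return (__force netmem_ref)page;
      	}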
      Reviewed-by: Shakeel Butt <shakeelb@google.com>
      Signed-off-by: Mina Almasry <almasrymina@google.com>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    • can: raw: fix getsockopt() for new CAN_RAW_XL_VCID_OPTS · c8fba5d6
      Oliver Hartkopp authored
      The code for the CAN_RAW_XL_VCID_OPTS getsockopt() was incompletely
      adapted from the CAN_RAW_FILTER getsockopt().
      
      Add the missing put_user() and return statements.
      
      Flagged by Smatch.
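
      The shape of the fix, mirroring the CAN_RAW_FILTER case (simplified;
      the ro->raw_vcid_opts field name is an assumption):
      
      	case CAN_RAW_XL_VCID_OPTS:
      		/* len was clamped to sizeof(ro->raw_vcid_opts) above */
      		if (copy_to_user(optval, &ro->raw_vcid_opts, len))
      			return -EFAULT;
      		/* previously missing: report the length and return */
      		if (put_user(len, optlen))
      			return -EFAULT;
      		return 0;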
      
      Fixes: c83c22ec ("can: canxl: add virtual CAN network identifier support")
      Reported-by: Simon Horman <horms@kernel.org>
      Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
      Link: https://lore.kernel.org/all/20240219200021.12113-1-socketcan@hartkopp.net
      Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
  3. 19 Feb, 2024 11 commits
  4. 18 Feb, 2024 2 commits