- 03 Sep, 2022 10 commits
-
-
Zhengchao Shao authored
If fq_codel_init() fails, qdisc_create() invokes fq_codel_destroy() to clear resources. Therefore, remove redundant resource cleanup in fq_codel_init(). Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Wei Fang authored
The current driver supports stop mode by calling the machine API. This patch adds DTS support for setting the GPR register for the stop request. imx8mq enters/exits stop mode by setting a GPR bit, which can be accessed by the A core. imx8qm enters/exits stop mode by calling IMX_SC IPC APIs that communicate with the M core IPC service, and the M core sets the related GPR bit in the end. Signed-off-by: Wei Fang <wei.fang@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
André Apitzsch authored
The Lenovo USB-C Travel Hub supports MAC passthrough. Signed-off-by: André Apitzsch <git@apitzsch.eu> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Gustavo A. R. Silva authored
We now have a cleaner way to keep compatibility with user-space (a.k.a. not breaking it) when we need to keep in place a one-element array (for its use in user-space) together with a flexible-array member (for its use in kernel-space) without making it hard to read at the source level. This is through the use of the new __DECLARE_FLEX_ARRAY() helper macro. The size and memory layout of the structure are preserved after the changes. See below. Before changes: $ pahole -C ip_msfilter net/ipv4/igmp.o struct ip_msfilter { union { struct { __be32 imsf_multiaddr_aux; /* 0 4 */ __be32 imsf_interface_aux; /* 4 4 */ __u32 imsf_fmode_aux; /* 8 4 */ __u32 imsf_numsrc_aux; /* 12 4 */ __be32 imsf_slist[1]; /* 16 4 */ }; /* 0 20 */ struct { __be32 imsf_multiaddr; /* 0 4 */ __be32 imsf_interface; /* 4 4 */ __u32 imsf_fmode; /* 8 4 */ __u32 imsf_numsrc; /* 12 4 */ __be32 imsf_slist_flex[0]; /* 16 0 */ }; /* 0 16 */ }; /* 0 20 */ /* size: 20, cachelines: 1, members: 1 */ /* last cacheline: 20 bytes */ }; After changes: $ pahole -C ip_msfilter net/ipv4/igmp.o struct ip_msfilter { __be32 imsf_multiaddr; /* 0 4 */ __be32 imsf_interface; /* 4 4 */ __u32 imsf_fmode; /* 8 4 */ __u32 imsf_numsrc; /* 12 4 */ union { __be32 imsf_slist[1]; /* 16 4 */ struct { struct { } __empty_imsf_slist_flex; /* 16 0 */ __be32 imsf_slist_flex[0]; /* 16 0 */ }; /* 16 0 */ }; /* 16 4 */ /* size: 20, cachelines: 1, members: 5 */ /* last cacheline: 20 bytes */ }; In the past, we had to duplicate the whole original structure within a union, and update the names of all the members. Now, we just need to declare the flexible-array member to be used in kernel-space through the __DECLARE_FLEX_ARRAY() helper together with the one-element array, within a union. This makes the source code cleaner and easier to read. Link: https://github.com/KSPP/linux/issues/193 Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
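For reference, here is a minimal sketch of the source-level shape this conversion gives the structure (member names are taken from the pahole output above; the exact upstream declaration may differ in detail):

	#include <linux/stddef.h>	/* __DECLARE_FLEX_ARRAY() */
	#include <linux/types.h>

	struct ip_msfilter {
		__be32	imsf_multiaddr;
		__be32	imsf_interface;
		__u32	imsf_fmode;
		__u32	imsf_numsrc;
		union {
			/* one-element array kept for user-space compatibility */
			__be32	imsf_slist[1];
			/* flexible-array member for kernel-space use */
			__DECLARE_FLEX_ARRAY(__be32, imsf_slist_flex);
		};
	};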
-
Zhengchao Shao authored
tcf_qevent_parse_block_index() never returns a zero block_index. Therefore, it is unnecessary to check the value of block_index in tcf_qevent_init(). Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com> Link: https://lore.kernel.org/r/20220901011617.14105-1-shaozhengchao@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
GUO Zihua authored
Since Linux now supports CFI, it is a good idea to fix mismatched return types in hook implementations; otherwise they might get caught by CFI and cause a panic. ltq_etop_tx() returns either NETDEV_TX_BUSY or NETDEV_TX_OK, so change the return type to netdev_tx_t directly. Signed-off-by: GUO Zihua <guozihua@huawei.com> Link: https://lore.kernel.org/r/20220902081521.59867-1-guozihua@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
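As a rough illustration of the pattern this and the following patches apply (driver internals elided; the condition shown is only a placeholder), the fix amounts to declaring the xmit hook with the prototype struct net_device_ops expects, so an indirect call checked by CFI sees matching function types:

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	/* Before: static int ltq_etop_tx(struct sk_buff *skb, struct net_device *dev) */
	static netdev_tx_t ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
	{
		if (netif_queue_stopped(dev))	/* placeholder "ring full" check */
			return NETDEV_TX_BUSY;

		/* ... map buffers and post the descriptor ... */
		return NETDEV_TX_OK;
	}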
-
GUO Zihua authored
Since Linux now supports CFI, it is a good idea to fix mismatched return types in hook implementations; otherwise they might get caught by CFI and cause a panic. spl2sw_ethernet_start_xmit() returns either NETDEV_TX_BUSY or NETDEV_TX_OK, so change the return type to netdev_tx_t directly. Signed-off-by: GUO Zihua <guozihua@huawei.com> Link: https://lore.kernel.org/r/20220902081550.60095-1-guozihua@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
GUO Zihua authored
Since Linux now supports CFI, it is a good idea to fix mismatched return types in hook implementations; otherwise they might get caught by CFI and cause a panic. eth_xmit() returns either NETDEV_TX_BUSY or NETDEV_TX_OK, so change the return type to netdev_tx_t directly. Signed-off-by: GUO Zihua <guozihua@huawei.com> Link: https://lore.kernel.org/r/20220902081612.60405-1-guozihua@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
GUO Zihua authored
Since Linux now supports CFI, it is a good idea to fix mismatched return types in hook implementations; otherwise they might get caught by CFI and cause a panic. bcm4908_enet_start_xmit() returns either NETDEV_TX_BUSY or NETDEV_TX_OK, so change the return type to netdev_tx_t directly. Signed-off-by: GUO Zihua <guozihua@huawei.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20220902075407.52358-1-guozihua@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Gal Pressman authored
When CONFIG_IEEE802154_NL802154_EXPERIMENTAL is disabled, NL802154_CMD_DEL_SEC_LEVEL is undefined and results in a compilation error: net/ieee802154/nl802154.c:2503:19: error: 'NL802154_CMD_DEL_SEC_LEVEL' undeclared here (not in a function); did you mean 'NL802154_CMD_SET_CCA_ED_LEVEL'? 2503 | .resv_start_op = NL802154_CMD_DEL_SEC_LEVEL + 1, | ^~~~~~~~~~~~~~~~~~~~~~~~~~ | NL802154_CMD_SET_CCA_ED_LEVEL Unhide the experimental commands; having them defined in an enum makes no difference. Fixes: 9c5d03d3 ("genetlink: start to validate reserved header bytes") Signed-off-by: Gal Pressman <gal@nvidia.com> Acked-by: Stefan Schmidt <stefan@datenfreihafen.org> Tested-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com> Link: https://lore.kernel.org/r/20220902030620.2737091-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 02 Sep, 2022 17 commits
-
-
Jakub Kicinski authored
All callers are now gone. Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
Add some documentation for netdev_tx_sent_queue() and netdev_tx_completed_queue(). Stating that netdev_tx_completed_queue() must be called once per TX completion round is apparently not obvious to everybody. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
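A hedged sketch of the usage the new documentation spells out (the mydrv_* names and the single-queue setup are hypothetical):

	#include <linux/netdevice.h>
	#include <linux/skbuff.h>

	static netdev_tx_t mydrv_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

		/* ... post skb to the hardware ring ... */
		netdev_tx_sent_queue(txq, skb->len);	/* account bytes as they are queued */
		return NETDEV_TX_OK;
	}

	static void mydrv_tx_complete(struct net_device *dev)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);
		unsigned int pkts = 0, bytes = 0;

		/* ... walk the ring, free completed skbs, accumulate pkts/bytes ... */

		/* Report the totals exactly once per TX completion round. */
		netdev_tx_completed_queue(txq, pkts, bytes);
	}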
-
David S. Miller authored
Alex Elder says: ==================== net: ipa: use IDs to track transaction state This series is the first of three groups of changes that simplify the way the IPA driver tracks the state of its transactions. Each GSI channel has a fixed number of transactions allocated at initialization time. The number allocated matches the number of TREs in the transfer ring associated with the channel. This is because the transfer ring limits the number of transfers that can ever be underway, and in the worst case, each transaction represents a single TRE. Transactions go through various states during their lifetime. Currently a set of lists keeps track of which transactions are in each state. Initially, all transactions are free. An allocated transaction is placed on the allocated list. Once an allocated transaction is committed, it is moved from the allocated to the committed list. When a committed transaction is sent to hardware (via a doorbell) it is moved to the pending list. When hardware signals that some work has completed, transactions are moved to the completed list. Finally, when a completed transaction is polled it's moved to the polled list before being removed when it becomes free. Changing a transaction's state thus normally involves manipulating two lists, and to prevent corruption a spinlock is held while the lists are updated. Transactions move through their states in a well-defined sequence though, and they do so strictly in order. So transaction 0 is always allocated before transaction 1; transaction 0 is always committed before transaction 1; and so on, through completion, polling, and becoming free. Because of this, it's sufficient to just keep track of which transaction is the first in each state. The rest of the transactions in a given state can be derived from the first transaction in an "adjacent" state. As a result, we can track the state of all transactions with a set of indexes, and can update these without the need for a spinlock. This first group of patches just defines the set of indexes that will be used for this new way of tracking transaction state. Two more groups of patches will follow. I've broken the 17 patches into these three groups to facilitate review. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
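A conceptual sketch of the scheme the cover letter describes, not the actual driver code (field names are illustrative): because transactions advance through their states strictly in order, one 16-bit ID per state boundary is enough, and each state's membership and count fall out of simple unsigned arithmetic, with no list manipulation or spinlock.

	#include <linux/types.h>

	struct trans_ids_sketch {
		u16 free_id;		/* next transaction to be allocated */
		u16 allocated_id;	/* first allocated, not yet committed */
		u16 committed_id;	/* first committed, not yet sent (doorbell) */
		u16 pending_id;		/* first sent to hardware, not yet completed */
		u16 completed_id;	/* first completed, not yet polled */
		u16 polled_id;		/* first polled, not yet freed */
	};

	/* For example, the transactions currently pending are those with
	 * pending_id <= id < committed_id, so their count is simply: */
	static u16 trans_pending_count(const struct trans_ids_sketch *t)
	{
		return t->committed_id - t->pending_id;	/* 16-bit wraparound works out */
	}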
-
Alex Elder authored
Add a transaction ID to track the first element in the transaction array that has been polled. Advance the ID when we are releasing a transaction. Temporarily add warnings that verify that the first polled transaction tracked by the ID matches the first element on the polled list, both when polling and freeing. Remove the temporary warnings added by the previous commit. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Add a transaction ID field to track the first element in the transaction array that has completed but has not yet been polled. Advance the ID when we are processing a transaction in the NAPI polling loop (where completed transactions become polled). Temporarily add warnings that verify that the first completed transaction tracked by the ID matches the first element on the completed list, both when pending and completing. Remove the temporary warnings added by the previous commit. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Add a transaction ID field to track the first element in the transaction array that is pending (sent to hardware) but not yet complete. Advance the ID when a completion event for a channel indicates that transactions have completed. Temporarily add warnings that verify that the first pending transaction tracked by the ID matches the first element on the pending list, both when pending and completing, as well as when resetting the channel. Remove the temporary warnings added by the previous commit. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Add a transaction ID field to track the first element in a channel's transaction array that has been committed, but not yet passed to the hardware. Advance the ID when the hardware is notified via doorbell that TREs from a transaction are ready for consumption. Temporarily add warnings that verify that the first committed transaction tracked by the ID matches the first element on the committed list, both when committing and pending (at doorbell). Remove the temporary warnings added by the previous commit. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Transactions for a channel are now managed in an array, with a free transaction ID indicating which is the next one free. Add another transaction ID field to track the first element in the array that has been allocated. Advance it when a transaction is committed (because that is when that transaction leaves allocated state). Temporarily add warnings that verify that the first allocated transaction tracked by the ID matches the first element on the allocated list, both when allocating and committing a transaction. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alex Elder authored
Transactions are always allocated one at a time. The maximum number of them we could ever need occurs if each TRE is assigned to a transaction. So a channel requires no more transactions than the number of TREs in its transfer ring. That number is known to be a power-of-2 less than 65536. The transaction pool abstraction is used for other things, but for transactions we can use a simple array of transaction structures, and use a free index to indicate which entry in the array is the next one free for allocation. By having the number of elements in the array be a power-of-2, we can use an ever-incrementing 16-bit free index, and use it modulo the array size. Distinguish a "trans_id" (whose value can exceed the number of entries in the transaction array) from a "trans_index" (which is less than the number of entries). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
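A minimal sketch of the allocation scheme this patch describes (structure and function names are illustrative, not the driver's): the array size is a power of 2, so an ever-incrementing 16-bit ID reduced modulo that size always lands on a valid slot, even across the 16-bit wrap.

	#include <linux/types.h>

	struct gsi_trans;			/* per-transaction state, elided here */

	struct trans_array_sketch {
		struct gsi_trans *trans;	/* trans_count entries, power of 2 */
		u16 trans_count;
		u16 free_id;			/* increments forever, wraps at 2^16 */
	};

	/* Convert a trans_id (which may exceed trans_count) to a trans_index. */
	static u16 trans_alloc_index(struct trans_array_sketch *a)
	{
		u16 trans_index = a->free_id % a->trans_count;

		a->free_id++;			/* the modulo keeps the index in range */
		return trans_index;
	}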
-
David S. Miller authored
Michael Walle says: ==================== net: lan966x: make reset optional This is the remaining part of the reset rework on the LAN966x targeting the netdev tree. The former series can be found at: https://lore.kernel.org/lkml/20220826115607.1148489-1-michael@walle.cc/ ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Michael Walle authored
There is no dedicated reset for just the switch core. The reset used up until now is more of a global reset, resetting almost the whole SoC and causing spurious errors by doing so. Make it possible to handle the reset elsewhere and make the reset optional. Signed-off-by: Michael Walle <michael@walle.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
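A hedged sketch of how an optional reset is typically expressed with the kernel reset API (the "switch" consumer name and the function name are illustrative): with the *_optional_* getter, a missing reset yields NULL rather than an error, and the reset_control_*() calls then become no-ops.

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/reset.h>

	static int lan966x_optional_reset_sketch(struct device *dev)
	{
		struct reset_control *rst;

		/* Returns NULL (not an error) if no reset is described in the DT. */
		rst = devm_reset_control_get_optional_shared(dev, "switch");
		if (IS_ERR(rst))
			return PTR_ERR(rst);

		return reset_control_reset(rst);	/* no-op when rst is NULL */
	}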
-
Michael Walle authored
Make the reset line optional. It turns out there is no dedicated reset for the switch; instead, the reset used up until now was kind of a global reset. This is now handled elsewhere, so don't require a reset. Signed-off-by: Michael Walle <michael@walle.cc> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This is a followup of commit c67b8555 ("ipv6: tcp: send consistent autoflowlabel in TIME_WAIT state"), but for SYN_RECV state. In some cases, TCP sends a challenge ACK on behalf of a SYN_RECV request. When this happens, we want to use the flow label that was used when the prior SYNACK packet was sent, instead of another one. After this patch, the following packetdrill test passes: 0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3 +0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0 +0 bind(3, ..., ...) = 0 +0 listen(3, 1) = 0 +.2 < S 0:0(0) win 32792 <mss 1000,sackOK,nop,nop,nop,wscale 7> +0 > (flowlabel 0x11) S. 0:0(0) ack 1 <...> // Test if a challenge ack is properly sent (same flowlabel as prior SYNACK) +.01 < . 4000000000:4000000000(0) ack 1 win 320 +0 > (flowlabel 0x11) . 1:1(0) ack 1 Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20220831203729.458000-1-eric.dumazet@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Vladimir Oltean authored
This has been validated on the Ocelot/Felix switch family (NXP LS1028A) and should be relevant to any switch driver that offloads the tc-flower and/or tc-matchall actions trap, drop, accept, mirred, for which DSA has operations. TEST: gact drop and ok (skip_hw) [ OK ] TEST: mirred egress flower redirect (skip_hw) [ OK ] TEST: mirred egress flower mirror (skip_hw) [ OK ] TEST: mirred egress matchall mirror (skip_hw) [ OK ] TEST: mirred_egress_to_ingress (skip_hw) [ OK ] TEST: gact drop and ok (skip_sw) [ OK ] TEST: mirred egress flower redirect (skip_sw) [ OK ] TEST: mirred egress flower mirror (skip_sw) [ OK ] TEST: mirred egress matchall mirror (skip_sw) [ OK ] TEST: trap (skip_sw) [ OK ] TEST: mirred_egress_to_ingress (skip_sw) [ OK ] Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Link: https://lore.kernel.org/r/20220831170839.931184-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jinpeng Cui authored
Return the value from nsim_dev_reload_create() directly instead of storing it in the redundant variable ret. Reported-by: Zeal Robot <zealci@zte.com.cn> Signed-off-by: Jinpeng Cui <cui.jinpeng2@zte.com.cn> Link: https://lore.kernel.org/r/20220831154329.305372-1-cui.jinpeng2@zte.com.cn Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Juhee Kang authored
The open-coded check dev->operstate == IF_OPER_UP || dev->operstate == IF_OPER_UNKNOWN is now available as a helper function, netif_oper_up(), in netdev.h. Thus, replace the open code with netif_oper_up(). This patch doesn't change the logic. Signed-off-by: Juhee Kang <claudiajkang@gmail.com> Link: https://lore.kernel.org/r/20220831125845.1333-1-claudiajkang@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
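For context, a hedged illustration of the substitution (do_something() is a hypothetical placeholder):

	#include <linux/netdevice.h>

	static void do_something(struct net_device *dev);	/* hypothetical */

	static void example(struct net_device *dev)
	{
		/* Before: open-coded operstate check. */
		if (dev->operstate == IF_OPER_UP || dev->operstate == IF_OPER_UNKNOWN)
			do_something(dev);

		/* After: the same condition via the helper. */
		if (netif_oper_up(dev))
			do_something(dev);
	}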
-
Zhengchao Shao authored
etf_enable_offload() is only called when q->offload is false in etf_init(), so remove the redundant q->offload check in etf_enable_offload(). Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com> Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Link: https://lore.kernel.org/r/20220831092919.146149-1-shaozhengchao@huawei.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 01 Sep, 2022 13 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Jakub Kicinski authored
tools/testing/selftests/net/.gitignore: sort the net-next version and use it. Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Linus Torvalds authored
Pull networking fixes from Paolo Abeni: "Including fixes from bluetooth, bpf and wireless. Current release - regressions: - bpf: - fix wrong last sg check in sk_msg_recvmsg() - fix kernel BUG in purge_effective_progs() - mac80211: - fix possible leak in ieee80211_tx_control_port() - potential NULL dereference in ieee80211_tx_control_port() Current release - new code bugs: - nfp: fix the access to management firmware hanging Previous releases - regressions: - ip: fix triggering of 'icmp redirect' - sched: tbf: don't call qdisc_put() while holding tree lock - bpf: fix corrupted packets for XDP_SHARED_UMEM - bluetooth: hci_sync: fix suspend performance regression - micrel: fix probe failure Previous releases - always broken: - tcp: make global challenge ack rate limitation per net-ns and default disabled - tg3: fix potential hang-up on system reboot - mac802154: fix reception for no-daddr packets Misc: - r8152: add PID for the lenovo onelink+ dock" * tag 'net-6.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (56 commits) net/smc: Remove redundant refcount increase Revert "sch_cake: Return __NET_XMIT_STOLEN when consuming enqueued skb" tcp: make global challenge ack rate limitation per net-ns and default disabled tcp: annotate data-race around challenge_timestamp net: dsa: hellcreek: Print warning only once ip: fix triggering of 'icmp redirect' sch_cake: Return __NET_XMIT_STOLEN when consuming enqueued skb selftests: net: sort .gitignore file Documentation: networking: correct possessive "its" kcm: fix strp_init() order and cleanup mlxbf_gige: compute MDIO period based on i1clk ethernet: rocker: fix sleep in atomic context bug in neigh_timer_handler net: lan966x: improve error handle in lan966x_fdma_rx_get_frame() nfp: fix the access to management firmware hanging net: phy: micrel: Make the GPIO to be non-exclusive net: virtio_net: fix notification coalescing comments net/sched: fix netdevice reference leaks in attach_default_qdiscs() net: sched: tbf: don't call qdisc_put() while holding tree lock net: Use u64_stats_fetch_begin_irq() for stats fetch. net: dsa: xrs700x: Use irqsave variant for u64 stats update ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Linus Torvalds authored
Pull slab fix from Vlastimil Babka: - A fix from Waiman Long to avoid a theoretical deadlock reported by lockdep. * tag 'slab-for-6.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab: mm/slab_common: Deleting kobject in kmem_cache_destroy() without holding slab_mutex/cpu_hotplug_lock
-
git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Linus Torvalds authored
Pull sound fixes from Takashi Iwai: "Just handful changes at this time. The only major change is the regression fix about the x86 WC-page buffer allocation. The rest are trivial data-race fixes for ALSA sequencer core, the possible out-of-bounds access fixes in the new ALSA control hash code, and a few device-specific workarounds and fixes" * tag 'sound-6.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: ALSA: usb-audio: Add quirk for LH Labs Geek Out HD Audio 1V5 ALSA: hda/realtek: Add speaker AMP init for Samsung laptops with ALC298 ALSA: control: Re-order bounds checking in get_ctl_id_hash() ALSA: control: Fix an out-of-bounds bug in get_ctl_id_hash() ALSA: hda: intel-nhlt: Correct the handling of fmt_config flexible array ALSA: seq: Fix data-race at module auto-loading ALSA: seq: oss: Fix data-race for max_midi_devs access ALSA: memalloc: Revive x86-specific WC page allocations again
-
Zhengchao Shao authored
kfree(), invoked by gred_destroy_vq(), already checks whether the input pointer is NULL. Therefore, gred_destroy() doesn't need to check table->tab. Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com> Link: https://lore.kernel.org/r/20220831041452.33026-1-shaozhengchao@huawei.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Waiman Long authored
mm/slab_common: Deleting kobject in kmem_cache_destroy() without holding slab_mutex/cpu_hotplug_lock A circular locking problem is reported by lockdep due to the following circular locking dependency. +--> cpu_hotplug_lock --> slab_mutex --> kn->active --+ | | +-----------------------------------------------------+ The forward cpu_hotplug_lock ==> slab_mutex ==> kn->active dependency happens in kmem_cache_destroy(): cpus_read_lock(); mutex_lock(&slab_mutex); ==> sysfs_slab_unlink() ==> kobject_del() ==> kernfs_remove() ==> __kernfs_remove() ==> kernfs_drain(): rwsem_acquire(&kn->dep_map, ...); The backward kn->active ==> cpu_hotplug_lock dependency happens in kernfs_fop_write_iter(): kernfs_get_active(); ==> slab_attr_store() ==> cpu_partial_store() ==> flush_all(): cpus_read_lock() One way to break this circular locking chain is to avoid holding cpu_hotplug_lock and slab_mutex while deleting the kobject in sysfs_slab_unlink() which should be equivalent to doing a write_lock and write_unlock pair of the kn->active virtual lock. Since the kobject structures are not protected by slab_mutex or the cpu_hotplug_lock, we can certainly release those locks before doing the delete operation. Move sysfs_slab_unlink() and sysfs_slab_release() to the newly created kmem_cache_release() and call it outside the slab_mutex & cpu_hotplug_lock critical sections. There will be a slight delay in the deletion of sysfs files if kmem_cache_release() is called indirectly from a work function. Fixes: 5a836bf6 ("mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context") Signed-off-by: Waiman Long <longman@redhat.com> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev> Acked-by: David Rientjes <rientjes@google.com> Link: https://lore.kernel.org/all/YwOImVd+nRUsSAga@hyeyoo/ Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
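A hedged sketch of the restructuring described above (the surrounding shutdown logic is elided, and the real code may defer kmem_cache_release() through RCU or a work function, as the last paragraph notes):

	static void kmem_cache_release(struct kmem_cache *s)
	{
		/* kobject removal (which drains kn->active) now runs with neither
		 * slab_mutex nor cpu_hotplug_lock held, breaking the lock cycle. */
		sysfs_slab_unlink(s);
		sysfs_slab_release(s);
	}

	void kmem_cache_destroy(struct kmem_cache *s)
	{
		cpus_read_lock();
		mutex_lock(&slab_mutex);

		/* ... refcount drop and shutdown work that needs the locks ... */

		mutex_unlock(&slab_mutex);
		cpus_read_unlock();

		kmem_cache_release(s);		/* outside both critical sections */
	}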
-
Paolo Abeni authored
Sebastian Reichel says: ==================== RK3588 Ethernet Support This adds ethernet support for the RK3588(s) SoCs. Changes since PATCHv2: * Rebased to v6.0-rc1 * Wrap DT bindings additions at 80 characters * Add Acked-by from Krzysztof Changes since PATCHv1: * Drop first patch (not required) and rebase second patch accordingly * Rename DT property rockchip,php_grf -> rockchip,php-grf ==================== Link: https://lore.kernel.org/r/20220830154559.61506-1-sebastian.reichel@collabora.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Sebastian Reichel authored
Add compatible string for RK3588 gmac, which is similar to the RK3568 one, but needs another syscon device for clock selection. Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
David Wu authored
Add constants and callback functions for the dwmac on RK3588 soc. As can be seen, the base structure is the same, only registers and the bits in them moved slightly. Signed-off-by: David Wu <david.wu@rock-chips.com> [rebase, squash fixes] Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Yacan Liu authored
For passive connections, the refcount increment has been done in smc_clcsock_accept()-->smc_sock_alloc(). Fixes: 3b2dec26 ("net/smc: restructure client and server code in af_smc") Signed-off-by: Yacan Liu <liuyacan@corp.netease.com> Reviewed-by: Tony Lu <tonylu@linux.alibaba.com> Link: https://lore.kernel.org/r/20220830152314.838736-1-liuyacan@corp.netease.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Suman Ghosh authored
As of now, all transmit queues transmit packets out of the same scheduler queue hierarchy. Due to this, PFC frames sent by the peer are not handled properly: either all transmit queues are backpressured or none are. To fix this, when the user enables PFC for a given priority, map the relevant transmit queue to a different scheduler queue hierarchy, so that backpressure is applied only to the traffic egressing out of that TXQ. Signed-off-by: Suman Ghosh <sumang@marvell.com> Link: https://lore.kernel.org/r/20220830120304.158060-1-sumang@marvell.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Zhengchao Shao authored
Currently, the change function can be called in two ways. One way is from qdisc_change(), which ensures tca[TCA_OPTIONS] is not empty before calling it. The other way is from .init(), which also checks the opt parameter before calling the change function. Therefore, there is no need to check the input parameter opt in the change function. Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com> Acked-by: Toke Høiland-Jørgensen <toke@toke.dk> Link: https://lore.kernel.org/r/20220829071219.208646-1-shaozhengchao@huawei.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jakub Kicinski authored
This reverts commit 90fabae8. The patch was applied hastily; revert it and let the v2 be reviewed. Fixes: 90fabae8 ("sch_cake: Return __NET_XMIT_STOLEN when consuming enqueued skb") Link: https://lore.kernel.org/all/87wnao2ha3.fsf@toke.dk/ Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-