- 30 Apr, 2024 5 commits
-
-
Jakub Kicinski authored
Russell King says: ==================== net: dsa: microchip: use phylink_mac_ops for ksz driver This four-patch series switches the Microchip KSZ DSA driver to use phylink_mac_ops support, and for this one we go a little further than a simple conversion. This driver has four distinct cases: lan937x, ksz9477, ksz8 and ksz8830. Three of these cases are handled by shimming the existing DSA calls through ksz_dev_ops, and the final case is handled through a conditional in ksz_phylink_mac_config(). These can all be handled with separate phylink_mac_ops. To get there, we do a progressive conversion. Patch 1 removes ksz_dev_ops' phylink_mac_config() method, which is not populated in any of the arrays and is thus redundant. Patch 2 switches the driver to use a common set of phylink_mac_ops for all cases, doing the simple conversion to avoid the DSA shim. Patch 3 pushes the phylink_mac_ops down to the first three classes (lan937x, ksz9477, ksz8), adding an appropriate pointer to the phylink_mac_ops to struct ksz_chip_data and using that to populate DSA's ds->phylink_mac_ops pointer. The difference between each of these is the mac_link_up() method; mac_config() and mac_link_down() remain common between them at this stage. Patch 4 splits out ksz8830, which needs different mac_config() handling, and thus means we have a difference in mac_config() methods between the now four phylink_mac_ops structures. Build tested only, with the additional -Wunused-const-variable flag. ==================== Link: https://lore.kernel.org/r/ZivP/R1IwKEPb5T6@shell.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
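For context, the end state described above is one phylink_mac_ops table per chip family, referenced from the chip data and handed to DSA's ds->phylink_mac_ops instead of the old per-call shim. A minimal sketch, not the exact driver code; the handler names below mirror the families mentioned in the cover letter and the declarations exist only to keep the sketch self-contained:

```c
/* Illustrative sketch only - not the exact microchip/ksz driver code. */
#include <linux/phylink.h>
#include <net/dsa.h>

/* Shared config/link-down handlers plus a per-family link-up handler
 * (declared here only so the sketch is self-contained).
 */
void ksz_phylink_mac_config(struct phylink_config *config, unsigned int mode,
			    const struct phylink_link_state *state);
void ksz_phylink_mac_link_down(struct phylink_config *config,
			       unsigned int mode, phy_interface_t interface);
void ksz9477_phylink_mac_link_up(struct phylink_config *config,
				 struct phy_device *phydev, unsigned int mode,
				 phy_interface_t interface, int speed,
				 int duplex, bool tx_pause, bool rx_pause);

static const struct phylink_mac_ops ksz9477_phylink_mac_ops = {
	.mac_config	= ksz_phylink_mac_config,	/* shared until patch 4 */
	.mac_link_down	= ksz_phylink_mac_link_down,	/* shared */
	.mac_link_up	= ksz9477_phylink_mac_link_up,	/* per family */
};

/* struct ksz_chip_data gains a pointer to such a table, which setup code
 * then copies into DSA's per-switch hook instead of the old shim, e.g.:
 *
 *	ds->phylink_mac_ops = dev->info->phylink_mac_ops;
 */
```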
-
Russell King (Oracle) authored
Use a separate phylink_mac_ops for the KSZ8830 chip-id. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://lore.kernel.org/r/E1s0O7R-009gq2-Qm@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Russell King (Oracle) authored
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://lore.kernel.org/r/E1s0O7M-009gpw-Lj@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Russell King (Oracle) authored
Convert ksz_common to provide its own phylink MAC operations, thus avoiding the shim layer in DSA's port.c. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://lore.kernel.org/r/E1s0O7H-009gpq-IF@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Russell King (Oracle) authored
The phylink_mac_config function pointer member of struct ksz_dev_ops is never initialised, so let's remove it to simplify the code. Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Link: https://lore.kernel.org/r/E1s0O7C-009gpk-Dh@rmk-PC.armlinux.org.uk Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 29 Apr, 2024 18 commits
-
-
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Jakub Kicinski authored
Daniel Borkmann says: ==================== pull-request: bpf-next 2024-04-29 We've added 147 non-merge commits during the last 32 day(s) which contain a total of 158 files changed, 9400 insertions(+), 2213 deletions(-). The main changes are: 1) Add an internal-only BPF per-CPU instruction for resolving per-CPU memory addresses and implement support in the x86 BPF JIT. This allows inlining per-CPU array and hashmap lookups and the bpf_get_smp_processor_id() helper, from Andrii Nakryiko. 2) Add BPF link support for sk_msg and sk_skb programs, from Yonghong Song. 3) Optimize x86 BPF JIT's emit_mov_imm64, and add support for various atomics in bpf_arena which can be JITed as a single x86 instruction, from Alexei Starovoitov. 4) Add support for passing mark with the bpf_fib_lookup helper, from Anton Protopopov. 5) Add a new bpf_wq API for deferring events and refactor sleepable bpf_timer code to keep common code where possible, from Benjamin Tissoires. 6) Fix BPF_PROG_TEST_RUN infra with regard to bpf_dummy_struct_ops programs to check when NULL is passed for non-NULLable parameters, from Eduard Zingerman. 7) Harden the BPF verifier's and/or/xor value tracking, from Harishankar Vishwanathan. 8) Introduce crypto kfuncs to make BPF programs able to utilize the kernel crypto subsystem, from Vadim Fedorenko. 9) Various improvements to the BPF instruction set standardization doc, from Dave Thaler. 10) Extend libbpf APIs to partially consume items from the BPF ringbuffer, from Andrea Righi. 11) Bigger batch of BPF selftests refactoring to use common network helpers and to drop duplicate code, from Geliang Tang. 12) Support the bpf_tail_call_static() helper for BPF programs built with GCC 13, from Jose E. Marchesi. 13) Add bpf_preempt_{disable,enable}() kfuncs in order to allow a BPF program to have code sections where preemption is disabled, from Kumar Kartikeya Dwivedi. 14) Allow invoking BPF kfuncs from BPF_PROG_TYPE_SYSCALL programs, from David Vernet. 15) Extend the BPF verifier to allow different input maps for a given bpf_for_each_map_elem() helper call in a BPF program, from Philo Lu. 16) Add support for PROBE_MEM32 and bpf_addr_space_cast instructions for the riscv64 and arm64 JITs to enable BPF Arena, from Puranjay Mohan. 17) Shut up a false-positive KMSAN splat in interpreter mode by unpoisoning the stack memory, from Martin KaFai Lau. 18) Improve xsk selftest coverage with new tests on maximum and minimum hardware ring size configurations, from Tushar Vyavahare. 19) Various ReST man page fixes as well as documentation and bash completion improvements for bpftool, from Rameez Rehman & Quentin Monnet. 20) Fix libbpf with regard to dumping subsequent char arrays, from Quentin Deslandes. * tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (147 commits) bpf, docs: Clarify PC use in instruction-set.rst bpf_helpers.h: Define bpf_tail_call_static when building with GCC bpf, docs: Add introduction for use in the ISA Internet Draft selftests/bpf: extend BPF_SOCK_OPS_RTT_CB test for srtt and mrtt_us bpf: add mrtt and srtt as BPF_SOCK_OPS_RTT_CB args selftests/bpf: dummy_st_ops should reject 0 for non-nullable params bpf: check bpf_dummy_struct_ops program params for test runs selftests/bpf: do not pass NULL for non-nullable params in dummy_st_ops selftests/bpf: adjust dummy_st_ops_success to detect additional error bpf: mark bpf_dummy_struct_ops.test_1 parameter as nullable selftests/bpf: Add ring_buffer__consume_n test. bpf: Add bpf_guard_preempt() convenience macro selftests: bpf: crypto: add benchmark for crypto functions selftests: bpf: crypto skcipher algo selftests bpf: crypto: add skcipher to bpf crypto bpf: make common crypto API for TC/XDP programs bpf: update the comment for BTF_FIELDS_MAX selftests/bpf: Fix wq test. selftests/bpf: Use make_sockaddr in test_sock_addr selftests/bpf: Use connect_to_addr in test_sock_addr ... ==================== Link: https://lore.kernel.org/r/20240429131657.19423-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Horatiu Vultur authored
Extend the PTP programmable GPIOs to also implement the PTP_PF_EXTTS function. The pins can be configured to capture both rising and falling edges. Once the event is seen, an interrupt is generated and the LTC is saved in the registers. On lan8814 only GPIO 3 can be configured for this. This was tested using: ts2phc -m -l 7 -s generic -f ts2phc.cfg Where the configuration was the following: --- [global] ts2phc.pin_index 3 [eth0] --- Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev> Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Luiz Angelo Daros de Luca says: ==================== net: dsa: realtek: fix LED support for rtl8366 This series fixes the LED support for rtl8366. The existing code was not tested on a device with switch LEDs and it was using a flawed logic. The driver now keeps the default LED configuration if nothing requests a different behavior. This may be enough for most devices. This can be achieved either by omitting the LED from the device-tree or by configuring all LEDs in a group with the default state set to "keep". The hardware trigger for LEDs in Realtek switches is shared among all LEDs in a group. This behavior doesn't align well with the Linux LED API, which controls LEDs individually. Once the OS changes the brightness of an LED in a group still triggered by the hardware, the entire group switches to software-controlled LEDs, even for those not mentioned in the device-tree. This shared behavior also prevents offloading the trigger to the hardware, as it would require an orchestration between LEDs in a group not currently present in the LED API. The assertion of device hardware reset during driver removal was removed because it was causing an issue with the LED release code. Devres devices are released after the driver's removal is executed. Asserting the reset at that point was causing timeout errors during LED release when it attempted to turn off the LED. To: Linus Walleij <linus.walleij@linaro.org> To: Alvin Šipraga <alsi@bang-olufsen.dk> To: Andrew Lunn <andrew@lunn.ch> To: Florian Fainelli <f.fainelli@gmail.com> To: Vladimir Oltean <olteanv@gmail.com> To: David S. Miller <davem@davemloft.net> To: Eric Dumazet <edumazet@google.com> To: Jakub Kicinski <kuba@kernel.org> To: Paolo Abeni <pabeni@redhat.com> To: Rob Herring <robh+dt@kernel.org> To: Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org> To: Conor Dooley <conor+dt@kernel.org> Cc: netdev@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: devicetree@vger.kernel.org Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com> Changes in v2: - Fixed commit message formatting - Added GROUP to LED group enum values. With that, moved the code that disables LEDs into a new function to keep the 80-column limit. - Dropped unused enable argument in rtl8366rb_get_port_led() - Fixed variable order in rtl8366rb_setup_led() - Removed redundant LED group test in rtl8366rb_{g,s}et_port_led() - Initialize ret as 0 in rtl8366rb_setup_leds() - Updated comments related to LED blinking and setup - Link to v1: https://lore.kernel.org/r/20240310-realtek-led-v1-0-4d9813ce938e@gmail.com Changes in v1: - Rebased on the new realtek DSA drivers - Improved commit messages - Added commit to remove the reset assert during .remove - Link to RFC: https://lore.kernel.org/r/20240106184651.3665-1-luizluca@gmail.com ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luiz Angelo Daros de Luca authored
This commit introduces LED drivers for rtl8366rb, enabling LEDs to be described in the device tree using the same format as qca8k. Each port can configure up to 4 LEDs. If all LEDs in a group use the default state "keep", they will use the default behavior after a reset. Changing the brightness of one LED, either manually or by a trigger, will disable the default hardware trigger and switch the entire LED group to manually controlled LEDs. Once in this mode, there is no way to revert to hardware-controlled LEDs (except by resetting the switch). Software triggers function as expected with manually controlled LEDs. Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luiz Angelo Daros de Luca authored
The necessity of asserting the reset on removal was previously questioned, as DSA's own cleanup methods should suffice to prevent traffic leakage[1]. When a driver has subdrivers controlled by devres, they will be unregistered after the main driver's .remove is executed. If it asserts a reset, the subdrivers will be unable to communicate with the hardware during their cleanup. For LEDs, this means that they will fail to turn off, resulting in a timeout error. [1] https://lore.kernel.org/r/20240123215606.26716-9-luizluca@gmail.com/ Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Luiz Angelo Daros de Luca authored
This switch family supports four LEDs for each of its six ports. Each LED group is composed of one of these four LEDs from all six ports. LED groups can be configured to display hardware information, such as link activity, or manually controlled through a bitmap in registers RTL8366RB_LED_0_1_CTRL_REG and RTL8366RB_LED_2_3_CTRL_REG. After a reset, the default LED group configuration for groups 0 to 3 indicates, respectively, link activity and link at 1000M, 100M, and 10M (RTL8366RB_LED_CTRL_REG as 0x5432). These configurations are commonly used for LED indications. However, the driver was replacing that configuration with manually controlled LEDs (RTL8366RB_LED_FORCE) without providing a way for the OS to control them. The default configuration is deemed more useful than fixed, uncontrollable turned-on LEDs. The driver was also enabling/disabling LEDs during port_enable/disable. However, these events occur when the port is administratively controlled (up or down) and are not related to link presence. Additionally, when a port N was disabled, the driver was turning off all LEDs for group N, not only the corresponding LED for port N in any of those 4 groups. In such cases, if port 0 was brought down, the LEDs for all ports in LED group 0 would be turned off. As another side effect, the driver was wrongly warning that port 5 didn't have an LED ("no LED for port 5"). Since showing the administrative state of ports is not an orthodox way to use LEDs, it was not worth fixing, and all this code was dropped. The code to disable LEDs was simplified to only change each LED group to the RTL8366RB_LED_OFF state. Registers RTL8366RB_LED_0_1_CTRL_REG and RTL8366RB_LED_2_3_CTRL_REG are only used when the corresponding LED group is configured with RTL8366RB_LED_FORCE, so they don't need to be cleared. The code still references an LED controlled by RTL8366RB_INTERRUPT_CONTROL_REG, but as of now, no test device has actually used it. Also, some magic numbers were replaced by macros. Signed-off-by: Luiz Angelo Daros de Luca <luizluca@gmail.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
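A hedged sketch of the simplified "disable LEDs" approach described above: write RTL8366RB_LED_OFF into each group's source field of RTL8366RB_LED_CTRL_REG. The 4-bit-per-group field layout, the helper name, and the register values are assumptions for illustration only; the real layout comes from the driver/datasheet.

```c
/* Illustrative only - not the rtl8366rb driver code. Assumes 4-bit
 * per-group source fields in RTL8366RB_LED_CTRL_REG (consistent with
 * the 0x5432 default mentioned above), which is an assumption here.
 */
#include <linux/regmap.h>

#define RTL8366RB_LED_CTRL_REG	0x0431	/* placeholder value */
#define RTL8366RB_LED_OFF	0x0	/* placeholder value */

static int example_rtl8366rb_all_leds_off(struct regmap *map)
{
	int ret, group;

	for (group = 0; group < 4; group++) {
		/* Point each LED group at the "off" source. */
		ret = regmap_update_bits(map, RTL8366RB_LED_CTRL_REG,
					 0xf << (group * 4),
					 RTL8366RB_LED_OFF << (group * 4));
		if (ret)
			return ret;
	}
	return 0;
}
```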
-
Eric Dumazet authored
Instead of (struct rt6_info *)dst casts, we can use: #define dst_rt6_info(_ptr) \ container_of_const(_ptr, struct rt6_info, dst) Some places needed missing const qualifiers: ip6_confirm_neigh(), ipv6_anycast_destination(), ipv6_unicast_destination(), has_gateway() v2: added missing parts (David Ahern) Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
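The helper is container_of_const() specialised for the dst_entry embedded in struct rt6_info, so the conversion is checked at compile time and const-ness is preserved. A minimal sketch of the before/after pattern; the function below is hypothetical and the header choice assumes the helper is reachable via the IPv6 route headers:

```c
/* Illustrative only - a hypothetical user of dst_rt6_info(). */
#include <net/ip6_route.h>	/* assumed location of dst_rt6_info() */

static u32 example_rt6_flags(const struct dst_entry *dst)
{
	/* Before: an open-coded cast, which also silently drops const:
	 *	struct rt6_info *rt = (struct rt6_info *)dst;
	 *
	 * After: type- and const-checked container_of_const() wrapper.
	 */
	const struct rt6_info *rt = dst_rt6_info(dst);

	return rt->rt6i_flags;
}
```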
-
Dave Thaler authored
This patch elaborates on the use of PC by expanding the PC acronym, explaining the units, and the relative position to which the offset applies. Signed-off-by: Dave Thaler <dthaler1968@googlemail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: David Vernet <void@manifault.com> Link: https://lore.kernel.org/bpf/20240426231126.5130-1-dthaler1968@gmail.com
-
David S. Miller authored
Petr Machata says: ==================== mlxsw: Improve events processing performance Amit Cohen writes: Spectrum ASICs only support a single interrupt, which means that all the events are handled by one IRQ (interrupt request) handler. Currently, we schedule a tasklet to handle events in EQ, then we also use tasklets for CQ, SDQ and RDQ. A tasklet runs in softIRQ (software IRQ) context, and will be run on the same CPU which scheduled it. It means that today we have one CPU which handles all the packets (both network packets and EMADs) from hardware. The existing implementation is not efficient and can be improved. Measuring latency of EMADs in the driver (without the time in FW) shows that latency is increased by a factor of 28 (x28) when network traffic is handled by the driver. Measuring throughput in the CPU shows that the CPU can handle ~35% fewer packets of a specific flow when corrupted packets are also handled by the driver. In some cases these values are even worse; we measured a decrease of ~44% in packet rate. This can be improved if network packets and EMADs are handled in parallel by several CPUs, and more than that, if different types of traffic are handled in parallel. We can achieve this using NAPI. This set converts the driver to process completions from hardware via NAPI. The idea is to add a NAPI instance per CQ (which is mapped 1:1 to SDQ/RDQ), which means that each DQ can be handled separately. We have a DQ for EMADs and DQs for each trap group (like LLDP, BGP, L3 drops, etc..). See more details in the commit messages. An additional improvement which is done as part of this set is related to doorbell ringing. The idea is to handle small chunks of Rx packets (which is also recommended when using NAPI) and ring doorbells once per chunk. This reduces the access to hardware which is expensive (time wise) and might take time because of memory barriers. With this set we can see better performance. To summarize:

EMADs latency:
+------------------+-----------------+------+
|                  | Before this set | Now  |
|------------------+-----------------+------|
| Increased factor | x28             | x1.5 |
+------------------+-----------------+------+

Note that we can even see measurements that show better latency when traffic is handled by the driver.

Throughput:
+---------------------+-----------------+-----+
|                     | Before this set | Now |
|---------------------+-----------------+-----|
| Reduced packet rate | 35%             | 6%  |
+---------------------+-----------------+-----+

Additional improvements are planned - use page pool for buffer allocations and avoid the cache miss of each SKB using napi_build_skb(). Patch set overview: Patches #1-#2 improve access to hardware by reducing doorbell rings. Patches #3-#4 are preparations for NAPI usage. Patch #5 converts the driver to use NAPI. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
Spectrum ASICs only support a single interrupt, which means that all the events are handled by one IRQ (interrupt request) handler. Once an interrupt is received, we schedule a tasklet to handle events from the EQ and then schedule tasklets to handle completions from CQs. A tasklet runs in softIRQ (software IRQ) context, and will be run on the same CPU which scheduled it. That means that today we use only one CPU to handle all the packets (both network packets and EMADs) from hardware. This can be improved using NAPI. The idea is to use a NAPI instance per CQ, which is mapped 1:1 to a DQ (RDQ or SDQ). The NAPI poll method can run in a kernel thread, so the driver will be able to handle WQEs on several CPUs. Convert the existing code to use the NAPI APIs. Add a NAPI instance as part of 'struct mlxsw_pci_queue' and initialize it as part of CQ initialization. Set the appropriate poll method and dummy net device according to queue number, similar to the tasklet setup. For CQs which are used for completions of RDQs, use the Rx poll method and 'napi_dev_rx', which is set as 'threaded'. It means that the Rx poll method will run in a kernel thread, so several RDQs will be handled in parallel. For CQs which are used for completions of SDQs, use the Tx poll method and 'napi_dev_tx'; this method will run in softIRQ context, as recommended in the NAPI documentation, since Tx packet processing is a short task. Convert mlxsw_pci_cq_{rx,tx}_tasklet() to poll methods. Handle the 'budget' argument - ignore it in the Tx poll method, as it is recommended not to limit Tx processing. For Rx processing, handle up to 'budget' completions. Return 'work_done', which is the number of completions that were handled. Handle the following cases: 1. After processing 'budget' completions, the driver still has work to do: return work_done = budget. In that case, the NAPI instance will be polled again (without the need to be rescheduled). Do not re-arm the queue, as NAPI will handle the reschedule, so we do not have to involve hardware to send an additional interrupt for the completions that should be processed. 2. Event processing has been completed: call napi_complete_done() to mark NAPI processing as completed, which means that the poll method will not be rescheduled. Re-arm the queue, as all completions were handled. In case the poll method handled exactly 'budget' completions, return work_done = budget - 1, to distinguish from the case where the driver still has completions to handle. Otherwise, return the number of completions that were handled. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
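A condensed sketch of the Rx poll flow and budget rules described above; the example_* helpers are placeholders standing in for the driver's CQ/RDQ processing and arm-doorbell code, not mlxsw API:

```c
/* Illustrative only - placeholder helpers, real logic lives in mlxsw. */
#include <linux/netdevice.h>

bool example_cq_process_one_rx(struct napi_struct *napi);	/* placeholder */
bool example_cq_more_work(struct napi_struct *napi);		/* placeholder */
void example_cq_rearm(struct napi_struct *napi);		/* placeholder */

static int example_napi_poll_cq_rx(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	while (work_done < budget && example_cq_process_one_rx(napi))
		work_done++;

	/* Case 1: budget exhausted and more completions pending - return
	 * 'budget' so NAPI polls again; do not re-arm the hardware.
	 */
	if (work_done == budget && example_cq_more_work(napi))
		return budget;

	/* Case 2: done - never report 'budget' together with completion,
	 * so trim to budget - 1 if exactly 'budget' were consumed, then
	 * re-arm the queue.
	 */
	if (work_done == budget)
		work_done--;
	if (napi_complete_done(napi, work_done))
		example_cq_rearm(napi);

	return work_done;
}
```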
-
Amit Cohen authored
The next patch will set the driver to use NAPI for event processing. Then the tasklet mechanism will be used only for the EQ. Reorganize 'mlxsw_pci_queue' to hold EQ and CQ attributes in a union. For now, add a tasklet for both EQ and CQ. This will be changed in the next patch, as 'tasklet_struct' will be replaced with a NAPI instance. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
mlxsw will use NAPI for event processing in a next patch. As preparation, add two dummy net devices and initialize them. A NAPI instance must be attached to a net device. Usually each queue is used by a single net device in network drivers, so the mapping between net device and NAPI instance is intuitive. In our case, Rx queues are not per port; they are per trap group. Tx queues are mapped to net devices, but we do not have a separate queue for each local port; several ports share the same queue. Use init_dummy_netdev() to initialize dummy net devices for NAPI. To run the NAPI poll method in a kernel thread, the net device to which the NAPI instance is attached should be marked as 'threaded'. It is recommended to handle Tx packets in softIRQ context, as usually this is a short task - just free the Tx packet which has been transmitted. Rx packet handling is a more complicated task, so drivers can use a dedicated kernel thread to process them. This allows processing packets from different Rx queues in parallel. We would like to handle only Rx packets in kernel threads, which means that we will use two dummy net devices (one for Rx and one for Tx). Set only one of them as 'threaded', as it will be used for Rx processing. Do not fail in case setting 'threaded' fails, as it is better to use regular softIRQ NAPI rather than preventing the driver from loading. Note that the net devices are initialized with init_dummy_netdev(), so they are not registered, which means that they will not be visible to the user. It will not be possible to change the 'threaded' configuration from user space, but this is reasonable in our case, as there is no other configuration that makes sense, considering that the user has no influence on the usage of each queue. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
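A hedged sketch of that setup using the generic init_dummy_netdev()/dev_set_threaded() APIs; the embedding structure and field names are placeholders, not the actual mlxsw layout:

```c
/* Illustrative only - a simplified version of the described setup. */
#include <linux/netdevice.h>

struct example_pci {			/* placeholder for the driver struct */
	struct net_device napi_dev_rx;
	struct net_device napi_dev_tx;
};

static void example_napi_devs_init(struct example_pci *pci)
{
	/* Dummy netdevs: never registered, so invisible to the user. */
	init_dummy_netdev(&pci->napi_dev_rx);
	init_dummy_netdev(&pci->napi_dev_tx);

	/* Rx poll should run in a kernel thread; if this fails, silently
	 * fall back to regular softIRQ NAPI rather than failing to load.
	 */
	dev_set_threaded(&pci->napi_dev_rx, true);

	/* Tx completions stay in softIRQ context, so napi_dev_tx is left
	 * non-threaded. Per-CQ NAPI instances are attached later, e.g.
	 * netif_napi_add(&pci->napi_dev_rx, &q->napi, rx_poll);
	 */
}
```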
-
Amit Cohen authored
Currently, for each CQE in the CQ, we ring the CQ doorbell, then handle the RDQ and ring the RDQ doorbell. Finally we ring the CQ arm doorbell - once per CQ tasklet. The idea of ringing the CQ doorbell before the RDQ doorbell is to be sure that when we post a new WQE (after the RDQ is handled), there is an available CQE. This was done because of a hardware bug, as part of commit c9ebea04 ("mlxsw: pci: Ring CQ's doorbell before RDQ's"). There is no real reason to ring the RDQ and CQ doorbells for each completion; it is better to handle several completions and reduce the number of doorbell rings, as access to hardware is expensive (time wise) and might take time because of memory barriers. A previous patch changed the CQ tasklet to handle up to 64 Rx packets. With this limitation, we can ring the CQ and RDQ doorbells once per CQ tasklet. The counters of the doorbells are increased by the number of packets that we handled, then the device will know for which completion to send an additional event. To avoid reordering the CQ and RDQ doorbell rings, let the tasklet also ring the RDQ doorbell; mlxsw_pci_cqe_rdq_handle() handles the counter but does not ring the doorbell. Note that with this change there is no need to copy the CQE, as we ring the CQ doorbell only after Rx packet processing (which uses the CQE) is done. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Amit Cohen authored
We can get many completions in one interrupt. Currently, the CQ tasklet handles up to half the queue size of completions, and then arms the hardware to generate additional events, which means that in case there were additional completions that we did not handle, we will immediately get an additional interrupt to handle the rest. The decision to handle up to half of the queue size is arbitrary and was made in 2015, when the mlxsw driver was added to the kernel. One additional fact that should be taken into account is that while WQEs from the RDQ are handled, the CPU that handles the tasklet is dedicated to this task, which means that we might hold the CPU for a long time. Handle WQEs in smaller chunks, then arm the CQ doorbell to notify the hardware to send additional notifications. Set the chunk size to 64, as this number is recommended when using NAPI and the driver will use NAPI in a next patch. Note that for now we use the ARM doorbell to retrigger the CQ tasklet, but with NAPI it will be more efficient, as software will reschedule the poll method and we will not involve hardware for that. Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This is a followup of commit c4e86b43 ("net: add two more call_rcu_hurry()"). fib6_info_destroy_rcu() is calling nexthop_put() or fib6_nh_release(). We must not delay it too much or risk unregister_netdevice/ref_tracker traces because references to the netdev are not released in time. This should speed up device/netns dismantles when CONFIG_RCU_LAZY=y. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This is a followup of commit c4e86b43 ("net: add two more call_rcu_hurry()"). Our reference to ifa->ifa_dev must be freed ASAP to release the reference to the netdev the same way: inet_rcu_free_ifa() -> in_dev_put() -> in_dev_finish_destroy() -> netdev_put() This should speed up device/netns dismantles when CONFIG_RCU_LAZY=y. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eric Dumazet authored
This came up while reviewing commit c4e86b43 ("net: add two more call_rcu_hurry()"). Paolo asked if adding one synchronize_rcu() would help. While synchronize_rcu() does not help, making sure to call rcu_barrier() before msleep(wait) definitely helps to make sure lazy call_rcu() calls are completed. Instead of waiting ~100 seconds in my tests, the ref_tracker splats occur only once, and netdev_wait_allrefs_any() latency is reduced to the strict minimum. Ideally we should audit our call_rcu() users to make sure no refcount (or cascading call_rcu()) is held too long, because rcu_barrier() is quite expensive. Fixes: 0e4be9e5 ("net: use exponential backoff in netdev_wait_allrefs") Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/all/28bbf698-befb-42f6-b561-851c67f464aa@kernel.org/T/#m76d73ed6b03cd930778ac4d20a777f22a08d6824 Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Tanmay Patil authored
If the base-time for taprio is in the past, start the schedule at the time of the form "base_time + N*cycle_time" where N is the smallest possible integer such that the above time is in the future. Signed-off-by: Tanmay Patil <t-patil@ti.com> Signed-off-by: Chintan Vankar <c-vankar@ti.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
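In other words, the admin base time is advanced by whole cycles until it lands in the future. A small self-contained sketch of that computation (function name and API are hypothetical, for illustration only):

```c
#include <stdint.h>

/* Return the first time strictly after 'now' of the form
 * base_time + N * cycle_time, with N the smallest such integer.
 */
static uint64_t taprio_next_base_time(uint64_t base_time,
				      uint64_t cycle_time,
				      uint64_t now)
{
	uint64_t n;

	if (base_time > now || cycle_time == 0)
		return base_time;	/* already in the future (or no cycle) */

	n = (now - base_time) / cycle_time + 1;
	return base_time + n * cycle_time;
}
```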
-
- 27 Apr, 2024 1 commit
-
-
Jakub Kicinski authored
When using YNL in tests appending the doc string to the type name makes it harder to check that we got the correct error. Put the doc under a separate key. Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Link: https://lore.kernel.org/r/20240426003111.359285-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
- 26 Apr, 2024 16 commits
-
-
Jakub Kicinski authored
Jakub Kicinski says: ==================== selftests: drv-net: round some sharp edges I had to explain how to run the driver tests twice already. Improve the README so we can just point to it. Improve the config validation. v1: https://lore.kernel.org/r/20240424221444.4194069-1-kuba@kernel.org/ ==================== Link: https://lore.kernel.org/r/20240425222341.309778-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Throw a slightly more helpful exception when env variables are partially populated. Prior to this change we'd get a dictionary key exception somewhere later on. Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20240425222341.309778-4-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
The shell lexer is not helping much, do very basic parsing manually. Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20240425222341.309778-3-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Add more info to the README. It's also now copied to GitHub for increased visibility: https://github.com/linux-netdev/nipa/wiki/Running-driver-tests Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20240425222341.309778-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Eric Dumazet authored
I forgot to call tcp_skb_collapse_tstamp() in the case we consume the second skb in the write queue. Neal suggested to create a common helper used by tcp_mtu_probe() and tcp_grow_skb(). Fixes: 8ee602c6 ("tcp: try to send bigger TSO packets") Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Link: https://lore.kernel.org/r/20240425193450.411640-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Justin Stitt authored
This pattern of strncpy with some pointer arithmetic setting fixed-sized intervals with string literal data is a bit weird, so let's use ethtool_puts() as this has more obvious behavior and is less error-prone. Nicely, we also get to drop a usage of the now-deprecated strncpy() [1]. Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings [1] Link: https://github.com/KSPP/linux/issues/90 Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Justin Stitt <justinstitt@google.com> Link: https://lore.kernel.org/r/20240425-strncpy-drivers-net-dsa-lan9303-core-c-v4-1-9fafd419d7bb@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
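For reference, the general shape of such a conversion; the function and string-array names here are illustrative, not the lan9303 code itself:

```c
/* Illustrative only - not the lan9303 driver code. */
#include <linux/ethtool.h>

static void example_get_strings(u8 *data,
				const char (*names)[ETH_GSTRING_LEN],
				unsigned int count)
{
	unsigned int i;

	/* Before: strncpy(data + i * ETH_GSTRING_LEN, names[i],
	 *                 ETH_GSTRING_LEN);
	 * After: ethtool_puts() bounds the copy and advances 'data' itself.
	 */
	for (i = 0; i < count; i++)
		ethtool_puts(&data, names[i]);
}
```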
-
Jose E. Marchesi authored
The definition of bpf_tail_call_static in tools/lib/bpf/bpf_helpers.h is guarded by a preprocessor check to assure that clang is recent enough to support it. This patch updates the guard so the function is compiled when using GCC 13 or later as well. Tested in bpf-next master. No regressions. Signed-off-by: Jose E. Marchesi <jose.marchesi@oracle.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20240426145158.14409-1-jose.marchesi@oracle.com
-
Paolo Abeni authored
Jason Xing says: ==================== Implement reset reason mechanism to detect From: Jason Xing <kernelxing@tencent.com> In production, there are so many cases about why an RST skb is sent, but we don't have a very convenient/fast method to detect the exact underlying reasons. RST comes in two kinds: the passive kind (like tcp_v4_send_reset()) and the active kind (like tcp_send_active_reset()). The former can be traced carefully 1) in TCP, with the help of drop reasons, which is based on Eric's idea[1], 2) in MPTCP, with the help of reset options defined in RFC 8684. The latter is relatively independent and should be implemented on our own, such as active reset reasons which cannot be replaced by skb drop reasons or something like this. In this series, I focus mostly on the fundamental implementation of how the rstreason mechanism works and give the detailed passive part as an example, not including the active reset part. In future, we can go further and refine those NOT_SPECIFIED reasons. Here are some examples when tracing: <idle>-0 [002] ..s1. 1830.262425: tcp_send_reset: skbaddr=x skaddr=x src=x dest=x state=x reason=NOT_SPECIFIED <idle>-0 [002] ..s1. 1830.262425: tcp_send_reset: skbaddr=x skaddr=x src=x dest=x state=x reason=NO_SOCKET [1] Link: https://lore.kernel.org/all/CANn89iJw8x-LqgsWOeJQQvgVg6DnL5aBRLi10QN2WBdr+X4k=w@mail.gmail.com/ ==================== Link: https://lore.kernel.org/r/20240425031340.46946-1-kerneljasonxing@gmail.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jason Xing authored
At last, we should let it work by introducing this reset reason in trace world. One of the possible expected outputs is: ... tcp_send_reset: skbaddr=xxx skaddr=xxx src=xxx dest=xxx state=TCP_ESTABLISHED reason=NOT_SPECIFIED Signed-off-by: Jason Xing <kernelxing@tencent.com> Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jason Xing authored
Since we have mapped every mptcp reset reason definition in enum sk_rst_reason, introducing a new helper can cover some missing places where we have already set the subflow->reset_reason. Note: using SK_RST_REASON_NOT_SPECIFIED is the same as SK_RST_REASON_MPTCP_RST_EUNSPEC. They are both unknown. So we can convert it directly. Suggested-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Jason Xing <kernelxing@tencent.com> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jason Xing authored
It relies on what the reset options in the skb are, as RFC 8684 says. Reusing this logic can save us a lot of effort. This patch replaces most of the prior NOT_SPECIFIED reasons. Signed-off-by: Jason Xing <kernelxing@tencent.com> Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jason Xing authored
Reuse the drop reason logic to show the exact reason of a TCP reset, so we can finally display the corresponding item in enum sk_rst_reason instead of reinventing new reset reasons. This patch replaces all the prior NOT_SPECIFIED reasons. Signed-off-by: Jason Xing <kernelxing@tencent.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jason Xing authored
Like what we did to passive reset: only passing possible reset reason in each active reset path. No functional changes. Signed-off-by: Jason Xing <kernelxing@tencent.com> Acked-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jason Xing authored
Adjust the parameter and support passing reason of reset which is for now NOT_SPECIFIED. No functional changes. Signed-off-by: Jason Xing <kernelxing@tencent.com> Acked-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-
Jason Xing authored
Add a new standalone file for easy future extension to support both active reset and passive reset in the TCP/DCCP/MPTCP protocols. This patch only does the preparations for the reset reason mechanism; nothing else changes. The reset reasons are divided into three parts: 1) reuse drop reasons for passive reset in TCP 2) our own independent reasons which don't rely on other reasons at all 3) reuse the MP_TCPRST option for MPTCP The benefits of a standalone reset reason are listed here: 1) it can cover more than one case, such as reset reasons in MPTCP and active reset reasons. 2) people can easily and quickly understand and maintain this mechanism. 3) we get a unified format of output with the prefix stripped. 4) more new reset reasons are on the way ... I will implement the basic code of active/passive reset reasons in those three protocols, which is not complete at this moment. For the passive reset part in TCP, I only introduce the NO_SOCKET common case, which can serve as an example. After this series is applied, it will have the ability to open a new gate to let other people contribute more reasons to it :) Signed-off-by: Jason Xing <kernelxing@tencent.com> Acked-by: Matthieu Baerts (NGI0) <matttbe@kernel.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
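A rough sketch of the three-part split described above. Only SK_RST_REASON_NOT_SPECIFIED, the NO_SOCKET case and SK_RST_REASON_MPTCP_RST_EUNSPEC are taken from the text; the layout, comments and any other values are illustrative assumptions, not the series' actual header:

```c
/* Illustrative layout only - not the actual enum from the series.
 * 1) reasons reused from TCP/skb drop reasons,
 * 2) independent reasons of its own (e.g. for active resets),
 * 3) reasons mapped from the MPTCP MP_TCPRST option.
 */
enum sk_rst_reason {
	SK_RST_REASON_NOT_SPECIFIED,
	SK_RST_REASON_NO_SOCKET,		/* reused drop-reason case */
	/* ... further reasons converted from skb drop reasons ... */
	SK_RST_REASON_MPTCP_RST_EUNSPEC,	/* mapped from MP_TCPRST */
	/* ... remaining MP_TCPRST mappings and active-reset reasons ... */
	SK_RST_REASON_MAX,
};
```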
-
Song Yoong Siang authored
This patch adds support for per-packet Tx hardware timestamp requests to AF_XDP zero-copy packets via the XDP Tx metadata framework. Please note that the user needs to enable the Tx HW timestamp capability via igc_ioctl() with the SIOCSHWTSTAMP cmd before sending an xsk Tx hardware timestamp request. As in the Rx timestamp XDP hints kfunc metadata implementation, Timer 0 (adjustable clock) is used for the xsk Tx hardware timestamp. i225/i226 have four sets of timestamping registers. *skb and *xsk_tx_buffer pointers are used to indicate whether a timestamping register is already occupied. Furthermore, a boolean variable named xsk_pending_ts is used to hold the transmit completion until the Tx hardware timestamp is ready. This is because, for i225/i226, the timestamp notification event comes some time after the transmit completion event. The driver will retrigger the hardware irq to clean the packet after retrieving the Tx hardware timestamp. Besides, xsk_meta is added into struct igc_tx_timestamp_request as a hook to the metadata location of the transmit packet. When the Tx timestamp interrupt is fired, the interrupt handler will copy the value of the Tx hwts into the metadata location via xsk_tx_metadata_complete(). This patch is tested with tools/testing/selftests/bpf/xdp_hw_metadata on an Intel ADL-S platform. Below are the test steps and results. Test Step 1: Run the xdp_hw_metadata app ./xdp_hw_metadata <iface> > /dev/shm/result.log Test Step 2: Enable Tx hardware timestamp hwstamp_ctl -i <iface> -t 1 -r 1 Test Step 3: Run ptp4l and phc2sys for time synchronization Test Step 4: Generate UDP packets with 1ms interval for 10s trafgen --dev <iface> '{eth(da=<addr>), udp(dp=9091)}' -t 1ms -n 10000 Test Step 5: Rerun Steps 1-3 with 10s of iperf3 as background traffic Test Step 6: Rerun Steps 1-4 with 10s of iperf3 as background traffic Based on the iperf3 results below, the impact of holding Tx completion on throughput is not observable. Result of last UDP packet (no. 10000) in Step 4: poll: 1 (0) skip=99 fail=0 redir=10000 xsk_ring_cons__peek: 1 0x5640a37972d0: rx_desc[9999]->addr=f2110 addr=f2110 comp_addr=f2110 EoP rx_hash: 0x2049BE1D with RSS type:0x1 HW RX-time: 1679819246792971268 (sec:1679819246.7930) delta to User RX-time sec:0.0000 (14.990 usec) XDP RX-time: 1679819246792981987 (sec:1679819246.7930) delta to User RX-time sec:0.0000 (4.271 usec) No rx_vlan_tci or rx_vlan_proto, err=-95 0x5640a37972d0: ping-pong with csum=ab19 (want 315b) csum_start=34 csum_offset=6 0x5640a37972d0: complete tx idx=9999 addr=f010 HW TX-complete-time: 1679819246793036971 (sec:1679819246.7930) delta to User TX-complete-time sec:0.0001 (77.656 usec) XDP RX-time: 1679819246792981987 (sec:1679819246.7930) delta to User TX-complete-time sec:0.0001 (132.640 usec) HW RX-time: 1679819246792971268 (sec:1679819246.7930) delta to HW TX-complete-time sec:0.0001 (65.703 usec) 0x5640a37972d0: complete rx idx=10127 addr=f2110 Result of iperf3 without a Tx hwts request in Step 5: [ ID] Interval Transfer Bitrate Retr [ 5] 0.00-10.00 sec 2.74 GBytes 2.36 Gbits/sec 0 sender [ 5] 0.00-10.05 sec 2.74 GBytes 2.34 Gbits/sec receiver Result of iperf3 running in parallel with the trafgen command in Step 6: [ ID] Interval Transfer Bitrate Retr [ 5] 0.00-10.00 sec 2.74 GBytes 2.36 Gbits/sec 0 sender [ 5] 0.00-10.04 sec 2.74 GBytes 2.34 Gbits/sec receiver Co-developed-by: Lai Peter Jun Ann <jun.ann.lai@intel.com> Signed-off-by: Lai Peter Jun Ann <jun.ann.lai@intel.com> Signed-off-by: Song Yoong Siang <yoong.siang.song@intel.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Tested-by: Naama Meir <naamax.meir@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://lore.kernel.org/r/20240424210256.3440903-1-anthony.l.nguyen@intel.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
-