- 17 Sep, 2020 16 commits
-
-
Vadym Kochan authored
Marvell Prestera 98DX326x integrates up to 24 ports of 1GbE with 8 ports of 10GbE uplinks or 2 ports of 40Gbps stacking for a largely wireless SMB deployment. The current implementation supports only boards designed for the Marvell Switchdev solution and requires special firmware.

The core Prestera switching logic is implemented in prestera_main.c. There is an intermediate HW layer between the core logic and the firmware, implemented in prestera_hw.c; its purpose is to encapsulate the HW-related logic, and in the future there is a plan to support more devices with different HW-related configurations.

This patch contains only basic switch initialization and RX/TX support over the SDMA mechanism. Currently supported devices have a DMA access range <= 32 bits and require ZONE_DMA to be enabled; for such cases the SDMA driver checks that the skb is allocated in the range supported by the Prestera device. Also, since there is no TX interrupt support in the current firmware version, recycling work is scheduled on each xmit.

Each port's MAC address is generated from the switch base MAC, which may be provided via device-tree (a static one or an nvmem cell) or randomly generated. This is required by the firmware.

Co-developed-by: Andrii Savka <andrii.savka@plvision.eu> Signed-off-by: Andrii Savka <andrii.savka@plvision.eu> Co-developed-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu> Signed-off-by: Oleksandr Mazur <oleksandr.mazur@plvision.eu> Co-developed-by: Serhiy Boiko <serhiy.boiko@plvision.eu> Signed-off-by: Serhiy Boiko <serhiy.boiko@plvision.eu> Co-developed-by: Serhiy Pshyk <serhiy.pshyk@plvision.eu> Signed-off-by: Serhiy Pshyk <serhiy.pshyk@plvision.eu> Co-developed-by: Taras Chornyi <taras.chornyi@plvision.eu> Signed-off-by: Taras Chornyi <taras.chornyi@plvision.eu> Co-developed-by: Volodymyr Mytnyk <volodymyr.mytnyk@plvision.eu> Signed-off-by: Volodymyr Mytnyk <volodymyr.mytnyk@plvision.eu> Signed-off-by: Vadym Kochan <vadym.kochan@plvision.eu> Signed-off-by: David S. Miller <davem@davemloft.net>
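Because the supported devices can only DMA to 32-bit addresses, the SDMA path has to verify that a mapped skb is reachable by the device. A minimal sketch of such a check, using hypothetical helper and field names (not the actual prestera code):

    #include <linux/kernel.h>
    #include <linux/dma-mapping.h>
    #include <linux/skbuff.h>

    /* Hypothetical helper: map an skb for TX and reject mappings that do
     * not fit into the 32-bit address range the device can reach.
     */
    static int sdma_map_tx_skb(struct device *dev, struct sk_buff *skb,
                               dma_addr_t *daddr)
    {
            dma_addr_t addr;

            addr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, addr))
                    return -ENOMEM;

            /* Device limitation: descriptors hold only 32-bit addresses. */
            if (upper_32_bits(addr)) {
                    dma_unmap_single(dev, addr, skb->len, DMA_TO_DEVICE);
                    return -EFAULT;
            }

            *daddr = addr;
            return 0;
    }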
-
YueHaibing authored
It is never used, so it can be removed. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
It is not used since commit a09ceb0e ("sched: remove qdisc->drop") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matthieu Baerts authored
In case of errors, this message was printed:

    (...)
    # read: Resource temporarily unavailable
    # client exit code 0, server 3
    # \nnetns ns1-0-BJlt5D socket stat for 10003:
    (...)

Obviously, the idea was to add a new line before the socket stat and not print "\nnetns". Fixes: b08fbf24 ("selftests: add test-cases for MPTCP MP_JOIN") Fixes: 048d19d4 ("mptcp: add basic kselftest for mptcp") Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net> Acked-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Xie He authored
1. Change all "dev->hard_header" to "dev->header_ops".

2. On receiving incoming frames when header_ops == NULL: The comment only says what is wrong, but doesn't say what is right. This patch changes the comment to make it clear what is right.

3. On transmitting and receiving outgoing frames when header_ops == NULL: The comment explains that the LL header will be later added by the driver. However, I think it's better to simply say that the LL header is invisible to us. This phrasing is better from a software engineering perspective, because this makes it clear that what happens in the driver should be hidden from us and we should not care about what happens internally in the driver.

4. On resuming the LL header (for RAW frames) when header_ops == NULL: The comment says we are "unlikely" to restore the LL header. However, we should say that we are "unable" to restore it. It's not possible (rather than not likely) to restore it, because:
   1) There is no way for us to restore because the LL header internally processed by the driver should be invisible to us.
   2) In function packet_rcv and tpacket_rcv, the code only tries to restore the LL header when header_ops != NULL.

Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com> Signed-off-by: Xie He <xie.he.0141@gmail.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Huazhong Tan says:

====================
net: hns3: updates for -next

There are some optimizations related to the IO path.

Change since V1:
- fixed an unsuitable handling in hns3_lb_clear_tx_ring() in patch #6, which was pointed out by Saeed Mahameed.

previous version:
V1: https://patchwork.ozlabs.org/project/netdev/cover/1600085217-26245-1-git-send-email-tanhuazhong@huawei.com/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
Use napi_consume_skb() to batch consuming skb when cleaning tx desc in NAPI polling. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
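As a rough illustration (the ring structure and helper below are hypothetical, not the hns3 code), the tx-clean path frees each completed skb with napi_consume_skb() and passes the NAPI budget so the frees can be batched through the per-CPU NAPI skb cache:

    #include <linux/skbuff.h>

    /* budget > 0: napi_consume_skb() may batch the free via the NAPI skb
     * cache; budget == 0 (e.g. netpoll) makes it fall back to a safe
     * immediate free.
     */
    static void ring_clean_tx(struct my_tx_ring *ring, int budget)
    {
            struct sk_buff *skb;

            while ((skb = ring_next_completed_skb(ring)))   /* hypothetical helper */
                    napi_consume_skb(skb, budget);
    }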
-
Yunsheng Lin authored
writel() can be used to order I/O vs. memory by default when writing portable drivers, so use writel() to replace wmb() + writel_relaxed(). On arm64, writel() is dma_wmb() + writel_relaxed(), so there is an optimization here because dma_wmb() is a lighter barrier than wmb(). Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
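A before/after sketch of the doorbell write described above; the ring structure and register offset names are illustrative:

    #include <linux/io.h>

    /* Before: full write barrier, then a relaxed MMIO write. */
    static void ring_doorbell_old(struct my_tx_ring *ring, int num_to_send)
    {
            wmb();
            writel_relaxed(num_to_send, ring->io_base + RING_TX_TAIL_REG);
    }

    /* After: writel() already orders the preceding descriptor writes in
     * normal memory against the MMIO doorbell write; on arm64 it expands
     * to dma_wmb() + writel_relaxed(), and dma_wmb() is cheaper than wmb().
     */
    static void ring_doorbell_new(struct my_tx_ring *ring, int num_to_send)
    {
            writel(num_to_send, ring->io_base + RING_TX_TAIL_REG);
    }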
-
Yunsheng Lin authored
Currently the HNS3_RING_RX_RING_FBDNUM_REG register is read to determine how many rx desc can be cleaned. To avoid the register read operation in the critical data path, use the valid bit in the rx desc to determine whether a specific rx desc can be cleaned. The hns3 driver clears the valid bit in the rx desc before notifying the rx desc to the hw, and the hw will only set the valid bit of the rx desc after the corresponding buffer is filled with packet data and the other fields in the rx desc are set accordingly. Add the hns3_rx_ring_move_fw() function to clear the valid bit in the rx desc before moving the rx ring's next_to_clean forward, to avoid cleaning an rx desc twice, and add a dma_rmb() barrier in hns3_handle_rx_bd() to make sure the valid bit is set before reading the other fields in the rx desc. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
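In code, the ordering rule described above looks roughly like this (descriptor field and bit names are illustrative):

    /* Return true only when the descriptor may be consumed; dma_rmb()
     * ensures no other descriptor field is read before the valid bit is
     * observed as set by hardware.
     */
    static bool rx_desc_ready(const struct my_rx_desc *desc)
    {
            if (!(le32_to_cpu(desc->bd_base_info) & BIT(MY_RXD_VLD_B)))
                    return false;    /* hw has not filled this desc yet */

            dma_rmb();
            return true;
    }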
-
Yunsheng Lin authored
Currently the HNS3_RING_TX_RING_HEAD_REG register is read to determine how many tx desc can be cleaned. To avoid the register read operation in the critical data path, use the valid bit in the tx desc to determine whether a specific tx desc can be cleaned. The hns3 driver sets the valid bit in the tx desc before ringing a doorbell to the hw, and the hw will only clear the valid bit of the tx desc after the corresponding packet is sent out to the wire. Because next_to_use for the tx ring is a changing variable while the driver is filling tx desc, reuse the rx ring's pull_len field to record which tx desc have been notified to the hw, so that hns3_nic_reclaim_desc() can decide how many tx desc valid bits need checking when reclaiming tx desc. The io_err_cnt stat is also removed since it is not used anymore. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
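A simplified reclaim loop in this style, with hypothetical names (not the actual hns3_nic_reclaim_desc()):

    /* Walk from next_to_clean toward the last descriptor handed to hw
     * and stop at the first one whose valid bit hw has not cleared yet.
     */
    static int ring_reclaim_tx(struct my_tx_ring *ring, int last_notified)
    {
            int cleaned = 0;

            while (ring->next_to_clean != last_notified) {
                    struct my_tx_desc *desc = &ring->desc[ring->next_to_clean];

                    if (le16_to_cpu(desc->flags) & BIT(MY_TXD_VLD_B))
                            break;    /* still owned by hw */

                    ring->next_to_clean = (ring->next_to_clean + 1) % ring->desc_num;
                    cleaned++;
            }

            return cleaned;
    }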
-
Yunsheng Lin authored
Use netdev_xmit_more() to defer the tx doorbell operation when skbs are passed to the driver back to back. By doing this we can improve the overall xmit performance by avoiding some doorbell operations. Also, the tx_err_cnt stat is not used, so rename it to the tx_more stat. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
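The doorbell coalescing typically looks like this at the end of the xmit path (the ring fields and register name are illustrative):

    #include <linux/netdevice.h>
    #include <linux/io.h>

    /* Ring the doorbell only when the stack signals no further skb is
     * queued right behind this one, or when the queue was stopped.
     */
    static void ring_tx_doorbell(struct my_tx_ring *ring,
                                 struct netdev_queue *txq, int num)
    {
            ring->pending_buf += num;

            if (netdev_xmit_more() && !netif_xmit_stopped(txq)) {
                    ring->stats.tx_more++;    /* doorbell deferred */
                    return;
            }

            writel(ring->pending_buf, ring->io_base + RING_TX_TAIL_REG);
            ring->pending_buf = 0;
    }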
-
Yunsheng Lin authored
Batch the page reference count updates instead of doing them one at a time. By doing this we can improve the overall receive performance by avoiding some atomic increment operations when the rx page is reused. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
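One common way to batch the updates, used by several NIC drivers, is to take a large page reference once and track handed-out references in a local bias counter; a hedged sketch, not the exact hns3 implementation:

    #include <linux/mm.h>

    struct my_rx_buf {
            struct page *page;
            u16 pagecnt_bias;    /* references we still hold locally */
    };

    static void rx_buf_init(struct my_rx_buf *buf, struct page *page)
    {
            buf->page = page;
            /* Pay for many references with a single atomic operation. */
            page_ref_add(page, USHRT_MAX - 1);
            buf->pagecnt_bias = USHRT_MAX;
    }

    static void rx_buf_reuse(struct my_rx_buf *buf)
    {
            /* One reference was just handed to the stack; that costs only
             * a local decrement. Refill with a single atomic once the
             * prepaid batch is nearly used up, keeping one reference for
             * the driver itself.
             */
            if (unlikely(--buf->pagecnt_bias == 1)) {
                    page_ref_add(buf->page, USHRT_MAX - 1);
                    buf->pagecnt_bias = USHRT_MAX;
            }
    }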
-
Liu Shixin authored
Use DEFINE_SEQ_ATTRIBUTE macro to simplify the code. Signed-off-by: Liu Shixin <liushixin2@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
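For context, DEFINE_SEQ_ATTRIBUTE() expects a <name>_sops seq_operations table and generates a matching <name>_fops, so the open/read/llseek/release boilerplate disappears. A sketch with hypothetical names (foo_seq_* stand for the driver's existing callbacks):

    #include <linux/seq_file.h>

    static const struct seq_operations foo_sops = {
            .start = foo_seq_start,    /* existing seq callbacks */
            .next  = foo_seq_next,
            .stop  = foo_seq_stop,
            .show  = foo_seq_show,
    };

    /* Generates foo_open() and foo_fops; foo_fops can then be passed to
     * debugfs_create_file() or proc_create() as before.
     */
    DEFINE_SEQ_ATTRIBUTE(foo);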
-
Shannon Nelson authored
Use the dim library to manage dynamic interrupt moderation in ionic.
v3: rebase
v2: untangled declarations in ionic_dim_work()
Signed-off-by: Shannon Nelson <snelson@pensando.io> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
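For orientation, the dim library is typically wired up like this (the queue structure and hardware hook below are hypothetical, not the ionic code):

    #include <linux/dim.h>

    /* Deferred work: apply the moderation profile the dim algorithm chose. */
    static void my_dim_work(struct work_struct *work)
    {
            struct dim *dim = container_of(work, struct dim, work);
            struct dim_cq_moder cur;

            cur = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
            my_hw_set_rx_coalesce(dim, cur.usec, cur.pkts);    /* hypothetical */

            dim->state = DIM_START_MEASURE;
    }

    /* Called from NAPI poll: feed counters to the algorithm, which may
     * schedule my_dim_work() when it decides to change the profile.
     */
    static void my_update_dim(struct my_queue *q)
    {
            struct dim_sample sample = {};

            dim_update_sample(q->intr_count, q->rx_packets, q->rx_bytes, &sample);
            net_dim(&q->dim, sample);
    }

The work item itself would be set up once at queue init with INIT_WORK(&q->dim.work, my_dim_work).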
-
Karsten Graul authored
smc->clcsock and smc->clcsock->sk are used before checking whether they can be dereferenced. Fix this by checking the variables first. Fixes: a60a2b1e ("net/smc: reduce active tcp_listen workers") Reported-by: kernel test robot <lkp@intel.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Karsten Graul <kgraul@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Nikolay Aleksandrov authored
When handling a TO_EXCLUDE report in EXCLUDE filter mode, we should not ignore the return value of __grp_src_toex_excl(), as we would otherwise miss sending notifications about group changes. Fixes: 5bf1e00b ("net: bridge: mcast: support for IGMPV3/MLDv2 CHANGE_TO_INCLUDE/EXCLUDE report") Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 Sep, 2020 18 commits
-
-
Song, Yoong Siang authored
This patch adds support for the --show-ring & --set-ring ethtool functions:
- Add min, max, and power-of-two checks for the new ring parameter values.
- Bring down the network interface before changing the value of the ring parameters.
- Bring the network interface back up after changing the value of the ring parameters.
Signed-off-by: Song, Yoong Siang <yoong.siang.song@intel.com> Signed-off-by: Voon Weifeng <weifeng.voon@intel.com> Signed-off-by: Ong Boon Leong <boon.leong.ong@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
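A hedged sketch of what such a set_ringparam handler does; the constants and helper functions are placeholders, not the actual driver code:

    #include <linux/ethtool.h>
    #include <linux/log2.h>
    #include <linux/netdevice.h>

    static int my_set_ringparam(struct net_device *dev,
                                struct ethtool_ringparam *ring)
    {
            /* Min/max and power-of-two validation of the requested sizes. */
            if (ring->rx_pending < MY_MIN_RING_SIZE ||
                ring->rx_pending > MY_MAX_RING_SIZE ||
                ring->tx_pending < MY_MIN_RING_SIZE ||
                ring->tx_pending > MY_MAX_RING_SIZE ||
                !is_power_of_2(ring->rx_pending) ||
                !is_power_of_2(ring->tx_pending))
                    return -EINVAL;

            if (!netif_running(dev)) {
                    my_apply_ring_sizes(dev, ring->rx_pending, ring->tx_pending);
                    return 0;
            }

            my_stop(dev);                    /* bring the interface down */
            my_apply_ring_sizes(dev, ring->rx_pending, ring->tx_pending);
            return my_open(dev);             /* and back up with new sizes */
    }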
-
David S. Miller authored
Ido Schimmel says:

====================
mlxsw: Refactor headroom management

Petr says:

On Spectrum, port buffers, also called port headroom, are where packets are stored while they are parsed and the forwarding decision is being made. For lossless traffic flows, in case shared buffer admission is not allowed, headroom is also where to put the extra traffic received before the sent PAUSE takes effect. Another aspect of the port headroom is the so-called internal buffer, which is used for egress mirroring.

Linux supports two DCB interfaces related to the headroom: dcbnl_setbuffer for configuration, and dcbnl_getbuffer for inspection. In order to make it possible to implement these interfaces, it is first necessary to clean up headroom handling, which is currently strewn across several places in the driver. The end goal is an architecture whereby it is possible to take a copy of the current configuration, adjust parameters, and then hand the proposed configuration over to the system to implement it. When everything works, the proposed configuration is accepted and saved.

First, this centralizes the reconfiguration handling in one function, which takes care of coordinating buffer size changes and priority map changes to avoid introducing drops. Second, the fact that the configuration is all in one place makes it easy to keep a backup and handle error-path rollbacks, which were previously hard to understand.

Patch #1 introduces struct mlxsw_sp_hdroom, which will keep the port headroom configuration.

Patch #2 unifies handling of delay provision between PFC and PAUSE. From now on, delay is to be measured in bytes of extra space and will not include MTU. The PFC handler sets the delay directly from the parameter it gets through the DCB interface. For PAUSE, MLXSW_SP_PAUSE_DELAY is converted to have the same meaning.

In patches #3-#5, MTU, lossiness and priorities are gradually moved over to struct mlxsw_sp_hdroom.

In patches #6-#11, handling of buffer resizing and priority maps is moved from spectrum.c and spectrum_dcb.c to spectrum_buffers.c. The API is gradually adapted so that struct mlxsw_sp_hdroom becomes the main interface through which the various clients express how the headroom should be configured.

Patch #12 is a small cleanup that the previous transformation made possible.

In patch #13, the port init code becomes a boring client of the headroom code, instead of rolling its own thing.

Patches #14 and #15 move handling of the internal mirroring buffer to the new headroom code as well. Previously, this code was in the SPAN module. This patchset converts the SPAN module to another boring client of the headroom code.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Traffic mirroring modes that are implemented in the chip on egress need an internal buffer to work. As the only client, the SPAN module was managing the buffer so far. However, logically it belongs to the buffers module: e.g. buffer size validation needs to take the size of the internal buffer into account. Therefore move the related code from SPAN to spectrum_buffers. Move over the callbacks that determine the minimum buffer size as a function of maximum speed and MTU. Add a field describing the internal buffer to struct mlxsw_sp_hdroom. Extend mlxsw_sp_hdroom_bufs_reset_sizes() to take care of sizing the internal buffer as well. Change the SPAN module to invoke that function and mlxsw_sp_hdroom_configure() like all the other hdroom clients. Drop the now-unnecessary mlxsw_sp_span_port_buffer_disable(). Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The size of the internal buffer is currently calculated in the SPAN module. Logically it belongs to the spectrum_buffers module, where it should be moved. However, that being a chip-specific operation, it needs dynamic dispatch. There currently is a chip-specific structure for description of shared buffer values, struct mlxsw_sp_sb_vals. However placing ops into this structure would be confusing. Therefore introduce a new per-chip structure, currently empty, and initialize the ops pointer as appropriate. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Currently mlxsw_sp_port_headroom_init() configures both priomap and buffers by hand. Additionally, for port buffers, it configures buffer 0 with a size that it will never again have if PFC configuration is touched. Rewrite the init code to become a client of the new hdroom code. The only difference in invocation is that the configuration is forced, so that it is issued even if the desired configuration happens to match what is contained in (hitherto not initialized with meaningful values) mlxsw_sp_port->hdroom. Since now mlxsw_sp_port_headroom_init() initializes all the PG buffers to meaningful values, mlxsw_sp_hdroom_configure_buffers() can avoid querying the current configuration, and can fill the whole PBMC itself. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
This function is now only used from the buffers module, and is a trivial field reference. Just inline it and drop the related artifacts. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Move all the headroom code to the spectrum_buffers module, where it belongs. Rename mlxsw_sp_pg_buf_threshold_get() and mlxsw_sp_pg_buf_pack() to ..._hdroom_... to match the naming convention of the new headroom code. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The ETS handler performs the headroom configuration in three steps: first it resizes the buffers and adds any new ones. Then it redirects priorities to the new buffers. And finally it sets the size of the now-unused buffers to zero. This way no packet drops are introduced. This sort of careful approach will also be useful for configuring port buffer sizes and priority map by hand, through dcbnl_setbuffer. Therefore move the code from the DCB handler to the generic headroom function. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The new hdroom code has certain conventions: iteration over priorities is done through a variable named `prio'; configuration is not pushed unless it is dirty, but a `force' flag can be used to override this; and the updated configuration is written to the port. Convert the function mlxsw_sp_port_pg_prio_map() to use these conventions and rename it appropriately to fit in. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The ETS handler performs the headroom configuration in three steps: first it resizes the buffers and adds any new ones. Then it redirects priorities to the new buffers. And finally it sets the size of the now-unused buffers to zero. This way no packet drops are introduced. Both of the buffer size configuration operations are simply buffer size configurations; there is no material difference between setting buffers to zero and setting them to any other value. Therefore simply invoke the same mlxsw_sp_hdroom_configure(), and drop mlxsw_sp_port_pg_destroy() and mlxsw_sp_ets_has_pg(), which are now unused. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Split mlxsw_sp_port_headroom_set() into three functions. mlxsw_sp_hdroom_bufs_reset_sizes() changes the sizes of the individual PG buffers, and mlxsw_sp_hdroom_configure_buffers() will actually apply the configuration. A third function, mlxsw_sp_hdroom_bufs_fit(), verifies that the requested buffer configuration matches total headroom size requirements.

Add wrappers, mlxsw_sp_hdroom_configure() and __..., that will eventually perform full headroom configuration, but for now, only have them verify the configured headroom size and invoke mlxsw_sp_hdroom_configure_buffers(). Have them take the `force` argument to prepare for a later patch, even though it is currently unused.

Note that the loop in mlxsw_sp_hdroom_configure_buffers() only goes through DCBX_MAX_BUFFERS. Since there is no logic to configure the control buffer, it needs to keep the values queried from the FW. Eventually this function should configure all the PGs.

Note that conversion of __mlxsw_sp_dcbnl_ieee_setets() is not trivial. That function performs the headroom configuration in three steps: first it resizes the buffers and adds any new ones. Then it redirects priorities to the new buffers. And finally it sets the size of the now-unused buffers to zero. This way no packet drops are introduced. So after invoking mlxsw_sp_hdroom_bufs_reset_sizes(), tweak the configuration to keep the old sizes of PG buffers for those buffers whose size was set to zero.

Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
So far, port buffers were always autoconfigured. When the dcbnl_setbuffer callback is implemented, it will allow the user to change the buffer size configuration by hand. The sizes therefore need to be configuration parameters, not always deduced, and thus belong in struct mlxsw_sp_hdroom, where the configuration routine should take them from. Update mlxsw_sp_port_headroom_set() to update these sizes. Have the function update the sizes even in the case that a given buffer is not used. Additionally, change the loop iteration end to DCBX_MAX_BUFFERS instead of IEEE_8021QAZ_MAX_TCS. The value is the same, but the semantics differ. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
Client-side configuration has lossiness as an attribute of a priority. Therefore add a "lossy" attribute to struct mlxsw_sp_hdroom_prio. To a Spectrum ASIC, lossiness is a feature of a port buffer. Therefore add struct mlxsw_sp_hdroom_buf, which in the following patches will get more attributes, but right now only use it to track port buffer lossiness. Instead of passing around the primary indicators of PFC and pause_en, add a function mlxsw_sp_hdroom_bufs_reset_lossiness() to compute the buffer lossiness from the priority map and priority lossiness. Change mlxsw_sp_port_headroom_set() to take the buffer lossy flag from the headroom configuration. Have the PFC and pause handlers configure priority lossiness in mlxsw_sp_hdroom, from where it will propagate. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
The mapping from priorities to buffers determines which buffers should be configured. Lossiness of these priorities combined with the mapping determines whether a given buffer should be lossy.

Currently this configuration is stored implicitly in the DCB ETS, PFC and ethtool PAUSE configuration. Keeping it together with the rest of the headroom configuration, and deriving it as needed from PFC / ETS / PAUSE, will make things clearer. To that end, add a field "prios" to struct mlxsw_sp_hdroom.

Previously, __mlxsw_sp_port_headroom_set() took prio_tc as an argument and assumed that the same mapping as used on egress should be used on ingress as well. Instead, track this configuration at each priority, so that it can be adjusted flexibly.

In the following patches, as dcbnl_setbuffer is implemented, it will need to store its own mapping, and it will also sometimes be necessary to revert back to the original ETS mapping. Therefore track two buffer indices: the one for chip configuration (buf_idx), and the source one (ets_buf_idx). Introduce a function to configure the chip-level buffer index, and for now have it simply copy the ETS mapping over to the chip mapping.

Update the ETS handler to project prio_tc to ets_buf_idx and invoke the buf_idx recomputation. Now that there is a canonical place to look for this configuration, mlxsw_sp_port_headroom_set() does not need to invent def_prio_tc to use when DCB is compiled out.

Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
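Taken together, the series grows struct mlxsw_sp_hdroom into roughly the following shape; this is an illustrative reconstruction from the commit messages, not the verbatim driver definition:

    #include <linux/types.h>
    #include <linux/dcbnl.h>

    /* Illustrative only: centralized per-port headroom configuration. */
    struct mlxsw_sp_hdroom {
            int mtu;
            u32 delay_bytes;             /* extra space for PFC/pause delay */
            struct {
                    u8 buf_idx;          /* PG buffer the chip actually uses */
                    u8 ets_buf_idx;      /* buffer dictated by the ETS mapping */
                    bool lossy;
            } prios[IEEE_8021QAZ_MAX_TCS];
            struct {
                    u32 size_cells;
                    bool lossy;
            } bufs[DCBX_MAX_BUFFERS];
            u32 int_buf_size_cells;      /* internal egress-mirroring buffer */
    };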
-
Petr Machata authored
MTU influences sizes of auto-allocated buffers. Make it a part of port buffer configuration and have __mlxsw_sp_port_headroom_set() take it from there, instead of as an argument. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Petr Machata authored
When a priority is marked as lossless using DCB PFC, or when pause frames are enabled on a port, mlxsw adds extra space to the port buffers to cover the traffic that will arrive between the time that a pause or PFC frame is emitted and the time traffic actually stops. This is called the delay. The concept is the same in PFC and pause, however the way the extra buffer space is calculated differs. In this patch, unify this handling.

Delay is to be measured in bytes of extra space and will not include MTU. The PFC handler sets the delay directly from the parameter it gets through the DCB interface. To convert the pause handler, move MLXSW_SP_PAUSE_DELAY to the ethtool module, convert it to bytes, reduce it by the maximum MTU, and divide by two. Then it has the same meaning as the delay_bytes set by the PFC handler.

Keep the delay_bytes value in struct mlxsw_sp_hdroom introduced in the previous patch. Change the PFC and pause handlers to store the new delay value there and have __mlxsw_sp_port_headroom_set() take it from there.

Instead of mlxsw_sp_pfc_delay_get() and mlxsw_sp_pg_buf_delay_get(), introduce mlxsw_sp_hdroom_buf_delay_get() to calculate the delay provision. Drop the unnecessary MLXSW_SP_CELL_FACTOR, and instead add an explanatory comment describing the formula used.

Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
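The pause-side conversion described above boils down to a small formula; a hedged sketch of the intent, not the literal mlxsw code:

    /* Convert the legacy pause-delay value, once expressed in bytes
     * including MTU headroom, into the unified delay_bytes meaning used
     * by the PFC handler: subtract the maximum MTU and halve, as
     * described in the commit message.
     */
    static u32 my_pause_delay_bytes(u32 legacy_delay_bytes, u32 max_mtu)
    {
            return (legacy_delay_bytes - max_mtu) / 2;
    }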
-
Petr Machata authored
The port headroom handling is currently strewn across several modules and tricky to follow: MTU, DCB PFC, DCB ETS and ethtool pause all influence the settings, and then there is the completely separate initial configuration in spectrum_buffers. A following patch will implement the dcbnl_setbuffer callback, which is going to further complicate the landscape. In order to simplify work with port buffers, the following patches are going to centralize all port-buffer handling in spectrum_buffers. As a first step, introduce a (currently empty) struct mlxsw_sp_hdroom that will keep the configuration parameters, and allocate and free it in appropriate places. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> Signed-off-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says:

====================
mlx5-updates-2020-09-15

Various updates to the mlx5 driver:
1) Eli adds support for TC trap action.
2) Eran, minor improvements to clock.c code structure
3) Better handling of error reporting in LAG from Jianbo
4) IPv6 traffic class (DSCP) header rewrite support from Maor
5) Ofer Levi adds support for CQE compression of multi-strides packets
6) Vu, Enables use of vport meta data by default.
7) Some minor code cleanup
====================

Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 15 Sep, 2020 6 commits
-
-
David S. Miller authored
Ido Schimmel says:

====================
nexthop: Small changes

This patch set contains a few small changes that I split out of the RFC I sent last week [1]. The main change is the conversion of the nexthop notification chain to a blocking chain so that it can be reused by device drivers for nexthop object programming in the future.

Tested with fib_nexthops.sh:
Tests passed: 164
Tests failed: 0

[1] https://lore.kernel.org/netdev/20200908091037.2709823-1-idosch@idosch.org/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Commit c7cdbe2e ("vxlan: support for nexthop notifiers") registered a listener in the VXLAN driver to the nexthop notification chain. Its purpose is to cleanup FDB entries that use a nexthop that is being deleted. Test that such FDB entries are removed when the nexthop group that they use is deleted. Test that entries are not deleted when a single nexthop in the group is deleted. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Currently, the in-kernel delete notification is emitted from the error path of nexthop_add() and replace_nexthop(), which can be confusing to in-kernel listeners as they are not familiar with the nexthop. Instead, only emit the notification when the nexthop is actually deleted. The following sub-cases are covered:
1. User space deletes the nexthop
2. The nexthop is deleted by the kernel due to a netdev event (e.g., nexthop device going down)
3. A group is deleted because its last nexthop is being deleted
4. The network namespace of the nexthop device is deleted
Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Currently, the only listener of the nexthop notification chain is the VXLAN driver. Subsequent patches will add more listeners (e.g., device drivers such as netdevsim) that need to be able to block when processing notifications. Therefore, convert the notification chain to a blocking one. This is safe as notifications are always emitted from process context. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
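The conversion follows the standard notifier-chain pattern; a minimal generic sketch of that pattern (not the literal nexthop code, which registers listeners per network namespace via register_nexthop_notifier()):

    #include <linux/notifier.h>

    /* Before: atomic chain; callbacks must not sleep. */
    static ATOMIC_NOTIFIER_HEAD(example_chain_atomic);

    /* After: blocking chain; safe because notifications are only sent
     * from process context, and listeners (e.g. drivers programming
     * hardware) may now sleep in their callbacks.
     */
    static BLOCKING_NOTIFIER_HEAD(example_chain);

    int example_notifier_register(struct notifier_block *nb)
    {
            return blocking_notifier_chain_register(&example_chain, nb);
    }

    int example_call_notifiers(unsigned long event, void *info)
    {
            return blocking_notifier_call_chain(&example_chain, event, info);
    }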
-
Ido Schimmel authored
Not used anywhere. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Suggested-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ido Schimmel authored
Not used or implemented anywhere. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-