- 21 Aug, 2019 1 commit
-
-
Merge git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
David S. Miller authored
Jeff Kirsher says: ==================== 100GbE Intel Wired LAN Driver Updates 2019-08-20 This series contains updates to the ice driver only. Brett fixes the detection of a hung transmit ring by checking the software based tail (next_to_use) to determine if there is pending work. Updates the driver to assume that using more than one receive queue per receive ring container is a rare case, so use unlikely() in the case where we actually need to divide our budget for multiple queues. Fixes an issue where the write-back on ITR bit was not being set when interrupts are disabled, so write-backs during polling only happened when a cache line was filled. Cleans up unnecessary wait times during VF bring up and reset paths. Increases the mailbox size for receive queues that are used to communicate with VFs to accommodate the large number of VFs that the driver can support. Akeem restructures the initialization flows for VFs, including how VFs are configured and resources allocated, so that when we clean up resources, we do not try to free resources that were never allocated. Organizes code to ensure that VF specific code is located in the SR-IOV specific file. Paul fixes an issue when setting the pause parameter which was incorrectly blocking users from changing receive or transmit pause settings. Ensures register access for the MSIX vector index is only done in the PF space and not absolute device space. Usha fixes a potential kernel hang in the DCB rebuild path when in CEE mode, where the ETS recommended DCB configuration is not being set, or not set correctly. Mitch updates the driver to process all receive descriptors, regardless of the size of the associated data. Tony fixes an issue during the reset/rebuild path of a PF VSI where we were assuming that the PF VSI was always to be enabled, which could attempt to bring up a PF VSI on a downed interface, leading to various crashes. Pawel fixes up variable definitions to match the type of data being stored. v2: Dropped patch 1 of the series to add ethtool support to query/add channels on a VSI, while we re-work the functionality to match the ethtool expected behavior to report combined (Tx and Rx) numbers. v3: Updated patch 4 to use kzalloc() and kfree() instead of devm_kzalloc() and devm_kfree(). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Aug, 2019 39 commits
-
-
Brett Creeley authored
When we fail to add or delete MAC filters in the VF, the log message doesn't distinguish between the two operations. Fix that by printing whether the failure happened while adding or deleting the MAC filter. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
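As an illustration only (not the actual patch), a minimal sketch of a log message that distinguishes add from delete; all names here are assumptions:

    #include <linux/device.h>
    #include <linux/types.h>

    /* Sketch: report whether the failed operation was an add or a delete.
     * dev, add, mac_addr and vf_id are illustrative stand-ins, not driver code. */
    static void report_mac_filter_failure(struct device *dev, bool add,
                                          const u8 *mac_addr, int vf_id)
    {
            dev_err(dev, "Failed to %s MAC filter %pM for VF %d\n",
                    add ? "add" : "delete", mac_addr, vf_id);
    }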
-
Pawel Kaminski authored
These queue variables are being assigned values that are type u16. Change the local variables to match these types. Since these represent queue counts, they should never be negative. Signed-off-by: Pawel Kaminski <pawel.kaminski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Akeem G Abodunrin authored
In order to use some of the VF resource definitions in the SR-IOV specific virtchnl header file, this patch moves the applicable code to ice_virtchnl_pf.h, where it should have been defined originally. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
Currently we use the ICE_MBXQ_LEN for both the Mailbox send and receive queues that are used to communicate with VFs. This is fine for the send queue because the PF driver will lock the queue for every single send, but for the Mailbox receive queue every VF is posting to its Mailbox send queue and the hardware is then handing the message to the PF on its Mailbox receive queue. This becomes a problem with many VFs because it overburdens the Mailbox receive queue on the PF. Fix this by increasing the Mailbox receive queue for the PF to 512 entries. The number 512 was determined based on the number of VFs supported by the device. We can have a total of 256 VFs, so in the worst case this allows the VFs to put 2 messages in the PF's Mailbox receive queue at the same time. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
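A rough sketch of the asymmetric sizing described above; the macro names and the send-queue value are assumptions for illustration, only the 512-entry receive queue comes from the summary:

    /* Illustrative sketch only -- names and the send-queue length are assumed. */
    #define ICE_MBXSQ_LEN    64     /* PF serializes sends, so a small send queue suffices */
    #define ICE_MBXRQ_LEN   512     /* 256 VFs x 2 pending messages each in the worst case */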
-
Brett Creeley authored
Currently there are a couple of places where the VF is waiting too long when checking the status of registers. This is causing the AVF driver to spin for longer than necessary in the __IAVF_STARTUP state. Sometimes it causes the AVF to go into the __IAVF_COMM_FAILED state, which may retrigger the __IAVF_STARTUP state. Try to reduce the chance of this happening by removing unnecessary wait times in VF bringup/resets. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Paul Greenwalt authored
Register access for GLINT_DYN_CTL and GLINT_VECT2FUNC should be within the PF space and not the absolute device space. Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Tony Nguyen authored
During rebuild ice_ena_vsi() is called to recover the VSI state. This function assumes the PF VSI is always to be enabled; however, it's possible that during reset/rebuild the interface can be brought down. If this occurs, we can attempt to bring up the PF VSI on a downed interface which can lead to various crashes. If the interface is not running, do not bring up the associated VSI. Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
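A hedged sketch of the described check; only netif_running() is real kernel API here, the helper around it is an assumption, not the exact patch:

    #include <linux/netdevice.h>

    /* Sketch: only re-enable a VSI whose netdev was actually running before the
     * reset; if the interface was brought down, leave the VSI down as well. */
    static bool vsi_should_be_enabled(const struct net_device *netdev)
    {
            return netdev && netif_running(netdev);
    }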
-
Mitch Williams authored
In some circumstances, the hardware will hand us a receive descriptor which has no data attached, but is otherwise valid. The receive code was improperly ignoring these descriptors, which results in an infinite loop. To fix this, change the receive code to process all descriptors, regardless of the size of the associated data. Add checks to the memory-handling functions to allow for zero size. Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Usha Ketineni authored
This patch fixes the set local MIB AQ call failures in the DCB rebuild path by setting the defaults for the ETS recommended DCB configuration. Also, the willing bits for the DCB configuration need to be set correctly. Resets work fine in IEEE mode as the ETS recommended DCB configuration is populated, but not in CEE mode. Without this patch, PFR causes a kernel hang in CEE mode. Signed-off-by: Usha Ketineni <usha.k.ketineni@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
Currently when busy polling is enabled we aren't setting/enabling WB_ON_ITR in the driver. This doesn't break the driver, but it does cause issues. If we don't enable WB_ON_ITR mode we will still get write-backs from hardware during polling when a cache line has been filled, but if a cache line is not filled we will not get the write-back because WB_ON_ITR is not set. Fix this by enabling WB_ON_ITR in the driver when interrupts are disabled. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
David S. Miller authored
Merge tag 'linux-can-next-for-5.4-20190820' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next Marc Kleine-Budde says: ==================== pull-request: can-next 2019-08-20 this is a pull request for net-next/master consisting of 18 patches. The first patch is by Geert Uytterhoeven, it removes the unused platform data support from the rcar_can driver. A patch by Nishka Dasgupta marks the structure peak_pciec_i2c_bit_ops in the peak_pci driver as constant. A patch by me removes the custom DMA support from the hi311x driver. The next 4 patches target the tcan4x5x driver and are also by me, they first clean up the driver a bit, and then add missing error handling and fix a bug in the length calculation in the regmap callbacks. The next 2 patches are by me for the m_can_platform driver, they also remove unneeded casts and add missing error handling. The remaining 9 patches all target the mcp251x driver. The first 5 are clean up patches by me, the next relaxes the timing in the mcp251x_hw_reset() function. Alexander Shiyan's patch improves the name which is used while registering the interrupt handler. Phil Elwell's patch improves the mcp251x_open() function to use the DT-supplied interrupt flags instead of hard coding them. The final patch is again by me, it removes the custom DMA support from the mcp251x driver. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Paul Greenwalt authored
When ETHTOOL_GLINKSETTINGS is defined, get pause param reports the software-configured pause->autoneg setting; when it is not defined, it reports the link status. Set pause param needs to compare pause->autoneg against the same source that get pause param uses when deciding whether to block the user from changing autoneg, otherwise the user may be incorrectly blocked from changing the Rx/Tx pause settings. Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
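A minimal sketch of comparing against the same autoneg source in set_pauseparam; the helper, the blocking policy and the use of the driver's own get_pauseparam callback are assumptions for illustration, not the driver's actual implementation:

    #include <linux/errno.h>
    #include <linux/ethtool.h>
    #include <linux/netdevice.h>

    /* Sketch: fetch the value that get_pauseparam would report and only block
     * the request if autoneg itself is being changed; Rx/Tx changes pass through.
     * Assumes the driver provides a get_pauseparam callback in ops. */
    static int set_pauseparam_sketch(struct net_device *netdev,
                                     struct ethtool_pauseparam *pause,
                                     const struct ethtool_ops *ops)
    {
            struct ethtool_pauseparam cur = {};

            ops->get_pauseparam(netdev, &cur);
            if (pause->autoneg != cur.autoneg)
                    return -EOPNOTSUPP;

            /* ... apply Rx/Tx pause changes ... */
            return 0;
    }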
-
David S. Miller authored
Julian Wiedmann says: ==================== s390/net: updates 2019-08-20 please apply the following patches to net-next. This series brings a mix of cleanups and small improvements for various parts of qeth's control path. Also, a minor cleanup for ctcm and lcs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
lcs passes an intparm when calling ccw_device_*(), even though lcs_irq() later makes no use of this. To reduce the confusion, consistently pass 0 as intparm instead. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
ctcm passes an intparm when calling ccw_device_*(), even though ctcm_irq_handler() later makes no use of this. To reduce the confusion, consistently pass 0 as intparm instead. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
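For illustration, a hedged sketch of the call shape after this change; only the constant 0 intparm is the point, the wrapper and remaining arguments are placeholders:

    #include <asm/cio.h>
    #include <asm/ccwdev.h>

    /* Sketch: intparm (third argument) is always 0 now, since the interrupt
     * handler never reads it; cdev, cpa, lpm and flags are illustrative context. */
    static int start_io_sketch(struct ccw_device *cdev, struct ccw1 *cpa)
    {
            return ccw_device_start(cdev, cpa, 0 /* intparm */, 0 /* lpm */, 0 /* flags */);
    }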
-
Julian Wiedmann authored
We have logic to determine the desired promisc mode in _each_ code path. Change things around so that there is a clean split between (a) high-level code that selects the new mode, and (b) implementations of the various mechanisms to program this mode. This also keeps qeth_promisc_to_bridge() from polluting the debug logs on each RX modeset. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Alexandra Winter <wintera@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
When processing the reply for a vnicc cmd, there's no need to remember which specific sub-cmd type we initially sent. The reply itself contains all the needed information. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Except for card->read_cmd, every cmd we issue now passes through qeth_send_control_data() and allocates a qeth_reply struct. The way we use this struct requires additional refcounting and pointer tracking. Clean up things by moving most of qeth_reply's content into the main cmd struct. This keeps things in one place, saves us the additional refcounting and simplifies the overall code flow. A nice little benefit is that we can now match incoming replies against the pending requests themselves, without caching the requests' seqnos. The qeth_reply struct stays around for a little bit longer in a shrunk form, to avoid touching every single callback. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Current code releases the cmd struct after its initial IO has completed. Any reply processing is done independently, using a separate qeth_reply struct. In preparation for merging the cmd and reply structs together, take an additional reference on the cmd object so that it stays around all the way until qeth_send_control_data() returns. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
qeth_snmp_command_cb() is the only cmd callback that pulls the reply's data length from a low-level transport header field. This requires additional complexity (ie. reply->offset) to make the header accessible to what is supposed to be a pure IPA cmd callback. Adapter cmds have a length field in their sub-cmd header, get the data length from there instead. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
When a cmd IO completes in qeth_irq(), calculate how much data was processed by the device and pass this value to the cmd's callback. This allows cmds that retrieve data from the device to check whether sufficient data was received, so we do that in qeth_read_conf_data_cb(). Suggested-by: Jens Remus <jremus@linux.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Julian Wiedmann authored
Rather than fumbling with hard-coded offsets, use the proper struct to access the retrieved RCD information. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
If CONFIG_INET is not set, building fails: drivers/net/netdevsim/dev.o: In function `nsim_dev_trap_report_work': dev.c:(.text+0x67b): undefined reference to `ip_send_check' Use ip_fast_csum instead of ip_send_check to avoid dependencies on CONFIG_INET. Reported-by: Hulk Robot <hulkci@huawei.com> Fixes: da58f90f ("netdevsim: Add devlink-trap support") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Ido Schimmel <idosch@mellanox.com> Tested-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
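For reference, ip_send_check() is essentially a thin wrapper around ip_fast_csum(), so open-coding the checksum removes the dependency on CONFIG_INET-only code. A sketch of the equivalent computation (the helper name is illustrative):

    #include <linux/ip.h>
    #include <net/checksum.h>

    /* Sketch: recompute the IPv4 header checksum without calling ip_send_check(). */
    static void recompute_ip_checksum(struct iphdr *iph)
    {
            iph->check = 0;
            iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
    }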
-
David S. Miller authored
Vivien Didelot says: ==================== net: dsa: enable and disable all ports The DSA stack currently calls the .port_enable and .port_disable switch callbacks for slave ports only. However, it is useful to call them for all port types. For example this allows some drivers to delay the optimization of power consumption after the switch is setup. This can also help reducing the setup code of drivers a bit. The first DSA core patches enable and disable all ports of a switch, regardless their type. The last mv88e6xxx patches remove redundant code from the driver setup and the said callbacks, now that they handle SERDES power for all ports. Changes in v2: do not guard .port_disable for broadcom switches. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Now that mv88e6xxx_serdes_power is only called after driver setup, we can wrap the SERDES IRQ code directly within it for clarity. Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
SERDES is powered on for CPU and DSA ports and powered down for unused ports at setup time. But now that DSA calls mv88e6xxx_port_enable and mv88e6xxx_port_disable for all ports, the SERDES power can be handled unconditionally for all ports after setup. Using the port enable and disable callbacks also has the benefit of handling the SERDES IRQ for non-user ports as well. Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
When disabling a port, it is not for the driver to decide what to do with the STP state. This is already handled by the DSA layer. Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
Call the .port_enable and .port_disable functions for all ports, not only the user ports, so that drivers may optimize the power consumption of all ports after a successful setup. Unused ports are now disabled on setup. CPU and DSA ports are now enabled on setup and disabled on teardown. User ports were already enabled at slave creation and disabled at slave destruction. Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vivien Didelot authored
The .port_enable and .port_disable operations are currently only called for user ports, hence assuming they have a slave device. In preparation for using these operations for other port types as well, simply guard all implementations against non user ports and return directly in that case. Note that bcm_sf2_sw_suspend() currently calls bcm_sf2_port_disable() (and thus b53_disable_port()) against the user and CPU ports, so do not guard those functions. They will be called for unused ports in the future, but that was expected by those drivers anyway. Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
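A minimal sketch of that kind of guard, using the dsa_is_user_port() helper; the driver callback around it is illustrative, not any specific driver's code:

    #include <linux/phy.h>
    #include <net/dsa.h>

    /* Sketch: bail out early for non-user ports so the existing slave-oriented
     * logic stays untouched. The function name is an illustrative placeholder. */
    static int example_port_enable(struct dsa_switch *ds, int port,
                                   struct phy_device *phy)
    {
            if (!dsa_is_user_port(ds, port))
                    return 0;

            /* ... existing per-slave enable logic ... */
            return 0;
    }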
-
Vivien Didelot authored
It is currently difficult to read the different steps involved in the setup and teardown of ports in the DSA code. Keep it simple with a single switch statement for each port type: UNUSED, CPU, DSA, or USER. Also no need to call devlink_port_unregister from within dsa_port_setup as this step is unconditionally handled by dsa_port_teardown on error. Signed-off-by: Vivien Didelot <vivien.didelot@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
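A simplified sketch of the single-switch-statement shape described above; the per-case bodies are placeholder comments, not the real setup steps:

    #include <linux/errno.h>
    #include <net/dsa.h>

    /* Sketch: one switch statement per port type; error handling elided. */
    static int port_setup_sketch(struct dsa_port *dp)
    {
            switch (dp->type) {
            case DSA_PORT_TYPE_UNUSED:
                    /* nothing to set up, the port will simply be disabled */
                    return 0;
            case DSA_PORT_TYPE_CPU:
            case DSA_PORT_TYPE_DSA:
                    /* shared-port setup steps go here */
                    return 0;
            case DSA_PORT_TYPE_USER:
                    /* user-port (slave netdev) setup steps go here */
                    return 0;
            }
            return -EINVAL;
    }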
-
Akeem G Abodunrin authored
This patch restructures how VFs are configured and resources allocated. Instead of freeing resources that were never allocated, and resetting empty VFs that have never been created, the new flow simply allocates resources for the number of requested VFs based on availability. During VF initialization, the global interrupt is disabled and re-armed after getting MSIX vectors for the VFs. This allows immediate mailbox communication, instead of delaying it until later, which had caused VF-to-PF communication to fall back to polling instead of using an actual interrupt. The issue manifested when creating a higher number of VFs (128 VFs) per PF. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
Currently we divide budget by the number of Rx queues per Rx ring container in ice_napi_poll even if there is only 1. This is an unnecessary divide for the normal case of 1 Rx ring per Rx ring container. Fix this by using an unlikely() call in the case where we actually need to divide. Also, we will always set budget_per_ring even if there are no Rx rings in the Rx ring container so we don't need to initialize it to 0. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
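A hedged sketch of the division-avoiding logic described above; the helper and its parameters are assumptions, the point is the unlikely() annotation around the divide:

    #include <linux/kernel.h>

    /* Sketch: split the NAPI budget only in the uncommon case of more than one
     * Rx ring per container; otherwise hand the whole budget to the single ring. */
    static int budget_per_ring(int budget, unsigned int nr_rx_rings)
    {
            if (unlikely(nr_rx_rings > 1))
                    return max_t(int, budget / nr_rx_rings, 1);

            return budget;
    }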
-
Brett Creeley authored
Currently in ice_get_tx_pending we try to read a Tx ring's tail. This is then compared with the software based head (next_to_clean) to determine if we have pending work. This will never work because reading of the Tx ring's tail is no longer supported. Fix this by using the software based tail (next_to_use) to determine if there is pending work. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
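A minimal sketch of computing pending work from the software indices, as described; the struct and field names follow the summary but are assumptions here, not the driver's types:

    #include <linux/types.h>

    /* Sketch: pending Tx descriptors derived purely from software bookkeeping,
     * with next_to_use as the software tail and next_to_clean as the software head. */
    struct tx_ring_sketch {
            u16 next_to_use;        /* software tail */
            u16 next_to_clean;      /* software head */
            u16 count;              /* ring size */
    };

    static u32 tx_pending_sketch(const struct tx_ring_sketch *ring)
    {
            u16 head = ring->next_to_clean;
            u16 tail = ring->next_to_use;

            if (head == tail)
                    return 0;

            return head < tail ? tail - head : tail + ring->count - head;
    }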
-
Hayes Wang authored
Move the tx bottom function from NAPI to a new tasklet. Then, on multi-core systems, the bottom functions of tx and rx may run at the same time on different cores. This is used to improve performance. On x86, Tx/Rx 943/943 Mbits/sec -> 945/944. For the arm platform, Tx/Rx: 917/917 Mbits/sec -> 933/933. Signed-off-by: Hayes Wang <hayeswang@realtek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Marc Kleine-Budde authored
There is no need to duplicate what the SPI core already does, i.e. mapping buffers for DMA capable transfers. This patch removes all related pieces of code. Tested-by: Sean Nyekjaer <sean@geanix.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Phil Elwell authored
The MCP2515 datasheet clearly describes a level-triggered interrupt pin. Therefore the receiving interrupt controller must also be configured for level-triggered operation, otherwise there is a danger of a missed interrupt condition blocking all subsequent interrupts. The ONESHOT flag ensures that the interrupt is masked until the threaded interrupt handler exits. Rather than change the flags globally (they must have worked for at least one user), keep the old behavior for non-DT devices. DT based devices specify the flags in their corresponding DT node. See: https://github.com/raspberrypi/linux/issues/2175 https://github.com/raspberrypi/linux/issues/2263 Signed-off-by: Phil Elwell <phil@raspberrypi.org> Tested-by: Sean Nyekjaer <sean@geanix.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Alexander Shiyan authored
Passing the driver name as the name during request_threaded_irq() results in all CAN IRQs having the same name. This does not help much to easily identify which IRQ belongs to which CAN instance. Therefore pass dev_name() during request_threaded_irq() so that a better identifiable name is listed for CAN devices in the cat /proc/interrupts output. Output of cat /proc/interrupts before this patch: 253: 2 gpio-mxc 13 Edge mcp251x 259: 2 gpio-mxc 19 Edge mcp251x After this patch: 253: 2 gpio-mxc 13 Edge spi1.1 259: 2 gpio-mxc 19 Edge spi1.2 Signed-off-by: Alexander Shiyan <shc_work@mail.ru> Tested-by: Sean Nyekjaer <sean@geanix.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
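A hedged sketch of the registration call after this change; only dev_name() and request_threaded_irq() are the point, the wrapper, handler and flags are illustrative context:

    #include <linux/interrupt.h>
    #include <linux/spi/spi.h>

    /* Sketch: request the CAN IRQ with dev_name(&spi->dev) (e.g. "spi1.1")
     * instead of a fixed driver name; the other arguments are placeholders. */
    static int request_can_irq_sketch(struct spi_device *spi, irq_handler_t thread_fn,
                                      unsigned long flags, void *priv)
    {
            return request_threaded_irq(spi->irq, NULL, thread_fn,
                                        flags | IRQF_ONESHOT, dev_name(&spi->dev), priv);
    }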
-
Marc Kleine-Budde authored
Some boards take longer than 5ms to power up after a reset, so allow some retries attempts before giving up. Fixes: ff06d611 ("can: mcp251x: Improve mcp251x_hw_reset()") Cc: linux-stable <stable@vger.kernel.org> Tested-by: Sean Nyekjaer <sean@geanix.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-
Marc Kleine-Budde authored
This patch changes all the uint8_t arguments in several functions to u8. Acked-by: Sean Nyekjaer <sean@geanix.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
-