- 26 Mar, 2019 23 commits
-
-
Anirudh Venkataramanan authored
Remove some redundant text in the function header for __ice_vsi_get_qs. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Anirudh Venkataramanan authored
Single-statement if conditions don't need braces. Remove them. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Anirudh Venkataramanan authored
Commit 37bb8390 ("ice: Move common functions out of ice_main.c part 7/7") seems to have inadvertently introduced a function prototype for ice_vsi_cfg_tc without a corresponding function implementation. Remove it. Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
Currently we aren't checking for the ICE_FC_NONE case for the current flow control mode. This causes "Unknown" to be printed for the current flow control mode if flow control is disabled. Fix this by adding the case for ICE_FC_NONE to print "None". Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
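A minimal sketch of the kind of switch this describes. ICE_FC_NONE comes from the commit text; the other enumerators and the field path hw->port_info->fc.current_mode are assumptions for illustration, not a quote of the driver's actual function.

    /* Illustrative: map the negotiated flow-control mode to a printable
     * string, including the previously unhandled ICE_FC_NONE case.
     */
    switch (hw->port_info->fc.current_mode) {
    case ICE_FC_FULL:
        fc_str = "Rx/Tx";
        break;
    case ICE_FC_TX_PAUSE:
        fc_str = "Tx";
        break;
    case ICE_FC_RX_PAUSE:
        fc_str = "Rx";
        break;
    case ICE_FC_NONE:
        fc_str = "None";          /* the case this commit adds */
        break;
    default:
        fc_str = "Unknown";
        break;
    }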
-
Brett Creeley authored
Currently the ice_q_vector structure and ice_ring_container structure are taking up more space than necessary due to cache alignment holes and unnecessary variables, respectively. This is not helping the driver's performance. The following fixes were done to improve cache alignment, reduce wasted space, and increase performance. 1. Remove the ice_latency_range enum as it is unused. 2. Remove the latency_range variable in the ice_ring_container structure. 3. Change the size of itr_idx in the ice_ring_container structure from an int to a u16. This reduces the size of the ice_ring_container structure to 32 bytes, so it has no holes or padding. 4. Re-arrange the ice_q_vector structure using pahole to align members as well as possible with respect to the 64 byte cache line size. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
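A rough sketch of the kind of packing change pahole guides (64-bit build assumed). Only itr_idx and its int-to-u16 change come from the commit; the other field names and the overall layout are hypothetical.

    struct example_ring_container {            /* hypothetical layout */
        struct example_ring *ring;             /*  8 bytes */
        unsigned int total_bytes;              /*  4 bytes */
        unsigned int total_pkts;               /*  4 bytes */
        unsigned int itr_setting;              /*  4 bytes, hypothetical member */
        u16 itr_idx;                           /*  2 bytes: was int (4) per the commit */
        u16 itr_target;                        /*  2 bytes, hypothetical member */
    };                                         /* 24 bytes, no holes or tail padding */

In practice the holes are found by running pahole on the built object, e.g. "pahole -C ice_ring_container drivers/net/ethernet/intel/ice/ice.ko", and the members are then reordered or shrunk until the structure fits cache lines cleanly.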
-
Preethi Banala authored
If the filter already exists, do not go through the error path flow but instead continue to process the rest of the function. Hence, add an appropriate check after adding MAC filters. Signed-off-by: Preethi Banala <preethi.banala@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Akeem G Abodunrin authored
This patch fixes an issue that occurs when a VF attempts to remove the default LAN/MAC address, which is programmed by the administrator. We shouldn't return an error for the call by the VF, but continue with the remaining steps to handle the MAC opcode. Also update the dev_err message to explicitly say that the VF can't change a MAC programmed by the PF. Also change "mac" to "MAC" for kernel print statements in the same file. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Mitch Williams authored
The VPINT_MBX_CTL register array must be programmed to enable VF admin queue interrupts. Without this, VFs never get interrupts on vector 0, and some VF drivers will fail to init. Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Anirudh Venkataramanan authored
commit 63f545ed ("ice: Add support for adaptive interrupt moderation") was meant to add support for adaptive interrupt moderation but there was an error on my part while formatting the patch, and thus only part of the patch ended up being submitted. This patch rectifies the error by adding the rest of the code. Fixes: 63f545ed ("ice: Add support for adaptive interrupt moderation") Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
This patch implements the following pci_error_handler ops: .error_detected = ice_pci_err_detected .slot_reset = ice_pci_err_slot_reset .reset_notify = ice_pci_err_reset_notify .reset_prepare = ice_pci_err_reset_prepare .reset_done = ice_pci_err_reset_done .resume = ice_pci_err_resume Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
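A sketch of how such a handler table is typically wired into the PCI core. The ice_pci_err_* callback names are taken from the commit text; the struct members shown are the standard struct pci_error_handlers fields, and the .reset_notify hook listed in the commit text is omitted here because it no longer exists in recent kernels. Treat the whole snippet as illustrative.

    /* Illustrative wiring of the callbacks named above into the PCI core.
     * The callback bodies themselves are not shown.
     */
    static const struct pci_error_handlers ice_pci_err_handler = {
        .error_detected = ice_pci_err_detected,
        .slot_reset     = ice_pci_err_slot_reset,
        .reset_prepare  = ice_pci_err_reset_prepare,
        .reset_done     = ice_pci_err_reset_done,
        .resume         = ice_pci_err_resume,
    };

    static struct pci_driver ice_driver = {
        /* .name, .id_table, .probe, .remove, etc. elided */
        .err_handler = &ice_pci_err_handler,
    };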
-
Brett Creeley authored
Currently we check if the __ICE_PREPARED_FOR_RESET bit is set prior to calling ice_prepare_for_reset in ice_reset_subtask(), but we aren't checking that bit in ice_do_reset() before calling ice_prepare_for_reset(). This is not consistent and can cause issues if ice_prepare_for_reset() is called prior to ice_do_reset(). Fix this by checking if the __ICE_PREPARED_FOR_RESET bit is set internal to ice_prepare_for_reset(). Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
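A hedged sketch of the pattern described: move the guard inside the prepare routine so every caller gets it. The flag and function names follow the commit text; the body is illustrative.

    static void ice_prepare_for_reset(struct ice_pf *pf)
    {
        /* Illustrative: bail out if preparation was already done,
         * regardless of whether we were called from ice_do_reset()
         * or from ice_reset_subtask().
         */
        if (test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
            return;

        /* ... disable VFs, stop queues, shut down control queues ... */

        set_bit(__ICE_PREPARED_FOR_RESET, pf->state);
    }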
-
Mitch Williams authored
When communicating with the AVF driver, we need to use the status codes from virtchnl.h, not our own ice-specific codes. Without this, when an error occurs, the VF will report nonsensical results. NOTE: this depends on changes made to include/linux/avf/virtchnl.h by commit bb58fd7e ("i40e: Update status codes") Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jeremiah Kyle authored
Two log messages contained newlines in the middle of the message. This resulted in unexpected driver log output. This patch removes the newlines to restore consistency with the rest of the driver log messages. Signed-off-by: Jeremiah Kyle <jeremiah.kyle@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Flavio Leitner authored
When the conntrack is initialized, there is no helper attached yet, so the NAT info initialization (nf_nat_setup_info) skips adding the seqadj ext. A helper is attached later when the conntrack is not confirmed but is going to be committed. In this case, if NAT is needed then the seqadj ext is added as well. Fixes: 16ec3d4f ("openvswitch: Fix cached ct with helper.") Signed-off-by: Flavio Leitner <fbl@sysclose.org> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Ioana Ciornei authored
Take advantage of the software Rx batching by using netif_receive_skb_list instead of napi_gro_receive. Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
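A rough sketch of the batching pattern described. The driver-specific poll loop and example_build_skb() are illustrative, but LIST_HEAD(), list_add_tail() and netif_receive_skb_list() are the real kernel APIs.

    /* Illustrative Rx poll loop: collect skbs on a list and hand the
     * whole batch to the stack at once instead of one napi_gro_receive()
     * call per frame.
     */
    LIST_HEAD(rx_list);

    while (cleaned < budget && (skb = example_build_skb(ch)) != NULL) {
        /* ... fill in protocol, checksum, hash, etc. ... */
        list_add_tail(&skb->list, &rx_list);
        cleaned++;
    }

    netif_receive_skb_list(&rx_list);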
-
Wei Yongjun authored
Fix the return value check which was testing the wrong variable in tipc_mcast_send_sync(). Fixes: c55c8eda ("tipc: smooth change between replicast and broadcast") Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Heiner Kallweit says: ==================== net: phy: aquantia: report Aquantia-specific settings and features This series detects and reports a number of Aquantia-specific settings and features. v2: - propagate timeout in patch 2 ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
The AQCS109 supports a proprietary 2-pair 1Gbps mode. The standard registers don't allow distinguishing between 1000BaseT and 1000BaseT2. Add reporting of this proprietary mode based on a vendor register. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Heiner Kallweit authored
Add reporting of firmware details. These details are available only once the firmware has finished initializing the chip. This can take some time, and we need to poll for init completion. v2: - Propagate the timeout in aqr107_wait_reset_complete(). Don't bail out completely on timeout because the chip may be functional even without a firmware image. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
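A sketch of the poll-for-completion pattern described. phy_read_mmd() and MDIO_MMD_VEND1 are real kernel identifiers; the EXAMPLE_* register and bit names, the function name, and the retry budget are assumptions for illustration, not the actual aqr107 implementation.

    /* Illustrative: wait for the PHY firmware to report init complete.
     * Per the commit, a timeout here is treated as non-fatal by the
     * caller, since the chip may still work without a firmware image.
     */
    static int example_wait_reset_complete(struct phy_device *phydev)
    {
        int retries = 100;
        int val;

        do {
            val = phy_read_mmd(phydev, MDIO_MMD_VEND1, EXAMPLE_FW_STATUS_REG);
            if (val < 0)
                return val;
            if (val & EXAMPLE_FW_INIT_DONE)
                return 0;
            msleep(20);
        } while (--retries);

        return -ETIMEDOUT;
    }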
-
Heiner Kallweit authored
If both link partners are Aquantia PHYs then additional information is exchanged as part of the auto-negotiation. Report remote capabilities if the link partner is an Aquantia PHY. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vladimir Oltean authored
When phylink_of_phy_connect fails, dsa_slave_phy_setup tries to save the day by connecting to an alternative PHY, none other than a PHY on the switch's internal MDIO bus, at an address equal to the port's index. However, this does not account for the scenario where the switch that failed to probe an external PHY does not have an internal MDIO bus at all. Fixes: aab9c406 ("net: dsa: Plug in PHYLINK support") Signed-off-by: Vladimir Oltean <olteanv@gmail.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
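A hedged sketch of the kind of guard the fix implies: only fall back to the switch's internal MDIO bus if that bus actually exists. The member names follow common DSA conventions (ds->slave_mii_bus, dp->index), but the snippet is illustrative rather than the exact fix.

    /* Illustrative: don't dereference a non-existent internal MDIO bus
     * when connecting to the external PHY failed.
     */
    if (ret) {
        if (!ds->slave_mii_bus)
            return ret;              /* no internal bus to fall back to */

        phydev = mdiobus_get_phy(ds->slave_mii_bus, dp->index);
        if (!phydev)
            return -ENODEV;
        /* ... connect to the internal PHY instead ... */
    }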
-
Heiner Kallweit authored
Simplify aqr_config_aneg(). Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored (merge of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue)
Jeff Kirsher says: ==================== 100GbE Intel Wired LAN Driver Updates 2019-03-25 This series contains updates to the ice driver only. Victor updates the ice driver to be able to update the VSI queue configuration dynamically, by providing the ability to increase or decrease the VSI's number of queues. Michal fixes an issue where the VLAN switch rule was lost (i.e. not added) when the VM starts or the VF driver is reloaded, so ensure it gets added in these cases. Brett updates the driver to support link events over the admin receive queue, instead of polling link events. Maciej refactors the code a bit to introduce a new function to fetch the Rx buffer and do the DMA synchronization, to reduce code duplication. Also added ice_can_reuse_rx_page() to verify whether a page can be reused, so that in the future we can use this check elsewhere in the driver. Additional driver optimizations allow us to drop ice_pull_tail() altogether. Added support for bulk updates of the refcount instead of doing it one by one. Refactored the page counting and buffer recycling so that this code can be used to clean up receive buffers when there is no skb allocated, as with XDP. Added the DMA_ATTR_WEAK_ORDERING and DMA_ATTR_SKIP_CPU_SYNC attributes to the DMA API during the mapping operations on the receive side, so that non-x86 platforms will be able to sync only what is being used (2k buffers) instead of the entire page. Dave fixes the driver to perform the most intrusive of the resets requested and clear the other request bits so that we do not end up with repeated resets. Bruce adds an iterator macro to clean up several for() loops. Chinh modifies the packet flags to be more generic so that they can be used for both receive and transmit. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 25 Mar, 2019 15 commits
-
-
Chinh T Cao authored
This structure is used to define the packet flags. These flags are applicable to both Tx and Rx packets. Thus, this patch changes its name from ice_rx_flag64_bits to ice_flg64_bits, and updates its member definitions. Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com> Reviewed-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Bruce Allan authored
There are numerous for() loops iterating over each of the max traffic classes. Use a simple iterator macro instead to make the code cleaner. Signed-off-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
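A sketch of the iterator idiom described. The macro name and the ena_tc test follow common ice driver conventions, but treat the exact definition and usage as illustrative rather than the merged code.

    /* Illustrative: wrap the repeated "for each traffic class" loop. */
    #define ice_for_each_traffic_class(i) \
        for ((i) = 0; (i) < ICE_MAX_TRAFFIC_CLASS; (i)++)

    /* before: for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) { ... } */
    ice_for_each_traffic_class(i) {
        if (!(vsi->tc_cfg.ena_tc & BIT(i)))
            continue;
        /* ... per-TC setup ... */
    }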
-
Preethi Banala authored
Update the VF VSI TC info along with vsi->num_txq/num_rxq when the VF requests to configure queues. Signed-off-by: Preethi Banala <preethi.banala@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Dave Ertman authored
In the current implementation of ice_reset_subtask, if multiple reset types are set in pf->state, only the most intrusive one is meant to be performed, but the bits requesting the other types are not being cleared. This would lead to another reset being performed the next time the service task is scheduled. Change the flow of ice_reset_subtask so that all reset request bits in pf->state are cleared, while we still perform the most intrusive of the resets requested. Signed-off-by: Dave Ertman <david.m.ertman@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
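A rough sketch of the flow change described: latch and clear every pending request bit up front, then perform only the most intrusive reset. The flag and enum names follow the ice naming convention but are assumptions for illustration.

    /* Illustrative: read and clear every pending reset request, then act
     * on the most intrusive of them, so a stale bit can't retrigger a
     * reset on the next service task run.
     */
    bool globr = test_and_clear_bit(__ICE_GLOBR_REQ, pf->state);
    bool corer = test_and_clear_bit(__ICE_CORER_REQ, pf->state);
    bool pfr   = test_and_clear_bit(__ICE_PFR_REQ, pf->state);

    if (globr)
        ice_do_reset(pf, ICE_RESET_GLOBR);
    else if (corer)
        ice_do_reset(pf, ICE_RESET_CORER);
    else if (pfr)
        ice_do_reset(pf, ICE_RESET_PFR);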
-
Maciej Fijalkowski authored
Provide the DMA_ATTR_WEAK_ORDERING and DMA_ATTR_SKIP_CPU_SYNC attributes to the DMA API during the mapping operations on the Rx side. With this change, non-x86 platforms will be able to sync only what is being used (2k buffer) instead of the entire page. This should yield a slight performance improvement. Furthermore, a DMA unmap may destroy changes that were made to the buffer by the CPU when the platform is not an x86 one; using the DMA_ATTR_SKIP_CPU_SYNC attribute fixes this issue. Also add a sync_single_for_device call during the Rx buffer assignment, to make sure that the cache lines are cleared before the device attempts to write to the buffer. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
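A sketch of the mapping and sync calls this describes. dma_map_page_attrs(), dma_sync_single_range_for_device(), dma_unmap_page_attrs() and the two attributes are the real DMA API; the ICE_RX_DMA_ATTR macro name and the surrounding variables (dev, page, dma, offset, rx_buf_len) are illustrative.

    #define ICE_RX_DMA_ATTR (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)

    /* map a full page once; only the 2k slice handed to HW gets synced */
    dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
                             DMA_FROM_DEVICE, ICE_RX_DMA_ATTR);

    /* before giving the buffer to the device, flush the CPU's view of
     * the 2k region so the device writes into clean cache lines */
    dma_sync_single_range_for_device(dev, dma, offset, rx_buf_len,
                                     DMA_FROM_DEVICE);

    /* ... later, when freeing the buffer ... */
    dma_unmap_page_attrs(dev, dma, PAGE_SIZE, DMA_FROM_DEVICE,
                         ICE_RX_DMA_ATTR);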
-
Maciej Fijalkowski authored
Refactor ice_fetch_rx_buf and ice_add_rx_frag in a way that we have standalone functions that do either the skb construction or frag addition to a previously constructed skb. The skb handling between rx_bufs is spread among various functions. ice_get_rx_buf will retrieve the skb pointer from rx_buf; if it is a NULL pointer then we do ice_construct_skb, otherwise we add a frag to the current skb via ice_add_rx_frag. Then, in ice_put_rx_buf, the skb pointer that belongs to rx_buf is cleared. Moving further, if the current frame is not an EOP frame, we assign the current skb to the rx_buf that is pointed to by the updated next_to_clean indicator. What is more, during buffer reuse, assign each member of ice_rx_buf individually so we avoid an unnecessary copy of the skb. Last but not least, this logic split will allow for better code reuse when adding support for build_skb. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Maciej Fijalkowski authored
Pull out the code responsible for page counting and buffer recycling so that it will be possible to clean up the Rx buffers in cases where we won't allocate an skb (e.g. XDP). Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Maciej Fijalkowski authored
{get,put}_page are atomic operations which we use for page count handling. The current logic for refcount handling is that we increment it when passing a skb with the data from the first half of the page up to the netstack and recycle the second half of the page. This operation protects us from losing a page, since the network stack can decrement the refcount of the page from the skb. Performance can be slightly improved by doing bulk updates of the refcount instead of doing it one by one. During buffer initialization, maximize the page's refcount and don't allow the refcount to become less than two. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
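A hedged sketch of the bulk-refcount idea: charge the page with a large bias once, track what the driver "owns" in a local counter, and only touch the atomic page refcount when the bias runs low. page_ref_add() is a real helper; the pagecnt_bias field name and the USHRT_MAX sizing are assumptions for illustration.

    /* at allocation: one atomic op buys many future recycles */
    page_ref_add(page, USHRT_MAX - 1);
    rx_buf->pagecnt_bias = USHRT_MAX;

    /* per packet: giving half the page to the stack just consumes bias */
    rx_buf->pagecnt_bias--;

    /* on recycle: replenish the bias before it can reach one */
    if (unlikely(rx_buf->pagecnt_bias == 1)) {
        page_ref_add(page, USHRT_MAX - 1);
        rx_buf->pagecnt_bias = USHRT_MAX;
    }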
-
Maciej Fijalkowski authored
Instead of adding a frag and later, when dealing with the EOP frame, accessing that frag in order to copy the headers onto the linear part of the skb, we can do this in ice_add_rx_frag in the case where data_len is still 0 and the frame won't fit onto the linear part as a whole. The function comment of ice_pull_tail was a bit misleading because of the optimizations it mentioned (dropping a frag / maintaining an accurate truesize of the skb) - it seems that this part of the logic was dropped and the comment was not updated to reflect this change. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Maciej Fijalkowski authored
Introduce ice_can_reuse_rx_page which will verify whether the page can be reused and return the boolean result to caller. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
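A sketch of the kind of check such a helper performs. The conditions shown are the usual Intel-driver page-reuse heuristics and reuse the illustrative pagecnt_bias field from the sketch above; treat the exact body as an assumption, not the merged code.

    static bool example_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
    {
        struct page *page = rx_buf->page;

        /* illustrative: pages from a remote NUMA node are not recycled */
        if (page_to_nid(page) != numa_mem_id())
            return false;

        /* illustrative: if anyone besides the driver still holds
         * references beyond our bias, the page can't be flipped and
         * reused */
        if (page_count(page) - rx_buf->pagecnt_bias > 1)
            return false;

        return true;
    }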
-
Maciej Fijalkowski authored
Introduce ice_get_rx_buf, which will fetch the Rx buffer and do the DMA synchronization. The length of the packet that the hardware Rx descriptor contains is now read in ice_clean_rx_irq, so we can feed ice_get_rx_buf with it and drop the rx_desc argument previously passed to ice_fetch_rx_buf and ice_add_rx_frag. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
The hardware now supports link events over the admin receive queue (ARQ), so enable HW link events over the ARQ and remove code for link event polling. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Reviewed-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Alan Brady authored
Someone went through the effort of making this a variable so let's use it instead of recalculating it again. Signed-off-by: Alan Brady <alan.brady@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Michal Swiatkowski authored
The VLAN rule is lost when VM starts or the AVF driver (iavf.ko) is reloaded. So it is necessary to add this rule again. Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Victor Raj authored
When a VSI increases its number of queues dynamically, the scheduler just needs to add the new required nodes rather than readjusting with the previously allocated number of nodes. Readjusting didn't provide enough parents to add the upper layer nodes, and also couldn't place the LAN and RDMA subtrees separately. In the decrease case, keep the VSI configuration with the max number of queues always. This will leave some extra nodes in the tree, but no harm is done. Signed-off-by: Victor Raj <victor.raj@intel.com> Reviewed-by: Bruce Allan <bruce.w.allan@intel.com> Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
- 24 Mar, 2019 2 commits
-
-
David S. Miller authored
Jiri Pirko says: ==================== devlink: small spring cleanup Mostly cosmetics and janitor work. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jiri Pirko authored
Some drivers are becoming more dependent on NET_DEVLINK being selected in configuration. With upcoming compat functions, the behavior would be wrong in case devlink was not compiled in. So make the drivers select NET_DEVLINK and rely on the functions being there, not just stubs. Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-