- 01 Aug, 2019 22 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
David S. Miller authored
Jeff Kirsher says: ==================== 100GbE Intel Wired LAN Driver Updates 2019-07-31 This series contains updates to the ice driver only. Paul adds support for reporting what the link partner is advertising for flow control settings. Jake fixes the hardware statistics handling, which is prone to rollover since the statistics registers are either 32 or 40 bits wide depending on which register is being read; a 64-bit software statistic now accumulates the hardware values so they keep counting past the rollover point. He also fixes an issue with the locking of the control queue, where locks were being destroyed at run time. Tony fixes an issue introduced when interrupt tracking was refactored and the call to ice_vsi_setup_vector_base() was removed from the PF VSI instead of the VF VSI, and adds a check before trying to configure a port to ensure that media is attached. Brett fixes an issue in the receive queue configuration where prefena (Prefetch Enable) was being set to 0, which caused the hardware to fetch descriptors only when there were none free in the cache for a received packet. He also updates the driver to bump the receive tail once per napi_poll call, instead of the current model of bumping the tail up to 4 times per napi_poll call, adds statistics for receive drops at the port level to ethtool/netlink, and cleans up duplicate code in the receive buffer allocation path. Akeem updates the driver to ensure that VFs stay disabled until setup or reset is completed, and modifies the driver to use the allocated number of transmit queues per VSI to set up the scheduling tree, instead of the total number of available transmit queues. He also fixes the driver to update the total number of configured queues after a successful VF request to change its number of queues, before updating the corresponding VSI for that VF, and removes flags that are no longer needed. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Huazhong Tan says: ==================== net: hns3: some code optimizations & bugfixes & features This patch set includes code optimizations, bugfixes and features for the HNS3 ethernet controller driver. [patch 01/12] adds support for reporting link change events. [patch 02/12] adds a handler for NCSI errors. [patch 03/12] fixes a bug related to debugfs. [patch 04/12] adds a code optimization for setting ring parameters. [patch 05/12 - 09/12] add some cleanups. [patch 10/12 - 12/12] add some patches related to reset issues. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huazhong Tan authored
Currently, when hclge_reset_event() is called again within HCLGE_RESET_INTERVAL, it returns directly. If it is never called again, an error that needs a reset to fix will never be fixed. So this patch activates the reset timer for this case and adds a check at the end of the reset procedure, so that such an error gets fixed earlier. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huazhong Tan authored
Currently, the reset interrupt is cleared in the reset task, which is too late: once the hardware finishes the previous reset, it can begin a new global/IMP reset, and if that new reset is of the same type as the previous one, the driver clears both together. The driver then never learns that there is another reset, while the hardware keeps waiting for the driver to handle the second one. So this patch clears the PF's reset interrupt status in hclge_irq_handle(); since the hardware waits for a handshake from the driver before doing a reset, the driver and hardware now handle resets one by one. Additionally, when a VF performs a global/IMP reset, it reads the PF's reset interrupt register to find out whether the PF driver's re-initialization is done, because the VF's re-initialization must happen after the PF's. A new command and a register bit are added for this: when the VF receives the reset interrupt it sets this bit, and when the PF finishes re-initialization it sends a command to clear the bit, after which the VF performs its re-initialization. Fixes: 4ed340ab ("net: hns3: Add reset process in hclge_main") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
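A minimal sketch of the ordering described above, with invented register, bit, and helper names (none of these are the hns3 driver's actual identifiers): the IRQ handler acknowledges only the reset source it is about to handle, so a later reset of the same type is still visible as pending.

```c
#include <linux/interrupt.h>
#include <linux/bitops.h>

enum { EXAMPLE_GLOBAL_RESET };			/* hypothetical reset type */
#define EXAMPLE_GLOBAL_RESET_INT_B	BIT(0)	/* hypothetical status bit */

struct example_dev {
	unsigned long reset_pending;		/* bitmap of pending resets */
};

/* hypothetical register accessors and deferred-work scheduling */
u32 example_read_reset_int_status(struct example_dev *hdev);
void example_clear_reset_int(struct example_dev *hdev, u32 bits);
void example_reset_task_schedule(struct example_dev *hdev);

/* Acknowledge only the reset source being handled, so a second reset of
 * the same type raised later by hardware is still seen as pending.
 */
static irqreturn_t example_misc_irq_handle(int irq, void *data)
{
	struct example_dev *hdev = data;
	u32 status = example_read_reset_int_status(hdev);

	if (status & EXAMPLE_GLOBAL_RESET_INT_B) {
		set_bit(EXAMPLE_GLOBAL_RESET, &hdev->reset_pending);
		/* Ack this source now; the hardware waits for the driver's
		 * handshake before starting the next reset.
		 */
		example_clear_reset_int(hdev, EXAMPLE_GLOBAL_RESET_INT_B);
		example_reset_task_schedule(hdev);
	}

	return IRQ_HANDLED;
}
```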
-
Huazhong Tan authored
Currently, the driver sets a handshake status to tell the hardware that it has brought the netdev down and the hardware can continue with the reset process. The driver clears this handshake status when re-initializing the CMDQ, but does not restore it when a reset fails, which may leave the hardware waiting for the handshake status to be set and unable to continue with the reset process. So this patch delays clearing the handshake status until just before bringing the netdev back up, and restores the status when a reset fails. Additionally, this patch adds a new function hclge(vf)_reset_handshake() to deal with the reset handshake, and renames HCLGE(VF)_NIC_CMQ_ENABLE to HCLGE(VF)_NIC_SW_RST_RDY, which describes this register bit more accurately. Fixes: ada13ee3 ("net: hns3: add handshake with hardware while doing reset") Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Guojia Liao authored
The member 'mac_add' defined in hclge_mac_ethertype_idx_rd_cmd holds a MAC address, so 'mac_addr' is a better name for it. Signed-off-by: Guojia Liao <liaoguojia@huawei.com> Signed-off-by: Guangbin Huang <huangguangbin2@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Weihang Li authored
The 4th and 5th parameters of hclge_cmd_query_error are unused, so this patch removes them. Signed-off-by: Weihang Li <liweihang@hisilicon.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
When hclge_tm_schd_info_update calls hclge_tm_schd_info_init to initialize the scheduling info, hdev->tm_info.num_pg and hdev->tx_sch_mode are not changed, which makes the checks in hclge_tm_schd_info_init unnecessary. So this patch moves the hdev->tm_info.num_pg and hdev->tx_sch_mode checks into hclge_tm_schd_init and changes the return type of hclge_tm_schd_info_init from int to void. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yunsheng Lin authored
The unused_count variable indicates how many RX BDs need a new buffer attached in hns3_clean_rx_ring, and the clean_count variable has a similar meaning. This patch removes the clean_count variable and uses unused_count uniformly to indicate the RX BDs that need a new buffer attached. It also cleans up some coding style issues related to variable assignment in hns3_clean_rx_ring. Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
The local variable return_status in hclge_get_mac_val_cmd_status() is unnecessary, so this patch returns the error code directly instead of using this variable. Also, replace some '%d' with '%u' in hclge_get_mac_val_cmd_status(). Signed-off-by: Jian Shen <shenjian15@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
Previously, when changing the ring parameters, we freed the old ring resources first and then set up the new ring resources. If a memory allocation failed, there were no resources left to use. This patch refines the flow by setting up the new ring resources before freeing the old ones. It also reduces the maximum ring BD number to 32760 according to the UM (user manual). Signed-off-by: Jian Shen <shenjian15@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
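A sketch of the allocate-before-free ordering the patch describes. The structures and helpers (example_alloc_rings(), example_map_rings(), example_free_rings()) are placeholders, not the hns3 driver's actual API; the point is only that a failed allocation leaves the old rings intact.

```c
#include <linux/errno.h>
#include <linux/types.h>

struct example_ring;				/* opaque ring descriptor set */

struct example_nic {
	struct example_ring *rings;
};

/* hypothetical allocation helpers */
struct example_ring *example_alloc_rings(struct example_nic *nic, u32 desc_num);
int example_map_rings(struct example_nic *nic, struct example_ring *rings);
void example_free_rings(struct example_ring *rings);

/* Allocate the new rings first so a failed allocation leaves the old,
 * working rings in place.
 */
static int example_set_ringparam(struct example_nic *nic, u32 new_desc_num)
{
	struct example_ring *new_rings, *old_rings;
	int ret;

	new_rings = example_alloc_rings(nic, new_desc_num);
	if (!new_rings)
		return -ENOMEM;		/* old rings untouched, device keeps working */

	ret = example_map_rings(nic, new_rings);
	if (ret) {
		example_free_rings(new_rings);
		return ret;
	}

	old_rings = nic->rings;
	nic->rings = new_rings;
	example_free_rings(old_rings);	/* only now release the old resources */
	return 0;
}
```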
-
Yufeng Mo authored
Some commands are not supported on DCB-unsupported ports. This patch distinguishes these commands and does not query unsupported commands in debugfs. It also fixes an error in the dump "qos buf cfg" command in debugfs. Fixes: 2849d4e7 ("net: hns3: Add "tc config" info query function") Fixes: 7d9d7f88 ("net: hns3: Add "qos buffer" config info query function") Signed-off-by: Yufeng Mo <moyufeng@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Huazhong Tan authored
When NCSI encounters a HW error, the IMP reports this error to the driver by sending a mailbox message. After receiving this message, the driver should assert a global reset to fix this kind of HW error. Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jian Shen authored
Previously, the PF updated the link status once per second. Some scenarios require link down events to be reported more quickly. To solve this, the firmware now pushes the link change event to the PF with a CMDQ message, and the driver updates the link status directly. Signed-off-by: Jian Shen <shenjian15@huawei.com> Reviewed-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
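The pattern behind this and the following coccinelle-detected cleanups, shown as a generic before/after sketch (the probe function is a placeholder): devm_platform_ioremap_resource() folds platform_get_resource() and devm_ioremap_resource() into a single call.

```c
#include <linux/platform_device.h>
#include <linux/err.h>

static int example_probe(struct platform_device *pdev)
{
	void __iomem *base;

	/* Before:
	 *   struct resource *res;
	 *   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	 *   base = devm_ioremap_resource(&pdev->dev, res);
	 *
	 * After: one helper does both steps and keeps the error handling.
	 */
	base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(base))
		return PTR_ERR(base);

	return 0;
}
```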
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
YueHaibing authored
Use devm_platform_ioremap_resource() to simplify the code a bit. This is detected by coccinelle. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 31 Jul, 2019 18 commits
-
-
Nikolay Aleksandrov authored
In user-space there's no way to distinguish why an mdb entry was deleted, and that is a problem for daemons which would like to keep the mdb in sync with remote ends (e.g. mlag) but would also like to converge faster. In almost all cases we'd like to age out the remote entry for performance and convergence reasons, except when fast-leave is enabled. In that case we want an explicit immediate remote delete, thus add an mdb flag which is set only when the entry is being deleted due to fast-leave. Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lucas Bates authored
The -d command line argument to tdc requires the name of a physical device on the system where the tests will be run. If -d has not been used, tdc will skip tests that require a physical device. This patch is intended to better document what the -d option does and how it is used. Signed-off-by: Lucas Bates <lucasb@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
David S. Miller authored
Saeed Mahameed says: ==================== mlx5-updates-2019-07-29 This series includes updates to the mlx5 driver: 1) Simplifications, cleanup and warning print improvements 2) From Vlad Buslov: Refactor mlx5 tc flow handling for unlocked execution (Part 1) Currently, all cls API hardware offload driver callbacks require the caller to hold the rtnl lock when calling them. The cls API has already been updated to update software filters in parallel (on classifiers that support unlocked execution), however the hardware offloads code still obtains the rtnl lock before calling driver tc callbacks. This set implements partial support for unlocked execution that is leveraged by follow-up refactorings in specific mlx5 tc subsystems, plus a patch to the cls API that allows drivers to register their callbacks as rtnl-unlocked. In the mlx5 tc code, mlx5e_tc_flow is the main structure used to represent a tc filter. Currently, the structure itself and its handlers in both the tc and eswitch layers do not implement any kind of synchronization and instead assume external global synchronization provided by the rtnl lock. Implement the following changes to remove the dependency on the rtnl lock in flow handling code, intended as groundwork for subsequent changes that provide fully rtnl-independent mlx5 tc: - Extend struct mlx5e_tc_flow with an atomic reference counter and rcu to allow concurrent access from multiple tc and neigh update workqueue instances without introducing any additional locks specific to the structure. Its 'flags' field type is changed to atomic bitmask ops, which is necessary for tc to interact with other concurrent tc instances or a concurrent neigh update that need to skip flows that are not fully initialized (new INIT_DONE flow flag) and can change the flags according to neighbor state (flipping the OFFLOADED flag). - Protect the unready flows list with a new uplink_priv->unready_flows_lock mutex. - Convert calls to netdev APIs that require the rtnl lock in flow handling code to their rcu counterparts. - Modify eswitch code that is called from the tc layer and assumes implicit external synchronization to be concurrency safe: change the esw->offloads.num_flows type to an atomic integer and re-arrange esw->state_lock usage to protect additional data. Some of the approaches to synchronization presented in this patch set are quite complicated (lockless concurrent usage of data structures with rcu and reference counting, fine-grained locking where necessary, retry mechanisms to handle concurrent insertion of another instance of a data structure with the same key, etc.). This is necessary to allow calling the firmware in parallel in most cases, which is the main motivation of this change, since firmware calls are much heavier operations than atomic operations, a multitude of locks, and potential retries during concurrent accesses to the same elements. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
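A condensed sketch of the synchronization pattern described above (reference counting + RCU + atomic bit flags). The structure and flag names are illustrative stand-ins, not mlx5's actual definitions; only the kernel primitives (refcount_t, kfree_rcu, set_bit/test_bit) are real.

```c
#include <linux/refcount.h>
#include <linux/rcupdate.h>
#include <linux/bitops.h>
#include <linux/slab.h>

enum example_flow_flag {
	EXAMPLE_FLOW_FLAG_INIT_DONE,	/* flow fully initialized */
	EXAMPLE_FLOW_FLAG_OFFLOADED,	/* flipped by neigh update */
};

struct example_flow {
	refcount_t refcnt;
	unsigned long flags;		/* accessed with set_bit/test_bit */
	struct rcu_head rcu;
};

/* Readers take a reference only if the flow is still alive. */
static struct example_flow *example_flow_get(struct example_flow *flow)
{
	return refcount_inc_not_zero(&flow->refcnt) ? flow : NULL;
}

static void example_flow_put(struct example_flow *flow)
{
	if (refcount_dec_and_test(&flow->refcnt))
		kfree_rcu(flow, rcu);	/* free after concurrent RCU readers finish */
}

/* Concurrent workers skip flows that are not fully initialized. */
static bool example_flow_ready(struct example_flow *flow)
{
	return test_bit(EXAMPLE_FLOW_FLAG_INIT_DONE, &flow->flags);
}
```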
-
Tony Nguyen authored
Update driver version to 0.7.5 Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Akeem G Abodunrin authored
As a result of the refactoring of the VF VSI interrupt code, there is no need to track its configuration status with the ICE_VF_STATE_CFG_INTR flag anymore. In fact, the flag is not checked anywhere in the code right now, so this patch removes the dead code associated with it. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
This flag is not needed and is checked every time we re-enable interrupts in the hotpath, so remove it. Also remove ice_vsi_req_irq() because it was a wrapper function for ice_vsi_req_irq_msix() whose sole purpose was checking the ICE_FLAG_MSIX_ENA flag. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Akeem G Abodunrin authored
Since Tx rings are managed by FW/NVM, the Tx rings might not have been set up, or the driver may have already wiped them off; in that case, the call to disable the LAN Tx queue returns a does-not-exist error. This patch makes sure we don't return an unnecessary error for this scenario. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
Currently, if the call to ice_alloc_mapped_page() fails, we jump to the no_buf label, possibly call ice_release_rx_desc(), and return true, indicating that there is more work to do. In the success case we just fall out of the while loop, possibly call ice_release_rx_desc(), and return false, saying we exhausted cleaned_count. This flow can be improved by breaking out of the loop if ice_alloc_mapped_page() fails, so that the flow outside of the while loop is the same for both the failure and success cases. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
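A sketch of the restructured refill loop: break on allocation failure so the post-loop path is shared by both outcomes. The ring layout is simplified and the example_* helpers are hypothetical stand-ins for the functions named in the commit message.

```c
#include <linux/types.h>

struct example_ring {
	u16 next_to_use;
	u16 count;
};

/* hypothetical helpers mirroring the ones named in the commit message */
bool example_alloc_mapped_page(struct example_ring *ring, u16 idx);
void example_release_rx_desc(struct example_ring *ring, u16 ntu);

/* Give 'cleaned_count' buffers back to hardware; return true if we could
 * not refill everything, i.e. more work remains.
 */
static bool example_alloc_rx_bufs(struct example_ring *ring, u16 cleaned_count)
{
	u16 ntu = ring->next_to_use;

	while (cleaned_count) {
		/* break instead of jumping to a separate failure label... */
		if (!example_alloc_mapped_page(ring, ntu))
			break;

		ntu = (ntu + 1) % ring->count;
		cleaned_count--;
	}

	/* ...so failure and success share the same post-loop path */
	if (ring->next_to_use != ntu) {
		example_release_rx_desc(ring, ntu);	/* publish descriptors */
		ring->next_to_use = ntu;
	}

	return !!cleaned_count;	/* non-zero: allocation failed before finishing */
}
```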
-
Brett Creeley authored
Currently we are not reporting dropped counts at the port level to ethtool or netlink. This was found when debugging Rx drop issues: the total packets sent did not equal the total packets received minus rx_dropped, which was very confusing. To determine dropped counts at the port level we need to read the PRTRPB_RDPC register. To fix reporting, we store the dropped counts in the PF's rx_discards. This is reported to netlink by storing it in the PF VSI's rx_missed_errors, signaling that the receiver missed the packets. We also report this to ethtool in the rx_dropped.nic field. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
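A hedged sketch of how such a port-level drop counter can be folded into the standard stats paths. The struct layout and the register-read helper are assumptions for illustration; only rx_missed_errors is a real net_device stats field, and the ethtool "rx_dropped.nic" name follows the commit message.

```c
#include <linux/netdevice.h>

struct example_pf {
	struct net_device *netdev;
	u64 rx_discards;			/* port-level drop counter */
};

/* hypothetical accessor, e.g. reads the PRTRPB_RDPC register */
u64 example_read_rdpc_counter(struct example_pf *pf);

/* Fold the port-level Rx drop counter into the standard reporting paths. */
static void example_update_port_drops(struct example_pf *pf)
{
	u64 dropped = example_read_rdpc_counter(pf);

	/* ethtool -S: exported as the "rx_dropped.nic" port statistic */
	pf->rx_discards = dropped;

	/* ip -s link / netlink: report as packets the receiver missed */
	pf->netdev->stats.rx_missed_errors = dropped;
}
```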
-
Akeem G Abodunrin authored
When a VF requests a change to its number of queues and the request is successful, we need to update the number of queues configured on the VF before updating the corresponding VSI for that VF, especially the LAN Tx queue tree and TC update; otherwise, we would continue to use the old value of vf->num_vf_qs for the allocated Tx/Rx queues. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Akeem G Abodunrin authored
This patch uses the allocated number of Tx queues per VSI to set up its scheduling tree, instead of using the total number of available Tx queues. Only PF VSIs have an allocated number of Tx queues equal to the number of available Tx queues; other VSIs have a different number of queues configured. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Brett Creeley authored
Currently we bump the Rx tail and release/give buffers to hardware every 16 descriptors. This causes us to bump the Rx tail up to 4 times per napi_poll call. We are also always bumping the tail on an odd index, which is a problem because the hardware ignores the lower 3 bits in the QRX_TAIL register, so the hardware sees tail bumps only every 8 descriptors. Instead, only bump the Rx tail once per napi_poll call, and only if the value aligns with the hardware's expectation that the lower 3 bits are cleared. Also only release/give Rx buffers once per napi_poll call. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
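A sketch of the tail-bump rule described above: since a QRX_TAIL-style register drops the low 3 bits, only 8-aligned writes actually move the tail, so skip the MMIO write otherwise. The ring structure here is a simplified assumption, not the driver's real one.

```c
#include <linux/io.h>
#include <linux/types.h>

struct example_ring {
	void __iomem *tail;	/* MMIO tail register, e.g. QRX_TAIL(q) */
	u16 next_to_use;
};

/* Publish descriptors to hardware at most once per napi_poll; only
 * 8-aligned tail values take effect, so skip non-aligned writes.
 */
static void example_release_rx_desc(struct example_ring *ring, u16 val)
{
	ring->next_to_use = val;

	if (!(val & 0x7)) {
		/* descriptor writes must be visible before the tail bump */
		wmb();
		writel(val, ring->tail);
	}
}
```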
-
Akeem G Abodunrin authored
This patch adds code to keep the VFs' enable status cleared until the reset is completed and the Tx/Rx rings are set up. Without this patch, the code flow requests Tx queues to be disabled after reset, especially a PFR, where the VF VSI Tx rings have already been wiped off in the NVM; this results in an adminq error from the call to disable the Tx LAN queue in ice_reset_all_vfs. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Tony Nguyen authored
The firmware reports an error when trying to configure a port with no media. Instead of always configuring the port, check for media before attempting to configure it. In the absence of media, turn off the link and poll for media to become available before re-enabling the link. Move ice_force_phys_link_state() up to avoid a forward declaration. Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
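A sketch of the "no media" handling under stated assumptions: every name below (the flag, the PF struct, and all helpers) is a placeholder, not the ice driver's actual API; the shape of the logic follows the commit message.

```c
#include <linux/bitops.h>
#include <linux/types.h>

enum { EXAMPLE_FLAG_NO_MEDIA };		/* hypothetical PF flag */

struct example_pf {
	DECLARE_BITMAP(flags, 8);
};

/* hypothetical helpers */
bool example_port_has_media(struct example_pf *pf);
void example_force_link_down(struct example_pf *pf);
int example_apply_phy_config(struct example_pf *pf);

/* Skip PHY configuration while no media is present; a periodic service
 * task polls the flag and re-enables the link once media shows up.
 */
static int example_configure_port(struct example_pf *pf)
{
	if (!example_port_has_media(pf)) {
		example_force_link_down(pf);
		set_bit(EXAMPLE_FLAG_NO_MEDIA, pf->flags);
		return 0;
	}

	clear_bit(EXAMPLE_FLAG_NO_MEDIA, pf->flags);
	return example_apply_phy_config(pf);
}
```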
-
Jacob Keller authored
The ice_init_all_ctrlq and ice_shutdown_all_ctrlq functions create and destroy the locks used to protect the send and receive process of each control queue. This is problematic, as the driver may use these functions to shutdown and re-initialize the control queues at run time. For example, it may do this in response to a device reset. If the driver failed to recover from a reset, it might leave the control queues offline. In this case, the locks will no longer be initialized. A later call to ice_sq_send_cmd will then attempt to acquire a lock that has been destroyed. It is incorrect behavior to access a lock that has been destroyed. Indeed, ice_aq_send_cmd already tries to avoid accessing an offline control queue, but the check occurs inside the lock. The root of the problem is that the locks are destroyed at run time. Modify ice_init_all_ctrlq and ice_shutdown_all_ctrlq such that they no longer create or destroy the locks. Introduce new functions, ice_create_all_ctrlq and ice_destroy_all_ctrlq. Call these functions in ice_init_hw and ice_deinit_hw. Now, the control queue locks will remain valid for the life of the driver, and will not be destroyed until the driver unloads. This also allows removing a duplicate check of the sq.count and rq.count values when shutting down the controlqs. The ice_shutdown_ctrlq function already checks this value under the lock. Previously commit dec64ff1 ("ice: use [sr]q.count when checking if queue is initialized") needed this check to happen outside the lock, because it prevented duplicate attempts at destroying the locks. The driver may now safely use ice_init_all_ctrlq and ice_shutdown_all_ctrlq while handling reset events, without causing the locks to be invalid. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
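A sketch of the lifecycle split the commit describes: lock creation and destruction move into dedicated create/destroy wrappers called once at init/deinit time, while the init/shutdown pair can run repeatedly across resets without touching the locks. The structures and function bodies are simplified illustrations, not the driver's actual code.

```c
#include <linux/mutex.h>

struct example_ctrlq {
	struct mutex sq_lock;	/* protects the send queue */
	struct mutex rq_lock;	/* protects the receive queue */
};

struct example_hw {
	struct example_ctrlq adminq;
};

int example_init_all_ctrlq(struct example_hw *hw);	/* safe to call across resets */
void example_shutdown_all_ctrlq(struct example_hw *hw);	/* safe to call across resets */

/* Called once during driver init: create locks, then initialize queues. */
static int example_create_all_ctrlq(struct example_hw *hw)
{
	mutex_init(&hw->adminq.sq_lock);
	mutex_init(&hw->adminq.rq_lock);

	return example_init_all_ctrlq(hw);
}

/* Called once during driver teardown: shut queues down, then destroy locks. */
static void example_destroy_all_ctrlq(struct example_hw *hw)
{
	example_shutdown_all_ctrlq(hw);

	mutex_destroy(&hw->adminq.sq_lock);
	mutex_destroy(&hw->adminq.rq_lock);
}
```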
-
Brett Creeley authored
Currently we always set prefena to 0. This causes the hardware to fetch descriptors only when there are none free in its cache for a received packet, instead of prefetching as soon as it has used the last descriptor, regardless of incoming packets. Fix this by allowing the hardware to prefetch Rx descriptors. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
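A hedged sketch of the fix: set the prefetch-enable field when building the Rx queue context instead of leaving it at 0. The context structure below is a minimal assumption that only follows the commit's description; the real context has many more fields.

```c
#include <linux/types.h>

/* Simplified Rx-queue context; the real layout has many more fields. */
struct example_rlan_ctx {
	u8 prefena;	/* descriptor prefetch enable */
};

/* Enable descriptor prefetch so the hardware refills its descriptor
 * cache ahead of incoming packets instead of waiting until it is empty.
 */
static void example_setup_rx_ctx(struct example_rlan_ctx *ctx)
{
	ctx->prefena = 1;	/* was implicitly left at 0 before the fix */
}
```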
-
Tony Nguyen authored
When interrupt tracking was refactored, during rebuild, the call to ice_vsi_setup_vector_base() was inadvertently removed from the PF VSI instead of being removed from the VF VSI. During reset, the failure to properly setup the vector base generates a call trace. Correct this so that resets/rebuilds properly complete. Fixes: cbe66bfe ("ice: Refactor interrupt tracking") Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Jacob Keller authored
Currently, ice_stat_update32 and ice_stat_update40 will limit the value of the software statistic to 32 or 40 bits wide, depending on which register is being read. This means that if a driver is running for a long time, the displayed software register values will roll over to zero at 40 bits or 32 bits. This occurs because the functions directly assign the difference between the previous value and current value of the hardware statistic. Instead, add this value to the current software statistic, and then update the previous value. In this way, each time ice_stat_update40 or ice_stat_update32 are called, they will increment the software tracking value by the difference of the hardware register from its last read. The software tracking value will correctly count up until it overflows a u64. The only requirement is that the ice_stat_update functions be called at least once each time the hardware register overflows. While we're fixing ice_stat_update40, modify it to use rd64 instead of two calls to rd32. Additionally, drop the now unnecessary hireg function parameter. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
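A sketch of the accumulate-the-delta approach described above: mask the hardware reading to the register width, add the delta to a u64 software counter, and remember the last raw value. The hw struct and the rd64 accessor are assumed stand-ins for the driver's own; the arithmetic mirrors the commit's description and stays correct as long as the function runs at least once per hardware rollover.

```c
#include <linux/bits.h>
#include <linux/types.h>

struct example_hw;
u64 rd64(struct example_hw *hw, u32 reg);	/* assumed register accessor */

/* Accumulate a 40-bit hardware counter into a 64-bit software statistic. */
static void example_stat_update40(struct example_hw *hw, u32 reg,
				  bool prev_loaded, u64 *prev, u64 *stat)
{
	u64 new_data = rd64(hw, reg) & (BIT_ULL(40) - 1);

	if (!prev_loaded)
		*prev = new_data;	/* first read after reset: just latch it */

	if (new_data >= *prev)
		*stat += new_data - *prev;
	else
		/* the hardware counter wrapped since the last read */
		*stat += (new_data + BIT_ULL(40)) - *prev;

	*prev = new_data;		/* remember the raw value for the next delta */
}
```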
-