- 14 Feb, 2019 20 commits
-
-
Jesper Dangaard Brouer authored
The page_pool API is using page->private to store DMA addresses. As pointed out by David Miller we can't use that on 32-bit architectures with 64-bit DMA. This patch adds a new dma_addr_t struct to allow storing DMA addresses. Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Acked-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
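The core of the issue is a width mismatch, sketched below in plain kernel C (the struct and helper names are illustrative, not the actual struct page layout): page->private is an unsigned long, which is 32 bits wide on 32-bit architectures, while dma_addr_t can be 64 bits wide there when CONFIG_ARCH_DMA_ADDR_T_64BIT is set, so a dedicated dma_addr_t field is needed to hold the full mapping.

```c
#include <linux/types.h>

/* Hypothetical stand-in for the page_pool words added to struct page. */
struct example_pool_page {
	dma_addr_t dma_addr;	/* keeps full width even when
				 * sizeof(dma_addr_t) > sizeof(unsigned long) */
};

static void example_store_dma(struct example_pool_page *pp, dma_addr_t map)
{
	/* Unlike set_page_private(page, (unsigned long)map), this never
	 * truncates a 64-bit DMA address on a 32-bit kernel.
	 */
	pp->dma_addr = map;
}
```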
-
YueHaibing authored
Remove duplicated include. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
John Hurley authored
With the introduction of flow_stats_update(), drivers now update the stats fields of the passed tc_cls_flower_offload struct, rather than call tcf_exts_stats_update() directly to update the stats of offloaded TC flower rules. However, if multiple qdiscs are registered to a TC shared block and a flower rule is applied, then, when getting stats for the rule, multiple callbacks may be made; currently, the values in tc_cls_flower_offload only account for the last stats callback in the list. Take this into consideration by modifying flow_stats_update() to gather the stats from all callbacks. Fixes: 3b1903ef ("flow_offload: add statistics retrieval infrastructure and use it") Signed-off-by: John Hurley <john.hurley@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
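A minimal sketch of the accumulating form of the helper, assuming the flow_stats fields (pkts, bytes, lastused) from include/net/flow_offload.h; each callback in the shared-block list now adds its counters instead of overwriting the previous callback's values:

```c
static inline void flow_stats_update(struct flow_stats *flow_stats,
				     u64 bytes, u64 pkts, u64 lastused)
{
	/* Accumulate rather than assign, so stats from every qdisc
	 * registered to the shared block are summed. */
	flow_stats->pkts	+= pkts;
	flow_stats->bytes	+= bytes;
	flow_stats->lastused	= max_t(u64, flow_stats->lastused, lastused);
}
```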
-
Vlad Buslov authored
Recently introduced tc_setup_flow_action() can fail when parsing tcf_exts on some unsupported action commands. However, this should not affect the case when the user did not explicitly request hw offload by setting the skip_sw flag. Modify tc_setup_flow_action() callers to only propagate the error if the skip_sw flag is set for the filter being offloaded, and set an extack error message in that case. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Fixes: 3a7b6861 ("cls_api: add translator to flow_action representation") Signed-off-by: David S. Miller <davem@davemloft.net>
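A hedged sketch of the caller-side pattern (function and variable names are illustrative, not the exact cls_flower code): a tc_setup_flow_action() failure only fails the filter when skip_sw was requested; otherwise the software datapath still classifies traffic and the offload error is swallowed.

```c
static int example_hw_replace(struct tcf_exts *exts,
			      struct flow_action *action,
			      bool skip_sw, struct netlink_ext_ack *extack)
{
	int err = tc_setup_flow_action(action, exts);

	if (err) {
		if (skip_sw) {
			/* User demanded hardware-only offload: report it. */
			NL_SET_ERR_MSG(extack, "Failed to setup flow action");
			return err;
		}
		return 0;	/* fall back to software classification */
	}

	/* ... continue issuing the hardware offload request ... */
	return 0;
}
```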
-
Colin Ian King authored
There are some statements that are indented incorrectly. Fix these. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Colin Ian King authored
There are some statements in an if-block that are not correctly indented. Fix these. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Wei authored
dev_consume_skb_irq() should be called in eth_txdone_irq() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
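The same pattern applies across this whole dev_consume_skb_irq() series of driver fixes below. A hedged sketch of a TX-completion interrupt path (only dev_consume_skb_irq()/dev_kfree_skb_irq() are real APIs; the other names are illustrative): a successfully transmitted skb is released with dev_consume_skb_irq() so dropwatch/perf don't count it as a drop, while error paths keep dev_kfree_skb_irq().

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>

struct example_priv {			/* hypothetical driver private data */
	struct sk_buff *tx_skb;
};

static void example_txdone_irq(struct net_device *dev, bool tx_ok)
{
	struct example_priv *priv = netdev_priv(dev);

	if (tx_ok)
		dev_consume_skb_irq(priv->tx_skb);	/* xmit done: not a drop */
	else
		dev_kfree_skb_irq(priv->tx_skb);	/* error path: a real drop */
	priv->tx_skb = NULL;
}
```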
-
Yang Wei authored
dev_consume_skb_irq() should be called in at91ether_interrupt() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Reviewed-by: Claudiu Beznea <claudiu.beznea@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Wei authored
dev_consume_skb_irq() should be called when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Wei authored
dev_consume_skb_irq() should be called in intr_handler() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Wei authored
dev_consume_skb_irq() should be called in moxart_tx_finished() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Wei authored
dev_consume_skb_irq() should be called in mace_interrupt() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Wei authored
dev_consume_skb_irq() should be called when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Wei authored
dev_consume_skb_irq() should be called in emac_mac_tx_process() when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Yang Wei authored
dev_consume_skb_irq() should be called when skb xmit is done. It makes drop profiles (dropwatch, perf) more friendly. Signed-off-by: Yang Wei <yang.wei9@zte.com.cn> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Maxime Chevallier says: ==================== net: phy: Add 2.5G/5GBASET PHYs support The 802.3bz standard defines 2 modes based on the NBASET alliance work that allow the use of 2.5Gbps and 5Gbps speeds on Cat 5e, 6 and 7 cables. This series adds the necessary infrastructure to handle these modes with C45 PHYs. This series was originally part of a bigger one that has seen 2 iterations [1] [2] and added support for these modes on Marvell Alaska PHYs. Following some discussions with Heiner and Andrew [3], we decided to split out the generic parts so that we can work together on the following steps to get these modes fully working with Aquantia and Marvell PHYs. The first 3 patches rework some of the internal network PHY infrastructure to handle the new modes in a more generic way. The 4th patch adds all the C45 register definitions and accesses that follow the 802.3bz standard to support 2.5GBASET and 5GBASET. [1] : https://lore.kernel.org/netdev/20190118152352.26417-1-maxime.chevallier@bootlin.com/ [2] : https://lore.kernel.org/netdev/20190207094939.27369-1-maxime.chevallier@bootlin.com/ [3] : https://lore.kernel.org/netdev/81c340ea-54b0-1abf-94af-b8dc4ee83e3a@gmail.com/ ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
The 802.3bz specification, based on previous work by the NBASET alliance, defines the 2.5GBaseT and 5GBaseT link modes for ethernet traffic on cat5e, cat6 and cat7 cables. These modes integrate with the already defined C45 MDIO PMA/PMD register set that added 10G support, by defining some previously reserved bits, and adding a new register (2.5G/5G Extended abilities). This commit adds the required definitions in include/uapi/linux/mdio.h to support these modes, and detect when a link-partner advertises them. It also adds support for these modes in the generic C45 PHY infrastructure. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Reviewed-by: Heiner Kallweit <hkallweit1@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
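As an illustration of how such definitions end up being used, here is a hedged sketch of probing the new 2.5G/5G extended-abilities register and filling in the supported link modes. The EXAMPLE_* register and bit names are assumptions standing in for the actual mdio.h constants, while phy_read_mmd(), MDIO_MMD_PMAPMD, linkmode_set_bit() and the ETHTOOL_LINK_MODE_*_BIT values are existing kernel APIs.

```c
static int example_read_2_5g_5g_abilities(struct phy_device *phydev)
{
	/* EXAMPLE_PMA_NG_EXTABLE stands in for the 2.5G/5G extended
	 * abilities register added by this change. */
	int val = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, EXAMPLE_PMA_NG_EXTABLE);

	if (val < 0)
		return val;

	if (val & EXAMPLE_NG_EXTABLE_2_5GBT)
		linkmode_set_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT,
				 phydev->supported);
	if (val & EXAMPLE_NG_EXTABLE_5GBT)
		linkmode_set_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
				 phydev->supported);
	return 0;
}
```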
-
Maxime Chevallier authored
The Marvell 10G PHY driver has a generic way of initializing the supported link modes by reading the PHY's C45 PMA abilities. This can be made generic, since these registers are part of the 802.3 specifications. This commit extracts the config_init link_mode initialization code from marvell10g and uses it to introduce the genphy_c45_pma_read_abilities function. Only PMA modes are read; it's still up to the caller to set the Pause parameters. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
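A short sketch of how a C45 PHY driver's config_init might use the new helper, under the assumption stated in the commit that Pause bits are left to the caller (the function name example_config_init is illustrative):

```c
static int example_config_init(struct phy_device *phydev)
{
	/* Fill phydev->supported from the standard C45 PMA registers. */
	int ret = genphy_c45_pma_read_abilities(phydev);

	if (ret)
		return ret;

	/* The PMA registers say nothing about Pause, so the driver
	 * declares those capabilities itself. */
	linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT, phydev->supported);
	linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT, phydev->supported);
	return 0;
}
```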
-
Maxime Chevallier authored
Since of_set_phy_supported was moved to phy-core.c, we can also move of_set_phy_eee_broken to the same location, so that we have all OF functions in the same place. This patch doesn't intend to introduce any change in behaviour. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Maxime Chevallier authored
When setting a PHY's max speed using either the max-speed DT property or ethtool, we should mask out all non-compatible modes according to the settings table, instead of just the 10/100BASET modes. Signed-off-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Suggested-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
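A hedged sketch of the idea, with an illustrative table type standing in for phylib's internal settings table: every supported mode whose speed exceeds max_speed is cleared, instead of special-casing only the 10/100BASE-T bits.

```c
struct example_phy_setting {	/* illustrative mirror of the settings table */
	u32 speed;		/* SPEED_10, SPEED_100, SPEED_2500, ... */
	u8  bit;		/* corresponding ETHTOOL_LINK_MODE_* bit */
};

static void example_mask_max_speed(unsigned long *supported,
				   const struct example_phy_setting *tbl,
				   size_t n, u32 max_speed)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (tbl[i].speed > max_speed)
			linkmode_clear_bit(tbl[i].bit, supported);
}
```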
-
- 13 Feb, 2019 3 commits
-
-
David S. Miller authored
Florian Fainelli says: ==================== Remove unused variables This removes unused variables from mlxsw and ethsw after the recent removal of SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS; build scripts are now fixed to take care of those warnings :). ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
After 1b8b589d ("staging: fsl-dpaa2: ethsw: Remove getting PORT_BRIDGE_FLAGS") we are not accessing any driver private data anymore; remove port_priv, which is unused. Fixes: 1b8b589d ("staging: fsl-dpaa2: ethsw: Remove getting PORT_BRIDGE_FLAGS") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Florian Fainelli authored
After 1ecb1957 ("mlxsw: spectrum_switchdev: Remove getting PORT_BRIDGE_FLAGS") we are not accessing any driver private data structure, so the mlxsw_sp_port and mlxsw_sp variables are unused. Fixes: 1ecb1957 ("mlxsw: spectrum_switchdev: Remove getting PORT_BRIDGE_FLAGS") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Feb, 2019 17 commits
-
-
Florian Fainelli authored
After 610d2b60 ("rocker: Remove getting PORT_BRIDGE_FLAGS") we no longer have a port_attr_bridge_flags_get member in the rocker_world_ops structure; fix that. Fixes: 610d2b60 ("rocker: Remove getting PORT_BRIDGE_FLAGS") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Vlad Buslov says:

====================
Refactor classifier API to work with chain/classifiers without rtnl lock

Currently, all netlink protocol handlers for updating rules, actions and qdiscs are protected with a single global rtnl lock, which removes any possibility for parallelism. This patch set is a third step to remove the rtnl lock dependency from the TC rules update path. Recently, the new rtnl registration flag RTNL_FLAG_DOIT_UNLOCKED was added. Handlers registered with this flag are called without RTNL taken. The end goal is to have the rule update handlers (RTM_NEWTFILTER, RTM_DELTFILTER, etc.) registered with the UNLOCKED flag to allow parallel execution. However, there is no intention to completely remove or split the rtnl lock itself.

This patch set addresses specific problems in the implementation of the classifiers API that prevent its control path from being executed concurrently, and completes the refactoring of cls API rules update handlers by removing the rtnl lock dependency from code that handles chains and classifiers. Rules update handlers are registered with the RTNL_FLAG_DOIT_UNLOCKED flag.

This patch set substitutes the global rtnl lock dependency on the rules update path in cls API by extending its data structures with the following locks:

- tcf_block with a 'lock' mutex. It is used to protect block state and life-time management fields of chains on the block (chain->refcnt, chain->action_refcnt, chain->explicitly_created, etc.).
- tcf_chain with a 'filter_chain_lock' mutex that is used to protect the list of classifier instances attached to the chain. chain0->filter_chain_lock serializes calls to head change callbacks and allows them to rely on filter_chain_lock for serialization instead of rtnl lock.
- tcf_proto with a 'lock' spinlock that is intended to be used to synchronize access to classifiers that support unlocked execution.

Classifiers are extended with reference counting to accommodate parallel access by the unlocked cls API. The classifier ops structure is extended with an additional 'put' function to allow reference counting of filters and is intended to be used by classifiers that implement the rtnl-unlocked API. Users of classifiers and individual filter instances are modified to always hold a reference while working with them.

Classifiers that support unlocked execution still need to know the status of rtnl lock, so their API is extended with an additional 'rtnl_held' argument that is used to indicate that the caller holds rtnl lock. Cls API propagates rtnl lock status across its helper functions and passes it to the classifier.

Changes from V3 to V4:
- Patch 1:
  - Extract code that manages the chain 'explicitly_created' flag into a standalone patch.
- Patch 2 - new.

Changes from V2 to V3:
- Change block->lock and chain->filter_chain_lock type to mutex. This removes the need for the async miniqp refactoring and allows calling sleeping functions while holding the block->lock and chain->filter_chain_lock locks.
- Previous patch 1 - async miniqp is no longer needed, remove the patch.
- Patch 1:
  - Change block->lock type to mutex.
  - Implement tcf_block_destroy() helper function that destroys the block->lock mutex before deallocating the block.
  - Revert the GFP_KERNEL->GFP_ATOMIC memory allocation flags of tcf_chain, which is no longer needed after the block->lock type change.
- Patch 6:
  - Change chain->filter_chain_lock type to mutex.
  - Assume chain0->filter_chain_lock synchronization instead of rtnl lock in the mini_qdisc_pair_swap() function that is called from the head change callback of the ingress Qdisc. With the filter_chain_lock type changed to mutex it is now possible to call sleeping functions while holding it, so it is now used instead of the async implementation from previous versions of this patch set.
- Patch 7:
  - Add a local tp_next var to tcf_chain_flush() and use it to store the tp->next pointer dereferenced with rcu_dereference_protected() to satisfy the kbuild test robot.
  - Reset the tp pointer to NULL at the beginning of tc_new_tfilter() to prevent its uninitialized usage in error handling code. This code was already implemented in patch 10, but must be in patch 8 to preserve code bisectability.
  - Put the parent chain in tcf_proto_destroy(). In the previous version this code was implemented in patch 1, which was removed in V3.

Changes from V1 to V2:
- Patch 1:
  - Use smp_store_release() instead of xchg() for setting miniqp->tp_head.
  - Move Qdisc deallocation to the tc_proto_wq ordered workqueue that is used to destroy tcf proto instances. This is necessary to ensure that the Qdisc is destroyed after all instances of chain/proto that it contains, in order to prevent a use-after-free error in tc_chain_notify_delete().
  - Cache the parent net device ifindex in the block to prevent use-after-free of the dev queue in tc_chain_notify_delete().
- Patch 2:
  - Use lockdep_assert_held() instead of spin_is_locked() for assertion.
  - Use the 'free_block' argument in tcf_chain_destroy() instead of checking the block's reference count and chain_list a second time.
- Patch 7:
  - Refactor tcf_chain0_head_change_cb_add() to not take block->lock and chain0->filter_chain_lock in correct order.
- Patch 10:
  - Always set the 'tp_created' flag when creating a tp to prevent releasing the chain twice when a tp with the same priority was inserted concurrently.
- Patch 11:
  - Add an additional check to prevent creation of a new proto instance when the parent chain is being flushed, to reduce CPU usage.
  - Don't call tcf_chain_delete_empty() if tp insertion failed.
- Patch 16 - new.
- Patch 17:
  - Refactor to only take rtnl lock once (at the beginning of rule update handlers).
  - Always release the rtnl mutex in the same function that locked it. Remove unlock code from tcf_block_release().
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
Register netlink protocol handlers for message types RTM_NEWTFILTER, RTM_DELTFILTER, RTM_GETTFILTER as unlocked. Set the rtnl_held variable that tracks rtnl mutex state to be false by default. Introduce tcf_proto_is_unlocked() helper that is used to check tcf_proto_ops->flags to determine if ops can be called without taking rtnl lock. Manually look up Qdisc, class and block in rule update handlers. Verify that both Qdisc ops and proto ops are unlocked before using any of their callbacks, and obtain rtnl lock otherwise. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
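For reference, the registration described above likely looks like the sketch below; the handler names are taken from the commits in this series and the exact call sites are an assumption, while rtnl_register() and RTNL_FLAG_DOIT_UNLOCKED are existing APIs. Message types registered with RTNL_FLAG_DOIT_UNLOCKED get their doit callbacks invoked without the rtnl mutex held.

```c
static int __init example_tc_filter_init(void)
{
	/* Sketch: register the tc filter message types as rtnl-unlocked. */
	rtnl_register(PF_UNSPEC, RTM_NEWTFILTER, tc_new_tfilter, NULL,
		      RTNL_FLAG_DOIT_UNLOCKED);
	rtnl_register(PF_UNSPEC, RTM_DELTFILTER, tc_del_tfilter, NULL,
		      RTNL_FLAG_DOIT_UNLOCKED);
	rtnl_register(PF_UNSPEC, RTM_GETTFILTER, tc_get_tfilter, tc_dump_tfilter,
		      RTNL_FLAG_DOIT_UNLOCKED);
	return 0;
}
```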
-
Vlad Buslov authored
Refactor tcf_block_find() code into three standalone functions: - __tcf_qdisc_find() to lookup Qdisc and increment its reference counter. - __tcf_qdisc_cl_find() to lookup class. - __tcf_block_find() to lookup block and increment its reference counter. This change is necessary to allow netlink tc rule update handlers to call these functions directly in order to conditionally take rtnl lock according to Qdisc class ops flags before calling any of class ops functions. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
Extend Qdisc_class_ops with flags. Create enum to hold possible class ops flag values. Add first class ops flags value QDISC_CLASS_OPS_DOIT_UNLOCKED to indicate that class ops functions can be called without taking rtnl lock. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
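A minimal sketch of the extension as described; the enum and the new flags field follow the commit text, and the helper around them is an illustrative assumption.

```c
enum qdisc_class_ops_flags {
	QDISC_CLASS_OPS_DOIT_UNLOCKED = 1,
};

/* cls API can test the new flags field in struct Qdisc_class_ops before
 * deciding whether the rtnl mutex must be taken for this Qdisc's class ops. */
static bool example_class_ops_unlocked(const struct Qdisc_class_ops *cops)
{
	return cops->flags & QDISC_CLASS_OPS_DOIT_UNLOCKED;
}
```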
-
Vlad Buslov authored
Add 'rtnl_held' flag to tcf proto change, delete, destroy, dump, walk functions to track rtnl lock status. Extend users of these functions in cls API to propagate rtnl lock status to them. This allows classifiers to obtain rtnl lock when necessary and to pass rtnl lock status to extensions and driver offload callbacks. Add flags field to tcf proto ops. Add flag value to indicate that classifier doesn't require rtnl lock. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
Add optional tp->ops->put() API to be implemented for filter reference counting. This new function is called by cls API to release filter reference for filters returned by tp->ops->change() or tp->ops->get() functions. Implement tfilter_put() helper to call tp->ops->put() only for classifiers that implement it. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
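A sketch of the helper described above, assuming the new optional put callback in struct tcf_proto_ops; classifiers that don't implement it are simply skipped.

```c
static void tfilter_put(struct tcf_proto *tp, void *fh)
{
	/* Release the filter reference taken by ->change()/->get(),
	 * but only for classifiers that opted into reference counting. */
	if (tp->ops->put && fh)
		tp->ops->put(tp, fh);
}
```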
-
Vlad Buslov authored
Actions API is already updated to not rely on rtnl lock for synchronization. However, it needs to be provided with the rtnl status when called from classifiers API in order to be able to correctly release the lock when loading a kernel module. Extend the extension validation function with a 'rtnl_held' flag which is passed to actions API. Add new 'rtnl_held' parameter to tcf_exts_validate() in cls API. No classifier is currently updated to support unlocked execution, so pass hardcoded 'true' flag parameter value. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
Extend tcf_chain with 'flushing' flag. Use the flag to prevent insertion of new classifier instances when chain flushing is in progress in order to prevent resource leak when tcf_proto is created by unlocked users concurrently. Return EAGAIN error from tcf_chain_tp_insert_unique() to restart tc_new_tfilter() and lookup the chain/proto again. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
Implement unique insertion function to atomically attach tcf_proto to chain after verifying that no other tcf proto with the specified priority exists. Implement delete function that verifies that tp is actually empty before deleting it. Use these functions to refactor cls API to account for concurrent tp and rule update instead of relying on rtnl lock. Add new 'deleting' flag to tcf proto. Use it to restart search when iterating over tp's on chain to prevent accessing a potentially invalid tp->next pointer. Extend tcf proto with a spinlock that is intended to be used to protect its data from concurrent modification instead of relying on rtnl mutex. Use it to protect the 'deleting' flag. Add lockdep macros to validate that the lock is held when accessing protected fields. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
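A hedged sketch of how the new spinlock can guard the 'deleting' flag; the field names follow the commit text, and the helper names are illustrative.

```c
static void example_tcf_proto_mark_delete(struct tcf_proto *tp)
{
	spin_lock(&tp->lock);
	tp->deleting = true;	/* concurrent walkers restart their search */
	spin_unlock(&tp->lock);
}

static bool example_tcf_proto_is_deleting(struct tcf_proto *tp)
{
	bool deleting;

	spin_lock(&tp->lock);
	deleting = tp->deleting;
	spin_unlock(&tp->lock);

	return deleting;
}
```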
-
Vlad Buslov authored
All users of chain->filter_chain rely on rtnl lock and assume that no new classifier instances are added when traversing the list. Use tcf_get_next_proto() to traverse the filters list without relying on rtnl mutex. This function iterates over classifiers by taking a reference to the current iterator classifier only and doesn't assume external synchronization of the filters list. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
In order to remove dependency on rtnl lock and allow concurrent tcf_proto modification, extend tcf_proto with reference counter. Implement helper get/put functions for tcf proto and use them to modify cls API to always take reference to tcf_proto while using it. Only release reference to parent chain after releasing last reference to tp. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
Extend tcf_chain with new filter_chain_lock mutex. Always lock the chain when accessing the filter_chain list, instead of relying on rtnl lock. Dereference filter_chain with the tcf_chain_dereference() lockdep macro to verify that all users of filter_chain have the lock taken. Rearrange tp insert/remove code in tc_new_tfilter/tc_del_tfilter to execute all necessary code while holding the chain lock in order to prevent invalidation of the chain_info structure by a potential concurrent change. This also serializes calls to tcf_chain0_head_change(), which allows head change callbacks to rely on filter_chain_lock for synchronization instead of rtnl mutex. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
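A sketch of the kind of lockdep-checked dereference macro described here, assuming the new chain->filter_chain_lock mutex; the exact macro body is an assumption.

```c
/* Valid only while filter_chain_lock is held; lockdep complains otherwise. */
#define tcf_chain_dereference(p, chain)					\
	rcu_dereference_protected(p,					\
		lockdep_is_held(&(chain)->filter_chain_lock))
```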
-
Vlad Buslov authored
When cls API is called without protection of rtnl lock, parallel modification of the chain is possible, which means that the chain template can be changed concurrently in certain circumstances. For example, when a chain is 'deleted' by the new user-space chain API, the chain might continue to be used if it is referenced by actions, and can be 're-created' again by the user. In such a case the same chain structure is reused and its template is changed. To protect from the described scenario, cache the chain template while holding the block lock. Introduce a standalone tc_chain_notify_delete() function that works with cached template values, instead of chains themselves. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
All users of block->chain_list rely on rtnl lock and assume that no new chains are added when traversing the list. Use tcf_get_next_chain() to traverse chain list without relying on rtnl mutex. This function iterates over chains by taking reference to current iterator chain only and doesn't assume external synchronization of chain list. Don't take reference to all chains in block when flushing and use tcf_get_next_chain() to safely iterate over chain list instead. Remove tcf_block_put_all_chains() that is no longer used. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
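The iteration pattern the helper enables might look like the sketch below (the signature is assumed from the commit text); each step only holds a reference to the current chain, so the block's chain list can be walked without rtnl.

```c
static void example_for_each_chain(struct tcf_block *block)
{
	struct tcf_chain *chain;

	/* tcf_get_next_chain() takes a reference on the returned chain and
	 * releases the reference on the one passed in. */
	for (chain = tcf_get_next_chain(block, NULL);
	     chain;
	     chain = tcf_get_next_chain(block, chain)) {
		/* ... operate on chain without holding rtnl ... */
	}
}
```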
-
Vlad Buslov authored
In order to remove dependency on rtnl lock, use block->lock to protect chain0 struct from concurrent modification. Rearrange code in chain0 callback add and del functions to only access chain0 when block->lock is held. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Vlad Buslov authored
In order to remove dependency on rtnl lock, modify chain API to use block->lock to protect chain from concurrent modification. Rearrange tc_ctl_chain() code to call tcf_chain_hold() while holding block->lock to prevent concurrent chain removal. Signed-off-by: Vlad Buslov <vladbu@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-