- 30 Jan, 2021 11 commits
-
-
Yevgeny Kliteynik authored
Allow sw_owner_v2 based on sw_format_version. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Until now, the code assumed that only the reduced size of the STE needs to be copied, because the rest is the mask part, which shouldn't be changed. This is not true for all HW types (such as STEv1). Take all 64B from the new STE and write them into the replaced STE's place. This change makes it easier to handle all STE HW types, because we have all the data that is about to be written to HW. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
The STEv0 and STEv1 HW formats are different; each has a different field order: STEv0: CTRL 32B, TAG 16B, BITMASK 16B; STEv1: CTRL 32B, BITMASK 16B, TAG 16B. To make this transparent to upper layers, we introduce a new ste_ctx function that formats the STE prior to writing it. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
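As a rough illustration of what such a formatting callback does, assuming upper layers keep building STEs in the STEv0-like order (the sizes come from the layout above; the function and constant names are made up, not the mlx5 driver's actual symbols):

#include <stdint.h>
#include <string.h>

#define STE_CTRL_SZ    32
#define STE_TAG_SZ     16
#define STE_BITMASK_SZ 16
#define STE_SZ         (STE_CTRL_SZ + STE_TAG_SZ + STE_BITMASK_SZ) /* 64B */

/* STEv0 is already in HW order (CTRL | TAG | BITMASK): plain copy. */
static void ste_v0_prepare_for_hw(uint8_t *hw_ste, const uint8_t *ste)
{
	memcpy(hw_ste, ste, STE_SZ);
}

/* STEv1 HW order is CTRL | BITMASK | TAG: swap the two 16B blocks. */
static void ste_v1_prepare_for_hw(uint8_t *hw_ste, const uint8_t *ste)
{
	memcpy(hw_ste, ste, STE_CTRL_SZ);
	memcpy(hw_ste + STE_CTRL_SZ,
	       ste + STE_CTRL_SZ + STE_TAG_SZ, STE_BITMASK_SZ);
	memcpy(hw_ste + STE_CTRL_SZ + STE_BITMASK_SZ,
	       ste + STE_CTRL_SZ, STE_TAG_SZ);
}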
-
Yevgeny Kliteynik authored
In these cases we need to update only the ctrl area of the STE. So it is better to write only the control 32B and avoid copying the unneeded reduced 48B (control 32B + tag 16B). Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add HW-specific modify header fields and logic to the STEv1 file. Since the STEv0 and STEv1 modify-action values are different, each version has its own implementation. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add HW-specific action apply logic to STEv1. Since the STEv0 and STEv1 action formats are different, each version has its own implementation. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add HW-specific setters and getters to the STEv1 file. Since the STEv0 and STEv1 formats are different, each version needs to implement its own setters and getters. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Some flex parser protocols are supported natively as part of STEv1. The check for supported protocols was modified to allow this. Signed-off-by: Alex Vesker <valex@mellanox.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add STEv1 match logic to a new file. This file will be used for HW specific STEv1. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Add mlx5_ifc_dr_ste_v1.h - a new header with HW specific STE structs for version 1. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
-
Yevgeny Kliteynik authored
Fix 32-bit variable shift wrapping in dr_ste_v0_get_miss_addr. Fixes: 6b93b400 ("net/mlx5: DR, Move STEv0 setters and getters") Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com> Reviewed-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
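The bug class being fixed is the usual one where a 32-bit value is shifted before being widened to 64 bits, so the high bits are silently dropped. A generic before/after illustration (the field split and shift amounts are assumptions based on the function name, not the driver's exact code):

#include <stdint.h>

/* Broken: addr_39_32 << 26 is evaluated in 32-bit arithmetic, so any
 * result bits above bit 31 are lost before the widening to 64 bits. */
static uint64_t get_miss_addr_buggy(uint32_t addr_31_6, uint32_t addr_39_32)
{
	uint32_t index = addr_31_6 | (addr_39_32 << 26);

	return (uint64_t)index << 6;
}

/* Fixed: widen to 64 bits before shifting so no address bits are dropped. */
static uint64_t get_miss_addr_fixed(uint32_t addr_31_6, uint32_t addr_39_32)
{
	uint64_t index = (uint64_t)addr_31_6 | ((uint64_t)addr_39_32 << 26);

	return index << 6;
}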
-
- 29 Jan, 2021 29 commits
-
-
Bjorn Helgaas authored
Fix misspellings of "physical". Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Link: https://lore.kernel.org/r/20210127181359.3008316-1-helgaas@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
dingsenjie authored
allocted -> allocated Signed-off-by: dingsenjie <dingsenjie@yulong.com> Link: https://lore.kernel.org/r/20210127022801.8028-1-dingsenjie@163.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Petr Machata says: ==================== nexthop: Preparations for resilient next-hop groups

At this moment, there is only one type of next-hop group: an mpath group. Mpath groups implement the hash-threshold algorithm, described in RFC 2992 [1].

To select a next hop, the hash-threshold algorithm first assigns a range of hashes to each next hop in the group, and then selects the next hop by comparing the SKB hash with the individual ranges. When a next hop is removed from the group, the ranges are recomputed, which leads to reassignment of parts of the hash space from one next hop to another. RFC 2992 illustrates it thus:

             +-------+-------+-------+-------+-------+
             |   1   |   2   |   3   |   4   |   5   |
             +-------+-+-----+---+---+-----+-+-------+
             |    1    |    2    |    4    |    5    |
             +---------+---------+---------+---------+

             Before and after deletion of next hop 3
             under the hash-threshold algorithm.

Note how next hop 2 gave up part of the hash space in favor of next hop 1, and 4 in favor of 5. While there will usually be some overlap between the previous and the new distribution, some traffic flows change the next hop that they resolve to.

If a multipath group is used for load-balancing between multiple servers, this hash space reassignment causes an issue in that packets from a single flow suddenly end up arriving at a server that does not expect them, which may lead to a TCP reset. If a multipath group is used for load-balancing among available paths to the same server, the issue is that different latencies and reordering along the way cause the packets to arrive in the wrong order.

Resilient hashing is a technique to address the above problem. A resilient next-hop group has another layer of indirection between the group itself and its constituent next hops: a hash table. The selection algorithm uses a straightforward modulo operation to choose a hash bucket, then reads the next hop that this bucket contains, and forwards traffic there.

This indirection brings an important feature. In the hash-threshold algorithm, the range of hashes associated with a next hop must be continuous. With a hash table, the mapping between the hash table buckets and the individual next hops is arbitrary. Therefore, when a next hop is deleted, the buckets that held it are simply reassigned to other next hops:

      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |1|1|1|1|2|2|2|2|3|3|3|3|4|4|4|4|5|5|5|5|
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                        v v v v
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |1|1|1|1|2|2|2|2|1|2|4|5|4|4|4|4|5|5|5|5|
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

      Before and after deletion of next hop 3
      under the resilient hashing algorithm.

When weights of next hops in a group are altered, it may be possible to choose a subset of buckets that are currently not used for forwarding traffic, and use those to satisfy the new next-hop distribution demands, keeping the "busy" buckets intact. This way, established flows are ideally kept being forwarded to the same endpoints through the same paths as before the next-hop group change.

This patchset prepares the next-hop code for the eventual introduction of resilient hashing groups.

- Patches #1-#4 carry otherwise disjoint changes that just remove certain assumptions in the next-hop code.
- Patches #5-#6 extend the in-kernel next-hop notifiers to support more next-hop group types.
- Patches #7-#12 refactor RTNL message handlers. Resilient next-hop groups will introduce a new logical object, a hash table bucket. It turns out that handling bucket-related messages is similar to how next-hop messages are handled. These patches extract the commonalities into reusable components.

The plan is to contribute approximately the following patchsets:

1) Nexthop policy refactoring (already pushed)
2) Preparations for resilient next-hop groups (this patchset)
3) Implementation of resilient next-hop groups
4) Netdevsim offload plus a suite of selftests
5) Preparations for mlxsw offload of resilient next-hop groups
6) mlxsw offload including selftests

Interested parties can look at the current state of the code at [2] and [3].

[1] https://tools.ietf.org/html/rfc2992
[2] https://github.com/idosch/linux/commits/submit/res_integ_v1
[3] https://github.com/idosch/iproute2/commits/submit/res_v1
==================== Link: https://lore.kernel.org/r/cover.1611836479.git.petrm@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
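The difference between the two selection schemes can be sketched in a few lines of standalone C (illustrative only, not the kernel's nexthop code):

#include <stdint.h>

/* Hash-threshold (RFC 2992): each next hop owns a contiguous slice of the
 * 32-bit hash space, so removing a hop moves the slice boundaries around. */
static int hash_threshold_select(uint32_t hash, int num_nhs)
{
	/* scale the hash into [0, num_nhs) */
	return (int)(((uint64_t)hash * (uint64_t)num_nhs) >> 32);
}

/* Resilient: the hash picks a bucket by simple modulo, and the bucket holds
 * the next-hop index.  Editing one bucket leaves all other buckets intact. */
static int resilient_select(uint32_t hash, const int *buckets,
			    uint32_t num_buckets)
{
	return buckets[hash % num_buckets];
}

Deleting a next hop under the first scheme changes the result for many hashes at once; under the second, only the buckets that are explicitly rewritten resolve to a different next hop.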
-
Petr Machata authored
Validation of messages for get / del of a next hop is the same as the validation of messages for get of a resilient next-hop group bucket will be. The difference is that the policy for resilient next-hop group buckets is a superset of that used for next-hop get. It is therefore possible to reuse the code that validates the nhmsg fields, extracts the next-hop ID, and validates that. To that end, extract from nh_valid_get_del_req() a helper __nh_valid_get_del_req() that does just that. Make the nlh argument const so that the function can be called from the dump context, which only has a const nlh. Propagate the constness to nh_valid_get_del_req(). Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Petr Machata authored
In order to allow different handling for next-hop tree dumper and for bucket dumper, parameterize the next-hop tree walker with a callback. Add rtm_dump_nexthop_cb() with just the bits relevant for next-hop tree dumping. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Petr Machata authored
Extract from rtm_dump_nexthop() a helper to walk the next hop tree. A separate function for this will be reusable from the bucket dumper. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Petr Machata authored
The dump operations need to keep state from one invocation to another. A scratch area is dedicated for this purpose in the passed-in argument, cb, namely via two aliased arrays, struct netlink_callback.args and .ctx. Dumping of buckets will end up having to iterate over next hops as well, and it would be nice to be able to reuse the iteration logic with the NH dumper. The fact that the logic currently relies on a fixed index into the .args array, and that the indices would have to be coordinated between the two dumpers, makes this somewhat awkward. To make the access patterns clearer, introduce a helper struct with an NH index, and instead of using the .args array directly, use it through this structure. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
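The pattern is simply to overlay a small named struct on the opaque scratch area instead of indexing it by magic numbers. A userspace stand-in (names and the scratch layout here are illustrative; the kernel uses struct netlink_callback and BUILD_BUG_ON):

#include <stdint.h>

struct dump_scratch {			/* stand-in for netlink_callback's ctx */
	long args[6];
};

struct nh_dump_ctx {			/* named view of that scratch area */
	uint32_t idx;			/* next-hop ID to resume the dump at */
};

_Static_assert(sizeof(struct nh_dump_ctx) <= sizeof(struct dump_scratch),
	       "dump context must fit in the scratch area");

static struct nh_dump_ctx *nh_dump_ctx(struct dump_scratch *cb)
{
	return (struct nh_dump_ctx *)cb->args;
}

A bucket dumper can then define its own context struct that embeds this one, without the two dumpers having to agree on raw array indices.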
-
Petr Machata authored
Requests to dump nexthops have many attributes in common with those that requests to dump buckets of resilient NH groups will have. However, they have different policies. To allow reuse of this code, extract a policy-agnostic wrapper out of nh_valid_dump_req(), and convert this function into a thin wrapper around it. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Petr Machata authored
Requests to dump nexthops have many attributes in common with those that requests to dump buckets of resilient NH groups will have. In order to make reuse of this code simpler, convert the code to use a single structure with filtering configuration instead of passing around the parameters one by one. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Petr Machata authored
After there are several next-hop group types, initialization and finalization of notifier type needs to reflect the actual type. Transform nh_notifier_grp_info_init() and _fini() to make extending them easier. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Ido Schimmel authored
Currently there are only two types of in-kernel nexthop notification. The two are distinguished by the 'is_grp' boolean field in 'struct nh_notifier_info'. As more notification types are introduced for more next-hop group types, a boolean is not an easily extensible interface. Instead, convert it to an enum. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
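Schematically (the enumerator names here are a guess at the shape, not necessarily the final kernel identifiers):

#include <stdbool.h>

/* Before: a bool can only ever distinguish two kinds of notification. */
struct nh_notifier_info_old {
	bool is_grp;
	/* ... payload ... */
};

/* After: adding a group type is just another enumerator. */
enum nh_notifier_info_type {
	NH_NOTIFIER_INFO_TYPE_SINGLE,
	NH_NOTIFIER_INFO_TYPE_GRP,
	/* future: resilient-group and bucket notifications go here */
};

struct nh_notifier_info_new {
	enum nh_notifier_info_type type;
	/* ... payload ... */
};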
-
Petr Machata authored
Most of the code that deals with nexthop groups relies on the fact that the group is of exactly one well-known type. Currently there is only one type, "mpath", but as more next-hop group types come, it becomes desirable to have a central place where the setting is validated. Introduce such place into nexthop_create_group(), such that the check is done before the code that relies on that invariant is invoked. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Petr Machata authored
The values that a next-hop group needs to keep track of depend on the group type. Introduce a union to separate fields specific to the mpath groups from fields specific to other group types. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
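In outline, the group struct gains a union keyed by the group type (a sketch with made-up field names, not the actual struct nh_group layout):

#include <stdbool.h>
#include <stdint.h>

struct nh_res_table;			/* future resilient-group state */

struct nh_group_sketch {
	uint16_t num_nh;
	bool     is_multipath;
	union {
		struct {
			/* hash-threshold bookkeeping, mpath groups only */
			uint32_t reserved;
		} mpath;
		struct {
			/* future: resilient groups keep their bucket table here */
			struct nh_res_table *res_table;
		} resilient;
	};
	/* ... per-next-hop entries follow ... */
};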
-
Petr Machata authored
The logic for selecting path depends on the next-hop group type. Adapt the nexthop_select_path() to dispatch according to the group type. Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
David Ahern authored
nexthop_free_mpath really should be nexthop_free_group. Rename it. Signed-off-by: David Ahern <dsahern@kernel.org> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Reviewed-by: David Ahern <dsahern@kernel.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Julian Wiedmann says: ==================== net/iucv: updates 2021-01-28 This reworks & simplifies the TX notification path in af_iucv, so that we can send out SG skbs over TRANS_HIPER sockets. Also remove a noisy WARN_ONCE() in the RX path. ==================== Link: https://lore.kernel.org/r/20210128114108.39409-1-jwi@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Julian Wiedmann authored
The TX path no longer falls apart when some of its SG skbs are later linearized by lower layers of the stack. So enable the use of SG skbs in iucv_sock_sendmsg() again. This effectively reverts commit dc5367bc ("net/af_iucv: don't use paged skbs for TX on HiperSockets"). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Julian Wiedmann authored
Stop maintaining the skb_send_q list for TRANS_HIPER sockets. Not only is it extra overhead, but keeping around a list of skb clones means that we later also have to match the ->sk_txnotify() calls against these clones and free them accordingly. The current matching logic (comparing the skbs' shinfo location) is frustratingly fragile, and breaks if the skb's head is mangled in any sort of way while passing from dev_queue_xmit() to the device's HW queue. Also adjust the interface for ->sk_txnotify(), to make clear that we don't actually care about any skb internals. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Julian Wiedmann authored
The TX code keeps track of all skbs that are in-flight but haven't actually been sent out yet. For native IUCV sockets that's not a huge deal, but with TRANS_HIPER sockets it would be much better if we didn't need to maintain a list of skb clones. Note that we actually only care about the _count_ of skbs in this stage of the TX pipeline. So as prep work for removing the skb tracking on TRANS_HIPER sockets, keep track of the skb count in a separate variable and pair any list {enqueue, unlink} with a count {increment, decrement}. Then replace all occurrences where we currently look at the skb list's fill level. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
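The idea reads roughly like this (a userspace stand-in using C atomics; the af_iucv code uses its own socket fields and skb queue helpers):

#include <stdatomic.h>

struct tx_queue {
	/* ... skb list head omitted ... */
	atomic_int in_flight;	/* skbs queued but not yet acknowledged */
};

static void tx_enqueue(struct tx_queue *q /* , skb */)
{
	/* list enqueue goes here ... */
	atomic_fetch_add(&q->in_flight, 1);	/* ... paired with an increment */
}

static void tx_unlink(struct tx_queue *q /* , skb */)
{
	/* list unlink goes here ... */
	atomic_fetch_sub(&q->in_flight, 1);	/* ... paired with a decrement */
}

static int tx_pending(struct tx_queue *q)
{
	/* callers that only need the fill level read the counter, not the list */
	return atomic_load(&q->in_flight);
}

Once every caller that only needs the count reads the counter, the clone list itself can be dropped for TRANS_HIPER sockets in a follow-up patch.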
-
Julian Wiedmann authored
Whoever called iucv_sk(sk)->sk_txnotify() must already know that they're dealing with an af_iucv socket. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Alexander Egorenkov authored
syzbot reported the following finding:

AF_IUCV failed to receive skb, len=0
WARNING: CPU: 0 PID: 522 at net/iucv/af_iucv.c:2039 afiucv_hs_rcv+0x174/0x190 net/iucv/af_iucv.c:2039
CPU: 0 PID: 522 Comm: syz-executor091 Not tainted 5.10.0-rc1-syzkaller-07082-g55027a88ec9f #0
Hardware name: IBM 3906 M04 701 (KVM/Linux)
Call Trace:
 [<00000000b87ea538>] afiucv_hs_rcv+0x178/0x190 net/iucv/af_iucv.c:2039
([<00000000b87ea534>] afiucv_hs_rcv+0x174/0x190 net/iucv/af_iucv.c:2039)
 [<00000000b796533e>] __netif_receive_skb_one_core+0x13e/0x188 net/core/dev.c:5315
 [<00000000b79653ce>] __netif_receive_skb+0x46/0x1c0 net/core/dev.c:5429
 [<00000000b79655fe>] netif_receive_skb_internal+0xb6/0x220 net/core/dev.c:5534
 [<00000000b796ac3a>] netif_receive_skb+0x42/0x318 net/core/dev.c:5593
 [<00000000b6fd45f4>] tun_rx_batched.isra.0+0x6fc/0x860 drivers/net/tun.c:1485
 [<00000000b6fddc4e>] tun_get_user+0x1c26/0x27f0 drivers/net/tun.c:1939
 [<00000000b6fe0f00>] tun_chr_write_iter+0x158/0x248 drivers/net/tun.c:1968
 [<00000000b4f22bfa>] call_write_iter include/linux/fs.h:1887 [inline]
 [<00000000b4f22bfa>] new_sync_write+0x442/0x648 fs/read_write.c:518
 [<00000000b4f238fe>] vfs_write.part.0+0x36e/0x5d8 fs/read_write.c:605
 [<00000000b4f2984e>] vfs_write+0x10e/0x148 fs/read_write.c:615
 [<00000000b4f29d0e>] ksys_write+0x166/0x290 fs/read_write.c:658
 [<00000000b8dc4ab4>] system_call+0xe0/0x28c arch/s390/kernel/entry.S:415
Last Breaking-Event-Address:
 [<00000000b8dc64d4>] __s390_indirect_jump_r14+0x0/0xc

Malformed RX packets shouldn't generate any warnings because debugging info already flows to dropmon via the kfree_skb(). Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com> Reviewed-by: Julian Wiedmann <jwi@linux.ibm.com> Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Julian Wiedmann says: ==================== s390/qeth: updates 2021-01-28 Nothing special, mostly fine-tuning and follow-on cleanups for earlier fixes. ==================== Link: https://lore.kernel.org/r/20210128112551.18780-1-jwi@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Julian Wiedmann authored
When do_qdio() returns with an unexpected error, qeth_flush_buffers() kicks off a recovery action. In such a case there's no point in starting TX completion processing, the device gets torn down anyway. So take a closer look at do_qdio()'s return value, and skip the TX completion processing accordingly. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Julian Wiedmann authored
As part of the TX queue selection for af_iucv skbs, qeth_l3_get_cast_type_rcu() ends up calling qeth_get_ether_cast_type(). Which is rather fragile, since such skbs don't have a proper ETH header and we rely on it being zeroed out in the right places. Add a separate case for ETH_P_AF_IUCV instead that does the right thing. When later building the HW header for such skbs, don't hard-code the cast type but follow the same path as for other protocol types. Here the cast type should naturally come from the skb's queue mapping. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Julian Wiedmann authored
qeth_l3_hard_start_xmit() already determined the skb's proto. Avoid doing so a second time when it calls qeth_l3_get_cast_type(). Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Julian Wiedmann authored
Replace our home-grown helper with the more robust vlan_get_protocol(). This is pretty much a 1:1 replacement; we just need to pass around a proper ETH_P_* everywhere and convert the old value range. For readability, also convert the protocol checks in qeth_l3_hard_start_xmit() to a switch statement. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
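The dispatch then becomes a plain switch over the ETH_P_* value. A standalone sketch (the ETH_P_* values match include/uapi/linux/if_ether.h, but they are redefined here and the function itself is illustrative, not the qeth code):

#include <arpa/inet.h>
#include <stdint.h>

#define ETH_P_IP	0x0800
#define ETH_P_IPV6	0x86DD
#define ETH_P_AF_IUCV	0xFBFB

enum l3_proto { PROTO_IPV4, PROTO_IPV6, PROTO_IUCV, PROTO_OTHER };

static enum l3_proto classify(uint16_t proto_be)	/* e.g. from vlan_get_protocol() */
{
	switch (ntohs(proto_be)) {
	case ETH_P_IP:
		return PROTO_IPV4;
	case ETH_P_IPV6:
		return PROTO_IPV6;
	case ETH_P_AF_IUCV:
		return PROTO_IUCV;
	default:
		return PROTO_OTHER;
	}
}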
-
Julian Wiedmann authored
We have two usage patterns: 1. get & ->setup() a new discipline, or 2. ->remove() & put the currently loaded one. Add corresponding helpers that hide the internals & error handling. Signed-off-by: Julian Wiedmann <jwi@linux.ibm.com> Reviewed-by: Alexandra Winter <wintera@linux.ibm.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-
Jakub Kicinski authored
Alex Elder says: ==================== net: ipa: hardware pipeline cleanup fixes There is a procedure currently referred to as a "tag process" that is performed to clear the IPA hardware pipeline--either at the time of a modem crash, or when suspending modem GSI channels. One thing done in this procedure is issuing a command that sends a data packet originating from the AP->command TX endpoint, destined for the AP<-LAN RX (default) endpoint. And although we currently wait for the send to complete, we do *not* wait for the packet to be received. But the pipeline can't be assumed clear until we have actually received this packet. This series addresses this by detecting when the pipeline-clearing packet has been received, and using a completion to allow a waiter to know when that has happened. This uses the IPA status capability (which sends an extra status buffer for certain packets). It also uses the ability to supply a "tag" with a packet, which will be delivered with the packet's status buffer. We tag the data packet that's sent to clear the pipeline, and use the receipt of a status buffer associated with a tagged packet to determine when that packet has arrived. "Tag status" just describes one aspect of this procedure, so some symbols are renamed to be more like "pipeline clear" so they better describe the larger purpose. Finally, two functions used in this code don't use their arguments, so those arguments are removed. ==================== Link: https://lore.kernel.org/r/20210126185703.29087-1-elder@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
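The waiting scheme the series builds towards has the familiar shape below (a userspace sketch with a condition variable standing in for the kernel's struct completion; all names are illustrative, not the IPA driver's):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static bool pipeline_clear;

/* Called from the status path when a status buffer carrying the
 * pipeline-clear tag is seen. */
void status_tagged_packet_seen(void)
{
	pthread_mutex_lock(&lock);
	pipeline_clear = true;
	pthread_cond_signal(&done);
	pthread_mutex_unlock(&lock);
}

/* Called after sending the tagged packet: block until it has actually
 * been received, i.e. the pipeline is known to be empty. */
void wait_pipeline_clear(void)
{
	pthread_mutex_lock(&lock);
	while (!pipeline_clear)
		pthread_cond_wait(&done, &lock);
	pthread_mutex_unlock(&lock);
}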
-
Alex Elder authored
The only time we transfer data (rather than issuing a command) out of the AP->command TX endpoint is when we're clearing the hardware pipeline. All that's needed is a "small" data buffer, and its contents aren't even important. For convenience, we just transfer a command structure in this case (it's already mapped for DMA). The TRE is added to a transaction using ipa_cmd_ip_tag_status_add(), but we ignore the size value provided to that function. So just get rid of the size argument. Signed-off-by: Alex Elder <elder@linaro.org> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
-