Merge branch 'bpf-xskmap-perf-improvements'
Björn Töpel says:

====================
This set consists of three patches from Maciej and myself which
optimize the XSKMAP lookups. In the first patch, the sockets are
moved to be stored at the tail of the struct xsk_map. In the second
patch, Maciej implements map_gen_lookup() for XSKMAP. The third
patch, introduced in this revision, moves various XSKMAP functions
to permit the compiler to do more aggressive inlining.

Based on the XDP program from tools/lib/bpf/xsk.c where
bpf_map_lookup_elem() is explicitly called, this work yields a 5%
improvement for xdpsock's rxdrop scenario. The last patch yields a 2%
improvement.

Jonathan's Acked-by: for patches 1 and 2 was carried on. Note that
the overflow checks are done in the bpf_map_area_alloc() and
bpf_map_charge_init() functions, which was fixed in commit ff1c08e1
("bpf: Change size to u64 for bpf_map_{area_alloc, charge_init}()").

  [1] https://patchwork.ozlabs.org/patch/1186170/

v1->v2: * Change size/cost to size_t and use {struct, array}_size
          where appropriate. (Jakub)
v2->v3: * Proper commit message for patch 2.
v3->v4: * Change size_t to u64 to handle 32-bit overflows. (Jakub)
        * Introduced patch 3.
v4->v5: * Use BPF_SIZEOF size, instead of BPF_DW, for correct
          pointer-sized loads. (Daniel)
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
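For context, here is a rough sketch of what a map_gen_lookup() callback
for XSKMAP can look like, assuming the sockets sit in a flexible array
member (called xsk_map below) at the tail of struct xsk_map as described
for patch 1. The field name and exact instruction sequence are
illustrative assumptions, not a quote of the merged patches:

  #include <linux/bpf.h>
  #include <linux/filter.h>   /* BPF_*() insn macros, BPF_SIZEOF() */
  #include <linux/log2.h>     /* ilog2() */

  struct xdp_sock;

  /* Assumed layout (patch 1): socket pointers stored at the tail. */
  struct xsk_map {
  	struct bpf_map map;
  	/* ... */
  	struct xdp_sock *xsk_map[];
  };

  /* Sketch: emit inline BPF instructions so the verifier can replace a
   * bpf_map_lookup_elem() call on an XSKMAP with a direct array access.
   * Calling convention at the call site: R1 = map pointer, R2 = key
   * pointer, R0 = returned value.
   */
  static u32 xsk_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
  {
  	const int ret = BPF_REG_0, mp = BPF_REG_1, index = BPF_REG_2;
  	struct bpf_insn *insn = insn_buf;

  	/* R0 = *(u32 *)key; return NULL if index >= max_entries */
  	*insn++ = BPF_LDX_MEM(BPF_W, ret, index, 0);
  	*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 5);
  	/* R0 = &map->xsk_map[index] */
  	*insn++ = BPF_ALU64_IMM(BPF_LSH, ret, ilog2(sizeof(struct xdp_sock *)));
  	*insn++ = BPF_ALU64_IMM(BPF_ADD, mp, offsetof(struct xsk_map, xsk_map));
  	*insn++ = BPF_ALU64_REG(BPF_ADD, ret, mp);
  	/* R0 = map->xsk_map[index]; BPF_SIZEOF() yields a pointer-sized
  	 * load (the v4->v5 change) instead of a hard-coded BPF_DW.
  	 */
  	*insn++ = BPF_LDX_MEM(BPF_SIZEOF(struct xdp_sock *), ret, ret, 0);
  	*insn++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
  	*insn++ = BPF_MOV64_IMM(ret, 0);
  	return insn - insn_buf;
  }

Keeping the array at the tail of the struct is what lets the generated
instructions reach the sockets with a single constant offset from the
map pointer, which is also what makes the aggressive inlining in patch 3
worthwhile.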