Merge branch 'Optimize performance of update hash-map when free is zero'
Feng Zhou says:

====================
From: Feng Zhou <zhoufeng.zf@bytedance.com>

We hit a bad case on a big system with 96 CPUs where alloc_htab_elem()
could take up to 1ms. The reason is that once the prealloc hashtab has no
free elements, every update attempt still grabs the spinlocks of all CPUs.
With multiple concurrent updaters, the lock contention becomes severe.

0001: Use head->first to check whether the free list is empty before
taking the lock.
0002: Add a benchmark to reproduce this worst case.

Changelog:
v5->v6: Addressed comments from Alexei Starovoitov.
- Adjust the commit log.
Details: https://lore.kernel.org/all/20220608021050.47279-1-zhoufeng.zf@bytedance.com/

v4->v5: Addressed comments from Alexei Starovoitov.
- Use head->first.
- Use cpu+max_entries.
Details: https://lore.kernel.org/bpf/20220601084149.13097-1-zhoufeng.zf@bytedance.com/

v3->v4: Addressed comments from Daniel Borkmann.
- Use READ_ONCE/WRITE_ONCE.
Details: https://lore.kernel.org/all/20220530091340.53443-1-zhoufeng.zf@bytedance.com/

v2->v3: Addressed comments from Alexei Starovoitov and Andrii Nakryiko.
- Adjust the way the benchmark is tested.
- Adjust the code format.
Details: https://lore.kernel.org/all/20220524075306.32306-1-zhoufeng.zf@bytedance.com/T/

v1->v2: Addressed comments from Alexei Starovoitov.
- Add a benchmark to reproduce the issue.
- Adjust the code format to avoid adding indentation.
Details: https://lore.kernel.org/all/877ac441-045b-1844-6938-fcaee5eee7f2@bytedance.com/T/
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
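The core of patch 0001 is to skip a CPU's freelist lock entirely when that list's head pointer is already NULL, so an updater that finds every per-CPU list empty no longer serializes on all 96 locks. The following is a minimal userspace C sketch of that idea only; it is not the kernel patch, and the names (freelist_head, freelist_try_pop, the pthread spinlock) are stand-ins for the kernel's per-CPU freelist structures and raw spinlocks:

```c
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel's per-CPU freelist types. */
struct freelist_node {
	struct freelist_node *next;
};

struct freelist_head {
	struct freelist_node *first;
	pthread_spinlock_t lock;
};

/* Pop one node, but first peek at 'first' without the lock: if the list
 * is already empty, return immediately and never touch the lock.  This
 * mirrors the series' idea of checking head->first before locking. */
static struct freelist_node *freelist_try_pop(struct freelist_head *head)
{
	struct freelist_node *node = NULL;

	/* Lockless emptiness check.  A stale non-NULL read is re-validated
	 * under the lock below; a NULL read just means "try another CPU". */
	if (__atomic_load_n(&head->first, __ATOMIC_RELAXED) == NULL)
		return NULL;

	pthread_spin_lock(&head->lock);
	node = head->first;
	if (node)
		head->first = node->next;
	pthread_spin_unlock(&head->lock);

	return node;
}

int main(void)
{
	struct freelist_head head = { .first = NULL };
	struct freelist_node a = { .next = NULL };

	pthread_spin_init(&head.lock, PTHREAD_PROCESS_PRIVATE);

	/* Empty list: the fast path returns NULL without taking the lock. */
	printf("pop on empty list: %p\n", (void *)freelist_try_pop(&head));

	head.first = &a;
	printf("pop with one node: %p\n", (void *)freelist_try_pop(&head));

	pthread_spin_destroy(&head.lock);
	return 0;
}
```

In the actual series the unlocked check is done with READ_ONCE() on head->first (per the v3->v4 and v4->v5 changelog entries above) rather than a GCC atomic builtin, but the effect is the same: when preallocated elements are exhausted, updates fail fast instead of contending on every CPU's lock.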