Commit c2e42ddf authored by Hou Tao, committed by Alexei Starovoitov

bpf, cpumask: Clean up bpf_cpu_map_entry directly in cpu_map_free

After synchronize_rcu(), both the detached XDP program and
xdp_do_flush() are completed, and the only user of bpf_cpu_map_entry
will be cpu_map_kthread_run(), so instead of calling
__cpu_map_entry_replace() to stop the kthread and clean up the entry
after an RCU grace period, do these things directly.
Signed-off-by: Hou Tao <houtao1@huawei.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/r/20230816045959.358059-3-houtao@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
parent 8f8500a2
kernel/bpf/cpumap.c
@@ -566,16 +566,15 @@ static void cpu_map_free(struct bpf_map *map)
 	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
 	 * so the bpf programs (can be more than one that used this map) were
 	 * disconnected from events. Wait for outstanding critical sections in
-	 * these programs to complete. The rcu critical section only guarantees
-	 * no further "XDP/bpf-side" reads against bpf_cpu_map->cpu_map.
-	 * It does __not__ ensure pending flush operations (if any) are
-	 * complete.
+	 * these programs to complete. synchronize_rcu() below not only
+	 * guarantees no further "XDP/bpf-side" reads against
+	 * bpf_cpu_map->cpu_map, but also ensure pending flush operations
+	 * (if any) are completed.
 	 */
 	synchronize_rcu();
 
-	/* For cpu_map the remote CPUs can still be using the entries
-	 * (struct bpf_cpu_map_entry).
-	 */
+	/* The only possible user of bpf_cpu_map_entry is
+	 * cpu_map_kthread_run().
+	 */
 	for (i = 0; i < cmap->map.max_entries; i++) {
 		struct bpf_cpu_map_entry *rcpu;
 
@@ -584,8 +583,8 @@ static void cpu_map_free(struct bpf_map *map)
 		if (!rcpu)
 			continue;
 
-		/* bq flush and cleanup happens after RCU grace-period */
-		__cpu_map_entry_replace(cmap, i, NULL); /* call_rcu */
+		/* Stop kthread and cleanup entry directly */
+		__cpu_map_entry_free(&rcpu->free_work.work);
 	}
 	bpf_map_area_free(cmap->cpu_map);
 	bpf_map_area_free(cmap);
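The ordering argument behind this change can be shown in miniature outside the kernel. The sketch below is a userspace model, not kernel code: a pthread stands in for the per-entry kthread run by cpu_map_kthread_run(), an atomic flag stands in for kthread_stop() signalling, and all names (struct entry, worker_fn, entry_free) are hypothetical. Once the destroyer knows no other context can still reach the entry (the role synchronize_rcu() plays in cpu_map_free()), it may stop the worker and free the entry synchronously instead of deferring the cleanup to an RCU callback.

/* Userspace model of the teardown pattern; illustrative only. */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

struct entry {
	pthread_t worker;         /* stands in for the per-entry kthread */
	atomic_bool should_stop;  /* stands in for kthread_stop() signalling */
};

static void *worker_fn(void *arg)
{
	struct entry *e = arg;

	/* Drain "work" until the destroyer asks us to stop. */
	while (!atomic_load(&e->should_stop))
		sched_yield();
	return NULL;
}

/* Direct cleanup: safe only once no other context can reach the entry,
 * i.e. after the equivalent of the synchronize_rcu() in cpu_map_free().
 */
static void entry_free(struct entry *e)
{
	atomic_store(&e->should_stop, true);
	pthread_join(e->worker, NULL);  /* wait for the worker to exit */
	free(e);
}

int main(void)
{
	struct entry *e = calloc(1, sizeof(*e));

	if (!e || pthread_create(&e->worker, NULL, worker_fn, e))
		return 1;
	/* ... entry published, used, then unpublished; all readers done ... */
	entry_free(e);  /* stop worker and free directly, no deferred callback */
	return 0;
}

Compiled with -pthread, this runs to completion: the join in entry_free() gives the same guarantee the commit relies on, namely that the worker has fully exited before the entry's memory is released.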