Commit beb3c23c authored by Andrey Konovalov, committed by Andrew Morton

lib/stackdepot: annotate racy pool_index accesses

Accesses to pool_index are protected by pool_lock everywhere except
in a sanity check in stack_depot_fetch. The read access there can race
with the write access in depot_alloc_stack.

Use WRITE/READ_ONCE() to annotate the racy accesses.

As the sanity check only prints a warning when the stack depot interface
is misused, it does not warrant proper synchronization.

[andreyknvl@google.com: s/pool_index/pool_index_cached/ in stack_depot_fetch()]
  Link: https://lkml.kernel.org/r/95cf53f0da2c112aa2cc54456cbcd6975c3ff343.1676129911.git.andreyknvl@google.com
Link: https://lkml.kernel.org/r/359ac9c13cd0869c56740fb2029f505e41593830.1676063693.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 36aa1e67
@@ -278,8 +278,12 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 		return NULL;
 	}
 
-	/* Move on to the next pool. */
-	pool_index++;
+	/*
+	 * Move on to the next pool.
+	 * WRITE_ONCE pairs with potential concurrent read in
+	 * stack_depot_fetch().
+	 */
+	WRITE_ONCE(pool_index, pool_index + 1);
 	pool_offset = 0;
 
 	/*
 	 * If the maximum number of pools is not reached, take note
@@ -502,6 +506,11 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries)
 {
 	union handle_parts parts = { .handle = handle };
+	/*
+	 * READ_ONCE pairs with potential concurrent write in
+	 * depot_alloc_stack.
+	 */
+	int pool_index_cached = READ_ONCE(pool_index);
 	void *pool;
 	size_t offset = parts.offset << DEPOT_STACK_ALIGN;
 	struct stack_record *stack;
@@ -510,9 +519,9 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 	if (!handle)
 		return 0;
 
-	if (parts.pool_index > pool_index) {
+	if (parts.pool_index > pool_index_cached) {
 		WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
-		     parts.pool_index, pool_index, handle);
+		     parts.pool_index, pool_index_cached, handle);
 		return 0;
 	}
 
 	pool = stack_pools[parts.pool_index];