Commit 97a9474a authored by Thomas Gleixner's avatar Thomas Gleixner


Merge branch 'kcsan-for-tip' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into locking/kcsan

Pull KCSAN updates from Paul McKenney.
parents 3b02a051 50a19ad4
The Kernel Concurrency Sanitizer (KCSAN)
========================================
Overview
--------
The Kernel Concurrency Sanitizer (KCSAN) is a dynamic race detector, which
relies on compile-time instrumentation, and uses a watchpoint-based sampling
approach to detect races. KCSAN's primary purpose is to detect `data races`_.
Usage
-----
KCSAN is supported in both GCC and Clang. With GCC it requires version 7.3.0 or
later. With Clang it requires version 7.0.0 or later.
To enable KCSAN configure the kernel with::
CONFIG_KCSAN = y
KCSAN provides several other configuration options to customize behaviour (see
the respective help text in ``lib/Kconfig.kcsan`` for more info).
Error reports
~~~~~~~~~~~~~
......@@ -96,7 +91,8 @@ The other less common type of data race report looks like this::
This report is generated where it was not possible to determine the other
racing thread, but a race was inferred due to the data value of the watched
memory location having changed. These can occur either due to missing
instrumentation or e.g. DMA accesses. These reports will only be generated if
``CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN=y`` (selected by default).
Selective analysis
~~~~~~~~~~~~~~~~~~
......@@ -110,9 +106,26 @@ the below options are available:
behaviour when encountering a data race is deemed safe.
* Disabling data race detection for entire functions can be accomplished by
using the function attribute ``__no_kcsan``::
__no_kcsan
void foo(void) {
...
To dynamically limit for which functions to generate reports, see the
`DebugFS interface`_ blacklist/whitelist feature.
For ``__always_inline`` functions, replace ``__always_inline`` with
``__no_kcsan_or_inline`` (which implies ``__always_inline``)::
static __no_kcsan_or_inline void foo(void) {
...
Note: Older compiler versions (GCC < 9) also do not always honor the
``__no_kcsan`` attribute on regular ``inline`` functions. If false positives
with these compilers cannot be tolerated, for small functions where
``__always_inline`` would be appropriate, ``__no_kcsan_or_inline`` should be
preferred instead.
* To disable data race detection for a particular compilation unit, add to the
``Makefile``::
......@@ -124,13 +137,29 @@ the below options are available:
KCSAN_SANITIZE := n
Furthermore, it is possible to tell KCSAN to show or hide entire classes of
data races, depending on preferences. These can be changed via the following
Kconfig options:
* ``CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY``: If enabled and a conflicting write
is observed via a watchpoint, but the data value of the memory location was
observed to remain unchanged, do not report the data race.
* ``CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC``: Assume that plain aligned writes
up to word size are atomic by default. Assumes that such writes are not
subject to unsafe compiler optimizations resulting in data races. The option
causes KCSAN to not report data races due to conflicts where the only plain
accesses are aligned writes up to word size.
DebugFS interface
~~~~~~~~~~~~~~~~~
The file ``/sys/kernel/debug/kcsan`` provides the following interface:
* Reading ``/sys/kernel/debug/kcsan`` returns various runtime statistics.
* Writing ``on`` or ``off`` to ``/sys/kernel/debug/kcsan`` allows turning KCSAN
on or off, respectively.
* Writing ``!some_func_name`` to ``/sys/kernel/debug/kcsan`` adds
``some_func_name`` to the report filter list, which (by default) blacklists
......@@ -142,91 +171,121 @@ debugfs
can be used to silence frequently occurring data races; the whitelist feature
can help with reproduction and testing of fixes.
Tuning performance
~~~~~~~~~~~~~~~~~~
Core parameters that affect KCSAN's overall performance and bug detection
ability are exposed as kernel command-line arguments whose defaults can also be
changed via the corresponding Kconfig options.
* ``kcsan.skip_watch`` (``CONFIG_KCSAN_SKIP_WATCH``): Number of per-CPU memory
operations to skip, before another watchpoint is set up. Setting up
watchpoints more frequently increases the likelihood that races will be
observed. This parameter has the most significant impact on overall system
performance and race detection ability.
* ``kcsan.udelay_task`` (``CONFIG_KCSAN_UDELAY_TASK``): For tasks, the
microsecond delay to stall execution after a watchpoint has been set up.
Larger values increase the window in which a race may be observed.
* ``kcsan.udelay_interrupt`` (``CONFIG_KCSAN_UDELAY_INTERRUPT``): For
interrupts, the microsecond delay to stall execution after a watchpoint has
been set up. Interrupts have tighter latency requirements, and their delay
should generally be smaller than the one chosen for tasks.
They may be tweaked at runtime via ``/sys/module/kcsan/parameters/``.
Data Races
----------
In an execution, two memory accesses form a *data race* if they *conflict*,
they happen concurrently in different threads, and at least one of them is a
*plain access*; they *conflict* if both access the same memory location, and at
least one is a write. For a more thorough discussion and definition, see `"Plain
Accesses and Data Races" in the LKMM`_.
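
For illustration, a hedged sketch of a data race (``shared`` and both
functions are hypothetical):

.. code-block:: c

    int shared;

    /* If these run concurrently in different threads, the two plain
     * accesses conflict and form a data race; marking both accesses
     * (e.g. with READ_ONCE()/WRITE_ONCE()) would resolve the data race.
     */
    void writer(void) { shared++; }
    int  reader(void) { return shared; }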
.. _"Plain Accesses and Data Races" in the LKMM: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/memory-model/Documentation/explanation.txt#n1922
Relationship with the Linux-Kernel Memory Consistency Model (LKMM)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The LKMM defines the propagation and ordering rules of various memory
operations, which gives developers the ability to reason about concurrent code.
Ultimately this allows determining the possible executions of concurrent code,
and whether that code is free from data races.
KCSAN is aware of *marked atomic operations* (``READ_ONCE``, ``WRITE_ONCE``,
``atomic_*``, etc.), but is oblivious of any ordering guarantees and simply
assumes that memory barriers are placed correctly. In other words, KCSAN
assumes that as long as a plain access is not observed to race with another
conflicting access, memory operations are correctly ordered.
This means that KCSAN will not report *potential* data races due to missing
memory ordering. Developers should therefore carefully consider the memory
ordering requirements that remain unchecked. If, however, missing
memory ordering (that is observable with a particular compiler and
architecture) leads to an observable data race (e.g. entering a critical
section erroneously), KCSAN would report the resulting data race.
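
For example, consider this hedged sketch of message passing, where all
accesses are marked and thus no data race exists, yet the (likely required)
release/acquire ordering is missing; KCSAN alone would stay silent here
(``msg_data`` and ``msg_ready`` are hypothetical):

.. code-block:: c

    int msg_data;
    int msg_ready;

    void producer(void)
    {
        WRITE_ONCE(msg_data, 42);
        WRITE_ONCE(msg_ready, 1);   /* likely needs smp_store_release() */
    }

    int consumer(void)
    {
        if (READ_ONCE(msg_ready))   /* likely needs smp_load_acquire() */
            return READ_ONCE(msg_data);
        return -1;
    }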
Race Detection Beyond Data Races
--------------------------------
For code with complex concurrency design, race-condition bugs may not always
manifest as data races. Race conditions occur if concurrently executing
operations result in unexpected system behaviour. On the other hand, data races
are defined at the C-language level. The following macros can be used to check
properties of concurrent code where bugs would not manifest as data races.
.. kernel-doc:: include/linux/kcsan-checks.h
:functions: ASSERT_EXCLUSIVE_WRITER ASSERT_EXCLUSIVE_WRITER_SCOPED
ASSERT_EXCLUSIVE_ACCESS ASSERT_EXCLUSIVE_ACCESS_SCOPED
ASSERT_EXCLUSIVE_BITS
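
As a brief illustration of the first of these (mirroring the example in the
kernel-doc below; ``update_foo_lock`` and ``shared_foo`` are illustrative):

.. code-block:: c

    void writer(void)
    {
        spin_lock(&update_foo_lock);
        ASSERT_EXCLUSIVE_WRITER(shared_foo); /* no concurrent writers expected */
        WRITE_ONCE(shared_foo, 42);
        spin_unlock(&update_foo_lock);
    }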
Implementation Details
----------------------
KCSAN relies on observing that two accesses happen concurrently. Crucially, we
want to (a) increase the chances of observing races (especially for races that
manifest rarely), and (b) be able to actually observe them. We can accomplish
(a) by injecting various delays, and (b) by using address watchpoints (or
breakpoints).
If we deliberately stall a memory access, while we have a watchpoint for its
address set up, and then observe the watchpoint fire, two accesses to the
same address just raced. Using hardware watchpoints, this is the approach taken
in `DataCollider
<http://usenix.org/legacy/events/osdi10/tech/full_papers/Erickson.pdf>`_.
Unlike DataCollider, KCSAN does not use hardware watchpoints, but instead
relies on compiler instrumentation and "soft watchpoints".
In KCSAN, watchpoints are implemented using an efficient encoding that stores
access type, size, and address in a long; the benefits of using "soft
watchpoints" are portability and greater flexibility. KCSAN then relies on the
compiler instrumenting plain accesses. For each instrumented plain access:
1. Check if a matching watchpoint exists; if yes, and at least one access is a
write, then we encountered a racing access.
2. Periodically, if no matching watchpoint exists, set up a watchpoint and
stall for a small randomized delay.
3. Also check the data value before the delay, and re-check the data value
after delay; if the values mismatch, we infer a race of unknown origin.
To detect data races between plain and marked accesses, KCSAN also annotates
marked accesses, but only to check if a watchpoint exists; i.e. KCSAN never
sets up a watchpoint on marked accesses. By never setting up watchpoints for
marked operations, if all accesses to a variable that is accessed concurrently
are properly marked, KCSAN will never trigger a watchpoint and therefore never
report the accesses.
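
To make the encoding idea concrete, here is a self-contained, hedged sketch of
packing an access into a single long (the field layout is illustrative only
and assumes a 64-bit ``long``; KCSAN's actual encoding lives in
``kernel/kcsan/encoding.h``):

.. code-block:: c

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define WP_WRITE_BIT  (1UL << 63)          /* access is a write */
    #define WP_SIZE_SHIFT 60                   /* 3 bits: log2 of size (1..8 bytes) */
    #define WP_ADDR_MASK  ((1UL << 60) - 1)    /* low bits of the address */

    static unsigned long encode_watchpoint(uintptr_t addr, size_t size, bool is_write)
    {
        unsigned long lg = 0;

        while ((1UL << lg) < size) /* log2 of the access size */
            lg++;
        return (is_write ? WP_WRITE_BIT : 0) | (lg << WP_SIZE_SHIFT) |
               (addr & WP_ADDR_MASK);
    }

    static bool watchpoint_is_write(unsigned long wp)
    {
        return wp & WP_WRITE_BIT;
    }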
Key Properties
~~~~~~~~~~~~~~
1. **Memory Overhead:** The overall memory overhead is only a few MiB
depending on configuration. The current implementation uses a small array of
longs to encode watchpoint information, which is negligible.
2. **Performance Overhead:** KCSAN's runtime aims to be minimal, using an
efficient watchpoint encoding that does not require acquiring any shared
......@@ -253,14 +312,17 @@ Key Properties
Alternatives Considered
-----------------------
An alternative data race detection approach for the kernel can be found in the
`Kernel Thread Sanitizer (KTSAN) <https://github.com/google/ktsan/wiki>`_.
KTSAN is a happens-before data race detector, which explicitly establishes the
happens-before order between memory operations, which can then be used to
determine data races as defined in `Data Races`_.
To build a correct happens-before relation, KTSAN must be aware of all ordering
rules of the LKMM and synchronization primitives. Unfortunately, any omission
leads to large numbers of false positives, which is especially detrimental in
the context of the kernel which includes numerous custom synchronization
mechanisms. To track the happens-before relation, KTSAN's implementation
requires metadata for each memory location (shadow memory): for each page of
memory, 4 pages of shadow memory are required, which can translate into
overhead of tens of GiB on a large system (for example, 16 GiB of monitored
memory would need 64 GiB of shadow memory).
......@@ -326,9 +326,9 @@ unsigned long read_word_at_a_time(const void *addr)
#define data_race(expr) \
({ \
typeof(({ expr; })) __val; \
kcsan_disable_current(); \
__val = ({ expr; }); \
kcsan_enable_current(); \
__val; \
})
#else
......
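/*
 * Illustrative data_race() usage (a hedged sketch; total_count is
 * hypothetical): reads annotated with data_race() may race with concurrent
 * writers without KCSAN reporting them, e.g. for approximate statistics:
 *
 *	static unsigned long total_count;
 *
 *	unsigned long read_stats(void)
 *	{
 *		return data_race(total_count);
 *	}
 */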
......@@ -3,6 +3,8 @@
#ifndef _LINUX_KCSAN_CHECKS_H
#define _LINUX_KCSAN_CHECKS_H
/* Note: Only include what is already included by compiler.h. */
#include <linux/compiler_attributes.h>
#include <linux/types.h>
/*
......@@ -12,10 +14,12 @@
* WRITE : write access;
* ATOMIC: access is atomic;
* ASSERT: access is not a regular access, but an assertion;
* SCOPED: access is a scoped access;
*/
#define KCSAN_ACCESS_WRITE 0x1
#define KCSAN_ACCESS_ATOMIC 0x2
#define KCSAN_ACCESS_ASSERT 0x4
#define KCSAN_ACCESS_SCOPED 0x8
/*
* __kcsan_*: Always calls into the runtime when KCSAN is enabled. This may be used
......@@ -26,12 +30,27 @@
/**
* __kcsan_check_access - check generic access for races
*
* @ptr: address of access
* @size: size of access
* @type: access type modifier
*/
void __kcsan_check_access(const volatile void *ptr, size_t size, int type);
/**
* kcsan_disable_current - disable KCSAN for the current context
*
* Supports nesting.
*/
void kcsan_disable_current(void);
/**
* kcsan_enable_current - re-enable KCSAN for the current context
*
* Supports nesting.
*/
void kcsan_enable_current(void);
void kcsan_enable_current_nowarn(void); /* Safe in uaccess regions. */
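/*
 * Illustrative usage (a hedged sketch): suppress KCSAN reports for a sequence
 * of accesses in the current context; calls nest:
 *
 *	kcsan_disable_current();
 *	...			(intentionally racy, diagnostic-only accesses)
 *	kcsan_enable_current();
 */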
/**
* kcsan_nestable_atomic_begin - begin nestable atomic region
*
......@@ -64,7 +83,7 @@ void kcsan_flat_atomic_end(void);
* Force treating the next n memory accesses for the current context as atomic
* operations.
*
* @n: number of following memory accesses to treat as atomic.
*/
void kcsan_atomic_next(int n);
......@@ -74,15 +93,64 @@ void kcsan_atomic_next(int n);
* Set the access mask for all accesses for the current context if non-zero.
* Only value changes to bits set in the mask will be reported.
*
* @mask: bitmask
*/
void kcsan_set_access_mask(unsigned long mask);
/* Scoped access information. */
struct kcsan_scoped_access {
struct list_head list;
const volatile void *ptr;
size_t size;
int type;
};
/*
* Automatically call kcsan_end_scoped_access() when kcsan_scoped_access goes
* out of scope; relies on attribute "cleanup", which is supported by all
* compilers that support KCSAN.
*/
#define __kcsan_cleanup_scoped \
__maybe_unused __attribute__((__cleanup__(kcsan_end_scoped_access)))
/**
* kcsan_begin_scoped_access - begin scoped access
*
* Begin scoped access and initialize @sa, which will cause KCSAN to
* continuously check the memory range in the current thread until
* kcsan_end_scoped_access() is called for @sa.
*
* Scoped accesses are implemented by appending @sa to an internal list for the
* current execution context, and then checked on every call into the KCSAN
* runtime.
*
* @ptr: address of access
* @size: size of access
* @type: access type modifier
* @sa: struct kcsan_scoped_access to use for the scope of the access
*/
struct kcsan_scoped_access *
kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type,
struct kcsan_scoped_access *sa);
/**
* kcsan_end_scoped_access - end scoped access
*
* End a scoped access, which will stop KCSAN checking the memory range.
* Requires that kcsan_begin_scoped_access() was previously called once for @sa.
*
* @sa: a previously initialized struct kcsan_scoped_access
*/
void kcsan_end_scoped_access(struct kcsan_scoped_access *sa);
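/*
 * Illustrative usage of the scoped-access API (a hedged sketch; `flags' is a
 * hypothetical variable):
 *
 *	struct kcsan_scoped_access sa;
 *
 *	kcsan_begin_scoped_access(&flags, sizeof(flags), KCSAN_ACCESS_ASSERT, &sa);
 *	...			(the range is re-checked while in scope)
 *	kcsan_end_scoped_access(&sa);
 */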
#else /* CONFIG_KCSAN */
static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
int type) { }
static inline void kcsan_disable_current(void) { }
static inline void kcsan_enable_current(void) { }
static inline void kcsan_enable_current_nowarn(void) { }
static inline void kcsan_nestable_atomic_begin(void) { }
static inline void kcsan_nestable_atomic_end(void) { }
static inline void kcsan_flat_atomic_begin(void) { }
......@@ -90,32 +158,48 @@ static inline void kcsan_flat_atomic_end(void) { }
static inline void kcsan_atomic_next(int n) { }
static inline void kcsan_set_access_mask(unsigned long mask) { }
struct kcsan_scoped_access { };
#define __kcsan_cleanup_scoped __maybe_unused
static inline struct kcsan_scoped_access *
kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type,
struct kcsan_scoped_access *sa) { return sa; }
static inline void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
#endif /* CONFIG_KCSAN */
#ifdef __SANITIZE_THREAD__
/*
* Only calls into the runtime when the particular compilation unit has KCSAN
* instrumentation enabled. May be used in header files.
*/
#define kcsan_check_access __kcsan_check_access
/*
* Only use these to disable KCSAN for accesses in the current compilation unit;
* calls into libraries may still perform KCSAN checks.
*/
#define __kcsan_disable_current kcsan_disable_current
#define __kcsan_enable_current kcsan_enable_current_nowarn
#else
static inline void kcsan_check_access(const volatile void *ptr, size_t size,
int type) { }
static inline void __kcsan_enable_current(void) { }
static inline void __kcsan_disable_current(void) { }
#endif
/**
* __kcsan_check_read - check regular read access for races
*
* @ptr: address of access
* @size: size of access
*/
#define __kcsan_check_read(ptr, size) __kcsan_check_access(ptr, size, 0)
/**
* __kcsan_check_write - check regular write access for races
*
* @ptr: address of access
* @size: size of access
*/
#define __kcsan_check_write(ptr, size) \
__kcsan_check_access(ptr, size, KCSAN_ACCESS_WRITE)
......@@ -123,16 +207,16 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
/**
* kcsan_check_read - check regular read access for races
*
* @ptr: address of access
* @size: size of access
*/
#define kcsan_check_read(ptr, size) kcsan_check_access(ptr, size, 0)
/**
* kcsan_check_write - check regular write access for races
*
* @ptr: address of access
* @size: size of access
*/
#define kcsan_check_write(ptr, size) \
kcsan_check_access(ptr, size, KCSAN_ACCESS_WRITE)
......@@ -158,18 +242,82 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
* allowed. This assertion can be used to specify properties of concurrent code,
* where violation cannot be detected as a normal data race.
*
*
* For example, if we only have a single writer, but multiple concurrent
* readers, to avoid data races, all these accesses must be marked; even
* concurrent marked writes racing with the single writer are bugs.
* Unfortunately, due to being marked, they are no longer data races. For cases
* like these, we can use the macro as follows:
*
* .. code-block:: c
*
* void writer(void) {
* spin_lock(&update_foo_lock);
* ASSERT_EXCLUSIVE_WRITER(shared_foo);
* WRITE_ONCE(shared_foo, ...);
* spin_unlock(&update_foo_lock);
* }
* void reader(void) {
* // update_foo_lock does not need to be held!
* ... = READ_ONCE(shared_foo);
* }
*
* Note: ASSERT_EXCLUSIVE_WRITER_SCOPED(), if applicable, performs more thorough
* checking if a clear scope where no concurrent writes are expected exists.
*
* @var: variable to assert on
*/
#define ASSERT_EXCLUSIVE_WRITER(var) \
__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_ASSERT)
/*
* Helper macros for the implementation of ASSERT_EXCLUSIVE_*_SCOPED(). @id is
* expected to be unique for the scope in which instances of kcsan_scoped_access
* are declared.
*/
#define __kcsan_scoped_name(c, suffix) __kcsan_scoped_##c##suffix
#define __ASSERT_EXCLUSIVE_SCOPED(var, type, id) \
struct kcsan_scoped_access __kcsan_scoped_name(id, _) \
__kcsan_cleanup_scoped; \
struct kcsan_scoped_access *__kcsan_scoped_name(id, _dummy_p) \
__maybe_unused = kcsan_begin_scoped_access( \
&(var), sizeof(var), KCSAN_ACCESS_SCOPED | (type), \
&__kcsan_scoped_name(id, _))
/**
* ASSERT_EXCLUSIVE_WRITER_SCOPED - assert no concurrent writes to @var in scope
*
* Scoped variant of ASSERT_EXCLUSIVE_WRITER().
*
* Assert that there are no concurrent writes to @var for the duration of the
* scope in which it is introduced. This provides a better way to fully cover
* the enclosing scope, compared to multiple ASSERT_EXCLUSIVE_WRITER(), and
* increases the likelihood for KCSAN to detect racing accesses.
*
* For example, it allows finding race-condition bugs that only occur due to
* state changes within the scope itself:
*
* .. code-block:: c
*
* void writer(void) {
* spin_lock(&update_foo_lock);
* {
* ASSERT_EXCLUSIVE_WRITER_SCOPED(shared_foo);
* WRITE_ONCE(shared_foo, 42);
* ...
* // shared_foo should still be 42 here!
* }
* spin_unlock(&update_foo_lock);
* }
* void buggy(void) {
* if (READ_ONCE(shared_foo) == 42)
* WRITE_ONCE(shared_foo, 1); // bug!
* }
*
* @var: variable to assert on
*/
#define ASSERT_EXCLUSIVE_WRITER_SCOPED(var) \
__ASSERT_EXCLUSIVE_SCOPED(var, KCSAN_ACCESS_ASSERT, __COUNTER__)
/**
* ASSERT_EXCLUSIVE_ACCESS - assert no concurrent accesses to @var
*
......@@ -177,30 +325,55 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
* writers). This assertion can be used to specify properties of concurrent
* code, where violation cannot be detected as a normal data race.
*
* For example, where exclusive access is expected after determining no other
* users of an object are left, but the object is not actually freed. We can
* check that this property actually holds as follows:
*
* .. code-block:: c
*
* if (refcount_dec_and_test(&obj->refcnt)) {
* ASSERT_EXCLUSIVE_ACCESS(*obj);
* do_some_cleanup(obj);
* release_for_reuse(obj);
* }
*
* Note: ASSERT_EXCLUSIVE_ACCESS_SCOPED(), if applicable, performs more thorough
* checking if a clear scope where no concurrent accesses are expected exists.
*
* Note: For cases where the object is freed, `KASAN <kasan.html>`_ is a better
* fit to detect use-after-free bugs.
*
* @var: variable to assert on
*/
#define ASSERT_EXCLUSIVE_ACCESS(var) \
__kcsan_check_access(&(var), sizeof(var), KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT)
/**
* ASSERT_EXCLUSIVE_ACCESS_SCOPED - assert no concurrent accesses to @var in scope
*
* Scoped variant of ASSERT_EXCLUSIVE_ACCESS().
*
* Assert that there are no concurrent accesses to @var (no readers nor writers)
* for the entire duration of the scope in which it is introduced. This provides
* a better way to fully cover the enclosing scope, compared to multiple
* ASSERT_EXCLUSIVE_ACCESS(), and increases the likelihood for KCSAN to detect
* racing accesses.
*
* @var: variable to assert on
*/
#define ASSERT_EXCLUSIVE_ACCESS_SCOPED(var) \
__ASSERT_EXCLUSIVE_SCOPED(var, KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT, __COUNTER__)
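/*
 * Illustrative usage (a hedged sketch, mirroring the ASSERT_EXCLUSIVE_ACCESS()
 * example above): assert exclusive access for the remainder of the scope once
 * the last reference is dropped:
 *
 *	if (refcount_dec_and_test(&obj->refcnt)) {
 *		ASSERT_EXCLUSIVE_ACCESS_SCOPED(*obj);
 *		do_some_cleanup(obj);
 *		release_for_reuse(obj);
 *	}
 */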
/**
* ASSERT_EXCLUSIVE_BITS - assert no concurrent writes to subset of bits in @var
*
* Bit-granular variant of ASSERT_EXCLUSIVE_WRITER().
*
* Assert that there are no concurrent writes to a subset of bits in @var;
* concurrent readers are permitted. This assertion captures more detailed
* bit-level properties, compared to the other (word granularity) assertions.
* Only the bits set in @mask are checked for concurrent modifications, while
* ignoring the remaining bits, i.e. concurrent writes (or reads) to ~mask bits
* are ignored.
*
* Use this for variables, where some bits must not be modified concurrently,
......@@ -210,17 +383,21 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
* but other bits may still be modified concurrently. A reader may wish to
* assert that this is true as follows:
*
* .. code-block:: c
*
* ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
* foo = (READ_ONCE(flags) & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
*
*
* Note: The access that immediately follows ASSERT_EXCLUSIVE_BITS() is assumed
* to access the masked bits only, and KCSAN optimistically assumes it is
* therefore safe, even in the presence of data races, and marking it with
* READ_ONCE() is optional from KCSAN's point-of-view. We caution, however, that
* it may still be advisable to do so, since we cannot reason about all compiler
* optimizations when it comes to bit manipulations (on the reader and writer
* side). If you are sure nothing can go wrong, we can write the above simply
* as:
*
* .. code-block:: c
*
* ASSERT_EXCLUSIVE_BITS(flags, READ_ONLY_MASK);
* foo = (flags & READ_ONLY_MASK) >> READ_ONLY_SHIFT;
......@@ -230,15 +407,17 @@ static inline void kcsan_check_access(const volatile void *ptr, size_t size,
* be modified concurrently. Writers, where other bits may change concurrently,
* could use the assertion as follows:
*
* .. code-block:: c
*
* spin_lock(&foo_lock);
* ASSERT_EXCLUSIVE_BITS(flags, FOO_MASK);
* old_flags = flags;
* new_flags = (old_flags & ~FOO_MASK) | (new_foo << FOO_SHIFT);
* if (cmpxchg(&flags, old_flags, new_flags) != old_flags) { ... }
* spin_unlock(&foo_lock);
*
*
* @var: variable to assert on
* @mask: only check for modifications to bits set in @mask
*/
#define ASSERT_EXCLUSIVE_BITS(var, mask) \
do { \
......
......@@ -40,6 +40,9 @@ struct kcsan_ctx {
* Access mask for all accesses if non-zero.
*/
unsigned long access_mask;
/* List of scoped accesses. */
struct list_head scoped_accesses;
};
/**
......@@ -47,25 +50,9 @@ struct kcsan_ctx {
*/
void kcsan_init(void);
#else /* CONFIG_KCSAN */
static inline void kcsan_init(void) { }
#endif /* CONFIG_KCSAN */
......
......@@ -169,6 +169,7 @@ struct task_struct init_task
.atomic_nest_count = 0,
.in_flat_atomic = false,
.access_mask = 0,
.scoped_accesses = {LIST_POISON1, NULL},
},
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
......
......@@ -4,24 +4,17 @@
#define _KERNEL_KCSAN_ATOMIC_H
#include <linux/jiffies.h>
#include <linux/sched.h>
/*
* Special rules for certain memory where concurrent conflicting accesses are
* common, however, the current convention is to not mark them; returns true if
* access to @ptr should be considered atomic. Called from slow-path.
*/
static bool kcsan_is_atomic_special(const volatile void *ptr)
{
/* volatile globals that have been observed in data races. */
return ptr == &jiffies || ptr == &current->state;
}
#endif /* _KERNEL_KCSAN_ATOMIC_H */
......@@ -6,6 +6,7 @@
#include <linux/export.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/moduleparam.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
......@@ -18,9 +19,10 @@
#include "kcsan.h"
static bool kcsan_early_enable = IS_ENABLED(CONFIG_KCSAN_EARLY_ENABLE);
unsigned int kcsan_udelay_task = CONFIG_KCSAN_UDELAY_TASK;
unsigned int kcsan_udelay_interrupt = CONFIG_KCSAN_UDELAY_INTERRUPT;
static long kcsan_skip_watch = CONFIG_KCSAN_SKIP_WATCH;
static bool kcsan_interrupt_watcher = IS_ENABLED(CONFIG_KCSAN_INTERRUPT_WATCHER);
#ifdef MODULE_PARAM_PREFIX
#undef MODULE_PARAM_PREFIX
......@@ -30,6 +32,7 @@ module_param_named(early_enable, kcsan_early_enable, bool, 0);
module_param_named(udelay_task, kcsan_udelay_task, uint, 0644);
module_param_named(udelay_interrupt, kcsan_udelay_interrupt, uint, 0644);
module_param_named(skip_watch, kcsan_skip_watch, long, 0644);
module_param_named(interrupt_watcher, kcsan_interrupt_watcher, bool, 0444);
bool kcsan_enabled;
......@@ -40,10 +43,11 @@ static DEFINE_PER_CPU(struct kcsan_ctx, kcsan_cpu_ctx) = {
.atomic_nest_count = 0,
.in_flat_atomic = false,
.access_mask = 0,
.scoped_accesses = {LIST_POISON1, NULL},
};
/*
* Helper macros to index into adjacent slots, starting from address slot
* itself, followed by the right and left slots.
*
* The purpose is 2-fold:
......@@ -67,7 +71,6 @@ static DEFINE_PER_CPU(struct kcsan_ctx, kcsan_cpu_ctx) = {
* slot=9: [10, 11, 9]
* slot=63: [64, 65, 63]
*/
#define SLOT_IDX(slot, i) (slot + ((i + KCSAN_CHECK_ADJACENT) % NUM_SLOTS))
/*
......@@ -169,12 +172,16 @@ try_consume_watchpoint(atomic_long_t *watchpoint, long encoded_watchpoint)
return atomic_long_try_cmpxchg_relaxed(watchpoint, &encoded_watchpoint, CONSUMED_WATCHPOINT);
}
/* Return true if watchpoint was not touched, false if already consumed. */
static inline bool consume_watchpoint(atomic_long_t *watchpoint)
{
return atomic_long_xchg_relaxed(watchpoint, CONSUMED_WATCHPOINT) != CONSUMED_WATCHPOINT;
}
/* Remove the watchpoint -- its slot may be reused after. */
static inline void remove_watchpoint(atomic_long_t *watchpoint)
{
atomic_long_set(watchpoint, INVALID_WATCHPOINT);
}
static __always_inline struct kcsan_ctx *get_ctx(void)
......@@ -186,12 +193,24 @@ static __always_inline struct kcsan_ctx *get_ctx(void)
return in_task() ? &current->kcsan_ctx : raw_cpu_ptr(&kcsan_cpu_ctx);
}
/* Check scoped accesses; never inline because this is a slow-path! */
static noinline void kcsan_check_scoped_accesses(void)
{
struct kcsan_ctx *ctx = get_ctx();
struct list_head *prev_save = ctx->scoped_accesses.prev;
struct kcsan_scoped_access *scoped_access;
ctx->scoped_accesses.prev = NULL; /* Avoid recursion. */
list_for_each_entry(scoped_access, &ctx->scoped_accesses, list)
__kcsan_check_access(scoped_access->ptr, scoped_access->size, scoped_access->type);
ctx->scoped_accesses.prev = prev_save;
}
/* Rules for generic atomic accesses. Called from fast-path. */
static __always_inline bool
is_atomic(const volatile void *ptr, size_t size, int type, struct kcsan_ctx *ctx)
{
if (type & KCSAN_ACCESS_ATOMIC)
return true;
/*
......@@ -199,16 +218,15 @@ is_atomic(const volatile void *ptr, size_t size, int type)
* as atomic. This allows using them also in atomic regions, such as
* seqlocks, without implicitly changing their semantics.
*/
if (type & KCSAN_ACCESS_ASSERT)
return false;
if (IS_ENABLED(CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC) &&
(type & KCSAN_ACCESS_WRITE) && size <= sizeof(long) &&
IS_ALIGNED((unsigned long)ptr, size))
return true; /* Assume aligned writes up to word size are atomic. */
if (ctx->atomic_next > 0) {
/*
* Because we do not have separate contexts for nested
* interrupts, in case atomic_next is set, we simply assume that
......@@ -222,14 +240,12 @@ is_atomic(const volatile void *ptr, size_t size, int type)
--ctx->atomic_next; /* in task, or outer interrupt */
return true;
}
return ctx->atomic_nest_count > 0 || ctx->in_flat_atomic;
}
static __always_inline bool
should_watch(const volatile void *ptr, size_t size, int type, struct kcsan_ctx *ctx)
{
/*
* Never set up watchpoints when memory operations are atomic.
......@@ -238,7 +254,7 @@ should_watch(const volatile void *ptr, size_t size, int type)
* should not count towards skipped instructions, and (2) to actually
* decrement kcsan_atomic_next for consecutive instruction stream.
*/
if (is_atomic(ptr, size, type, ctx))
return false;
if (this_cpu_dec_return(kcsan_skip) >= 0)
......@@ -320,8 +336,9 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
flags = user_access_save();
if (consumed) {
kcsan_report(ptr, size, type, KCSAN_VALUE_CHANGE_MAYBE,
KCSAN_REPORT_CONSUMED_WATCHPOINT,
watchpoint - watchpoints);
} else {
/*
* The other thread may not print any diagnostics, as it has
......@@ -354,7 +371,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
unsigned long access_mask;
enum kcsan_value_change value_change = KCSAN_VALUE_CHANGE_MAYBE;
unsigned long ua_flags = user_access_save();
unsigned long irq_flags = 0;
/*
* Always reset kcsan_skip counter in slow-path to avoid underflow; see
......@@ -365,30 +382,22 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
if (!kcsan_is_enabled())
goto out;
/*
* Special atomic rules: unlikely to be true, so we check them here in
* the slow-path, and not in the fast-path in is_atomic(). Call after
* kcsan_is_enabled(), as we may access memory that is not yet
* initialized during early boot.
*/
if (!is_assert && kcsan_is_atomic_special(ptr))
goto out;
if (!check_encodable((unsigned long)ptr, size)) {
kcsan_counter_inc(KCSAN_COUNTER_UNENCODABLE_ACCESSES);
goto out;
}
if (!kcsan_interrupt_watcher)
/* Use raw to avoid lockdep recursion via IRQ flags tracing. */
raw_local_irq_save(irq_flags);
watchpoint = insert_watchpoint((unsigned long)ptr, size, is_write);
......@@ -477,7 +486,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
value_change = KCSAN_VALUE_CHANGE_TRUE;
/* Check if this access raced with another. */
if (!consume_watchpoint(watchpoint)) {
/*
* Depending on the access type, map a value_change of MAYBE to
* TRUE (always report) or FALSE (never report).
......@@ -507,8 +516,8 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
if (is_assert && value_change == KCSAN_VALUE_CHANGE_TRUE)
kcsan_counter_inc(KCSAN_COUNTER_ASSERT_FAILURES);
kcsan_report(ptr, size, type, value_change, KCSAN_REPORT_RACE_SIGNAL,
watchpoint - watchpoints);
} else if (value_change == KCSAN_VALUE_CHANGE_TRUE) {
/* Inferring a race, since the value should not have changed. */
......@@ -518,12 +527,18 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type)
if (IS_ENABLED(CONFIG_KCSAN_REPORT_RACE_UNKNOWN_ORIGIN) || is_assert)
kcsan_report(ptr, size, type, KCSAN_VALUE_CHANGE_TRUE,
KCSAN_REPORT_RACE_UNKNOWN_ORIGIN,
watchpoint - watchpoints);
}
/*
* Remove watchpoint; must be after reporting, since the slot may be
* reused after this point.
*/
remove_watchpoint(watchpoint);
kcsan_counter_dec(KCSAN_COUNTER_USED_WATCHPOINTS);
out_unlock:
if (!kcsan_interrupt_watcher)
raw_local_irq_restore(irq_flags);
out:
user_access_restore(ua_flags);
......@@ -560,8 +575,14 @@ static __always_inline void check_access(const volatile void *ptr, size_t size,
if (unlikely(watchpoint != NULL))
kcsan_found_watchpoint(ptr, size, type, watchpoint,
encoded_watchpoint);
else {
struct kcsan_ctx *ctx = get_ctx(); /* Call only once in fast-path. */
if (unlikely(should_watch(ptr, size, type, ctx)))
kcsan_setup_watchpoint(ptr, size, type);
else if (unlikely(ctx->scoped_accesses.prev))
kcsan_check_scoped_accesses();
}
}
/* === Public interface ===================================================== */
......@@ -604,6 +625,13 @@ void kcsan_enable_current(void)
}
EXPORT_SYMBOL(kcsan_enable_current);
void kcsan_enable_current_nowarn(void)
{
if (get_ctx()->disable_count-- == 0)
kcsan_disable_current();
}
EXPORT_SYMBOL(kcsan_enable_current_nowarn);
void kcsan_nestable_atomic_begin(void)
{
/*
......@@ -657,6 +685,55 @@ void kcsan_set_access_mask(unsigned long mask)
}
EXPORT_SYMBOL(kcsan_set_access_mask);
struct kcsan_scoped_access *
kcsan_begin_scoped_access(const volatile void *ptr, size_t size, int type,
struct kcsan_scoped_access *sa)
{
struct kcsan_ctx *ctx = get_ctx();
__kcsan_check_access(ptr, size, type);
ctx->disable_count++; /* Disable KCSAN, in case list debugging is on. */
INIT_LIST_HEAD(&sa->list);
sa->ptr = ptr;
sa->size = size;
sa->type = type;
if (!ctx->scoped_accesses.prev) /* Lazy initialize list head. */
INIT_LIST_HEAD(&ctx->scoped_accesses);
list_add(&sa->list, &ctx->scoped_accesses);
ctx->disable_count--;
return sa;
}
EXPORT_SYMBOL(kcsan_begin_scoped_access);
void kcsan_end_scoped_access(struct kcsan_scoped_access *sa)
{
struct kcsan_ctx *ctx = get_ctx();
if (WARN(!ctx->scoped_accesses.prev, "Unbalanced %s()?", __func__))
return;
ctx->disable_count++; /* Disable KCSAN, in case list debugging is on. */
list_del(&sa->list);
if (list_empty(&ctx->scoped_accesses))
/*
* Ensure we do not enter kcsan_check_scoped_accesses()
* slow-path if unnecessary, and avoids requiring list_empty()
* in the fast-path (to avoid a READ_ONCE() and potential
* uaccess warning).
*/
ctx->scoped_accesses.prev = NULL;
ctx->disable_count--;
__kcsan_check_access(sa->ptr, sa->size, sa->type);
}
EXPORT_SYMBOL(kcsan_end_scoped_access);
void __kcsan_check_access(const volatile void *ptr, size_t size, int type)
{
check_access(ptr, size, type);
......
......@@ -74,25 +74,34 @@ void kcsan_counter_dec(enum kcsan_counter_id id)
*/
static noinline void microbenchmark(unsigned long iters)
{
const struct kcsan_ctx ctx_save = current->kcsan_ctx;
const bool was_enabled = READ_ONCE(kcsan_enabled);
cycles_t cycles;
/* We may have been called from an atomic region; reset context. */
memset(&current->kcsan_ctx, 0, sizeof(current->kcsan_ctx));
/*
* Disable to benchmark fast-path for all accesses, and (expected
* negligible) call into slow-path, but never set up watchpoints.
*/
WRITE_ONCE(kcsan_enabled, false);
pr_info("KCSAN: %s begin | iters: %lu\n", __func__, iters);
cycles = get_cycles();
while (iters--) {
unsigned long addr = iters & ((PAGE_SIZE << 8) - 1);
int type = !(iters & 0x7f) ? KCSAN_ACCESS_ATOMIC :
(!(iters & 0xf) ? KCSAN_ACCESS_WRITE : 0);
__kcsan_check_access((void *)addr, sizeof(long), type);
}
cycles = get_cycles() - cycles;
pr_info("KCSAN: %s end | cycles: %llu\n", __func__, cycles);
WRITE_ONCE(kcsan_enabled, was_enabled);
/* restore context */
current->kcsan_ctx = ctx_save;
}
/*
......@@ -101,6 +110,7 @@ static noinline void microbenchmark(unsigned long iters)
*/
static long test_dummy;
static long test_flags;
static long test_scoped;
static noinline void test_thread(unsigned long iters)
{
const long CHANGE_BITS = 0xff00ff00ff00ff00L;
......@@ -111,7 +121,8 @@ static noinline void test_thread(unsigned long iters)
memset(&current->kcsan_ctx, 0, sizeof(current->kcsan_ctx));
pr_info("KCSAN: %s begin | iters: %lu\n", __func__, iters);
pr_info("test_dummy@%px, test_flags@%px\n", &test_dummy, &test_flags);
pr_info("test_dummy@%px, test_flags@%px, test_scoped@%px,\n",
&test_dummy, &test_flags, &test_scoped);
cycles = get_cycles();
while (iters--) {
......@@ -132,6 +143,18 @@ static noinline void test_thread(unsigned long iters)
test_flags ^= CHANGE_BITS; /* generate value-change */
__kcsan_check_write(&test_flags, sizeof(test_flags));
BUG_ON(current->kcsan_ctx.scoped_accesses.prev);
{
/* Should generate reports anywhere in this block. */
ASSERT_EXCLUSIVE_WRITER_SCOPED(test_scoped);
ASSERT_EXCLUSIVE_ACCESS_SCOPED(test_scoped);
BUG_ON(!current->kcsan_ctx.scoped_accesses.prev);
/* Unrelated accesses. */
__kcsan_check_access(&cycles, sizeof(cycles), 0);
__kcsan_check_access(&cycles, sizeof(cycles), KCSAN_ACCESS_ATOMIC);
}
BUG_ON(current->kcsan_ctx.scoped_accesses.prev);
}
cycles = get_cycles() - cycles;
......@@ -207,7 +230,7 @@ static ssize_t insert_report_filterlist(const char *func)
/* initial allocation */
report_filterlist.addrs =
kmalloc_array(report_filterlist.size,
sizeof(unsigned long), GFP_ATOMIC);
if (report_filterlist.addrs == NULL) {
ret = -ENOMEM;
goto out;
......@@ -217,7 +240,7 @@ static ssize_t insert_report_filterlist(const char *func)
size_t new_size = report_filterlist.size * 2;
unsigned long *new_addrs =
krealloc(report_filterlist.addrs,
new_size * sizeof(unsigned long), GFP_ATOMIC);
if (new_addrs == NULL) {
/* leave filterlist itself untouched */
......
......@@ -12,6 +12,10 @@
/* The number of adjacent watchpoints to check. */
#define KCSAN_CHECK_ADJACENT 1
#define NUM_SLOTS (1 + 2*KCSAN_CHECK_ADJACENT)
extern unsigned int kcsan_udelay_task;
extern unsigned int kcsan_udelay_interrupt;
/*
* Globally enable and disable KCSAN.
......@@ -132,7 +136,7 @@ enum kcsan_report_type {
* Print a race report from thread that encountered the race.
*/
extern void kcsan_report(const volatile void *ptr, size_t size, int access_type,
enum kcsan_value_change value_change,
enum kcsan_report_type type, int watchpoint_idx);
#endif /* _KERNEL_KCSAN_KCSAN_H */
// SPDX-License-Identifier: GPL-2.0
#include <linux/debug_locks.h>
#include <linux/delay.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/lockdep.h>
......@@ -17,21 +19,49 @@
*/
#define NUM_STACK_ENTRIES 64
/* Common access info. */
struct access_info {
const volatile void *ptr;
size_t size;
int access_type;
int task_pid;
int cpu_id;
};
/*
* Other thread info: communicated from other racing thread to thread that set
* up the watchpoint, which then prints the complete report atomically.
*/
struct other_info {
struct access_info ai;
unsigned long stack_entries[NUM_STACK_ENTRIES];
int num_stack_entries;
/*
* Optionally pass @current. Typically we do not need to pass @current
* via @other_info since just @task_pid is sufficient. Passing @current
* has additional overhead.
*
* To safely pass @current, we must either use get_task_struct/
* put_task_struct, or stall the thread that populated @other_info.
*
* We cannot rely on get_task_struct/put_task_struct in case
* release_report() races with a task being released, and would have to
* free it in release_report(). This may result in deadlock if we want
* to use KCSAN on the allocators.
*
* Since we also want to reliably print held locks for
* CONFIG_KCSAN_VERBOSE, the current implementation stalls the thread
* that populated @other_info until it has been consumed.
*/
struct task_struct *task;
};
/*
* To never block any producers of struct other_info, we need as many elements
* as we have watchpoints (upper bound on concurrent races to report).
*/
static struct other_info other_infos[CONFIG_KCSAN_NUM_WATCHPOINTS + NUM_SLOTS-1];
/*
* Information about reported races; used to rate limit reporting.
......@@ -68,10 +98,11 @@ struct report_time {
static struct report_time report_times[REPORT_TIMES_SIZE];
/*
* Spinlock serializing report generation, and access to @other_infos. Although
* it could make sense to have a finer-grained locking story for @other_infos,
* report generation needs to be serialized either way, so not much is gained.
*/
static DEFINE_RAW_SPINLOCK(report_lock);
/*
* Checks if the race identified by thread frames frame1 and frame2 has
......@@ -161,11 +192,11 @@ skip_report(enum kcsan_value_change value_change, unsigned long top_frame)
* maintainers.
*/
char buf[64];
int len = scnprintf(buf, sizeof(buf), "%ps", (void *)top_frame);
if (!strnstr(buf, "rcu_", len) &&
!strnstr(buf, "_rcu", len) &&
!strnstr(buf, "_srcu", len))
return true;
}
......@@ -174,6 +205,20 @@ skip_report(enum kcsan_value_change value_change, unsigned long top_frame)
static const char *get_access_type(int type)
{
if (type & KCSAN_ACCESS_ASSERT) {
if (type & KCSAN_ACCESS_SCOPED) {
if (type & KCSAN_ACCESS_WRITE)
return "assert no accesses (scoped)";
else
return "assert no writes (scoped)";
} else {
if (type & KCSAN_ACCESS_WRITE)
return "assert no accesses";
else
return "assert no writes";
}
}
switch (type) {
case 0:
return "read";
......@@ -183,17 +228,14 @@ static const char *get_access_type(int type)
return "write";
case KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC:
return "write (marked)";
/*
* ASSERT variants:
*/
case KCSAN_ACCESS_ASSERT:
case KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_ATOMIC:
return "assert no writes";
case KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE:
case KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC:
return "assert no accesses";
case KCSAN_ACCESS_SCOPED:
return "read (scoped)";
case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_ATOMIC:
return "read (marked, scoped)";
case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE:
return "write (scoped)";
case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC:
return "write (marked, scoped)";
default:
BUG();
}
......@@ -217,19 +259,35 @@ static const char *get_thread_desc(int task_id)
}
/* Helper to skip KCSAN-related functions in stack-trace. */
static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries)
{
char buf[64];
char *cur;
int len, skip;
for (skip = 0; skip < num_entries; ++skip) {
len = scnprintf(buf, sizeof(buf), "%ps", (void *)stack_entries[skip]);
/* Never show tsan_* or {read,write}_once_size. */
if (strnstr(buf, "tsan_", len) ||
strnstr(buf, "_once_size", len))
continue;
cur = strnstr(buf, "kcsan_", len);
if (cur) {
cur += sizeof("kcsan_") - 1;
if (strncmp(cur, "test", sizeof("test") - 1))
continue; /* KCSAN runtime function. */
/* KCSAN related test. */
}
/*
* No match for runtime functions -- @skip entries to skip to
* get to first frame of interest.
*/
break;
}
return skip;
}
......@@ -245,12 +303,23 @@ static int sym_strcmp(void *addr1, void *addr2)
return strncmp(buf1, buf2, sizeof(buf1));
}
static void print_verbose_info(struct task_struct *task)
{
if (!task)
return;
pr_err("\n");
debug_show_held_locks(task);
print_irqtrace_events(task);
}
/*
* Returns true if a report was generated, false otherwise.
*/
static bool print_report(enum kcsan_value_change value_change,
enum kcsan_report_type type,
const struct access_info *ai,
const struct other_info *other_info)
{
unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 };
int num_stack_entries = stack_trace_save(stack_entries, NUM_STACK_ENTRIES, 1);
......@@ -266,9 +335,9 @@ static bool print_report(const volatile void *ptr, size_t size, int access_type,
return false;
if (type == KCSAN_REPORT_RACE_SIGNAL) {
other_skipnr = get_stack_skipnr(other_info->stack_entries,
other_info->num_stack_entries);
other_frame = other_info->stack_entries[other_skipnr];
/* @value_change is only known for the other thread */
if (skip_report(value_change, other_frame))
......@@ -290,13 +359,13 @@ static bool print_report(const volatile void *ptr, size_t size, int access_type,
*/
cmp = sym_strcmp((void *)other_frame, (void *)this_frame);
pr_err("BUG: KCSAN: %s in %ps / %ps\n",
get_bug_type(ai->access_type | other_info->ai.access_type),
(void *)(cmp < 0 ? other_frame : this_frame),
(void *)(cmp < 0 ? this_frame : other_frame));
} break;
case KCSAN_REPORT_RACE_UNKNOWN_ORIGIN:
pr_err("BUG: KCSAN: %s in %pS\n", get_bug_type(access_type),
pr_err("BUG: KCSAN: %s in %pS\n", get_bug_type(ai->access_type),
(void *)this_frame);
break;
......@@ -310,27 +379,28 @@ static bool print_report(const volatile void *ptr, size_t size, int access_type,
switch (type) {
case KCSAN_REPORT_RACE_SIGNAL:
pr_err("%s to 0x%px of %zu bytes by %s on cpu %i:\n",
get_access_type(other_info->ai.access_type), other_info->ai.ptr,
other_info->ai.size, get_thread_desc(other_info->ai.task_pid),
other_info->ai.cpu_id);
/* Print the other thread's stack trace. */
stack_trace_print(other_info->stack_entries + other_skipnr,
other_info->num_stack_entries - other_skipnr,
0);
if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
print_verbose_info(other_info->task);
pr_err("\n");
pr_err("%s to 0x%px of %zu bytes by %s on cpu %i:\n",
get_access_type(ai->access_type), ai->ptr, ai->size,
get_thread_desc(ai->task_pid), ai->cpu_id);
break;
case KCSAN_REPORT_RACE_UNKNOWN_ORIGIN:
pr_err("race at unknown origin, with %s to 0x%px of %zu bytes by %s on cpu %i:\n",
get_access_type(access_type), ptr, size,
get_thread_desc(in_task() ? task_pid_nr(current) : -1),
cpu_id);
get_access_type(ai->access_type), ai->ptr, ai->size,
get_thread_desc(ai->task_pid), ai->cpu_id);
break;
default:
@@ -340,6 +410,9 @@ static bool print_report(const volatile void *ptr, size_t size, int access_type,
stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr,
0);
if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
print_verbose_info(current);
/* Print report footer. */
pr_err("\n");
pr_err("Reported by Kernel Concurrency Sanitizer on:\n");
@@ -349,142 +422,188 @@ static bool print_report(const volatile void *ptr, size_t size, int access_type,
return true;
}
static void release_report(unsigned long *flags, enum kcsan_report_type type)
static void release_report(unsigned long *flags, struct other_info *other_info)
{
if (type == KCSAN_REPORT_RACE_SIGNAL)
other_info.ptr = NULL; /* mark for reuse */
if (other_info)
/*
* Use size to denote valid/invalid, since KCSAN entirely
* ignores 0-sized accesses.
*/
other_info->ai.size = 0;
spin_unlock_irqrestore(&report_lock, *flags);
raw_spin_unlock_irqrestore(&report_lock, *flags);
}
/*
* Depending on the report type either sets other_info and returns false, or
* acquires the matching other_info and returns true. If other_info is not
* required for the report type, simply acquires report_lock and returns true.
* Sets @other_info->task and awaits consumption of @other_info.
*
* Precondition: report_lock is held.
* Postcondition: report_lock is held.
*/
static bool prepare_report(unsigned long *flags, const volatile void *ptr,
size_t size, int access_type, int cpu_id,
enum kcsan_report_type type)
static void set_other_info_task_blocking(unsigned long *flags,
const struct access_info *ai,
struct other_info *other_info)
{
if (type != KCSAN_REPORT_CONSUMED_WATCHPOINT &&
type != KCSAN_REPORT_RACE_SIGNAL) {
/* other_info not required; just acquire report_lock */
spin_lock_irqsave(&report_lock, *flags);
return true;
}
retry:
spin_lock_irqsave(&report_lock, *flags);
switch (type) {
case KCSAN_REPORT_CONSUMED_WATCHPOINT:
if (other_info.ptr != NULL)
break; /* still in use, retry */
other_info.ptr = ptr;
other_info.size = size;
other_info.access_type = access_type;
other_info.task_pid = in_task() ? task_pid_nr(current) : -1;
other_info.cpu_id = cpu_id;
other_info.num_stack_entries = stack_trace_save(other_info.stack_entries, NUM_STACK_ENTRIES, 1);
spin_unlock_irqrestore(&report_lock, *flags);
/*
* The other thread will print the summary; other_info may now
* be consumed.
* We may be instrumenting a code-path where current->state is already
* something other than TASK_RUNNING.
*/
return false;
case KCSAN_REPORT_RACE_SIGNAL:
if (other_info.ptr == NULL)
break; /* no data available yet, retry */
const bool is_running = current->state == TASK_RUNNING;
/*
* To avoid deadlock in case we are in an interrupt here and this is a
* race with a task on the same CPU (KCSAN_INTERRUPT_WATCHER), provide a
* timeout to ensure this works in all contexts.
*
* Await approximately the worst case delay of the reporting thread (if
* we are not interrupted).
*/
int timeout = max(kcsan_udelay_task, kcsan_udelay_interrupt);
other_info->task = current;
do {
if (is_running) {
/*
* First check if this is the other_info we are expecting, i.e.
* matches based on how the watchpoint was encoded.
*/
if (!matching_access((unsigned long)other_info.ptr &
WATCHPOINT_ADDR_MASK,
other_info.size,
(unsigned long)ptr & WATCHPOINT_ADDR_MASK,
size))
break; /* mismatching watchpoint, retry */
if (!matching_access((unsigned long)other_info.ptr,
other_info.size, (unsigned long)ptr,
size)) {
* Let lockdep know the real task is sleeping, to print
* the held locks (recall we turned lockdep off, so
* locking/unlocking @report_lock won't be recorded).
*/
set_current_state(TASK_UNINTERRUPTIBLE);
}
raw_spin_unlock_irqrestore(&report_lock, *flags);
/*
* If the actual accesses do not match, this was a false
* positive due to watchpoint encoding.
* We cannot call schedule() since we also cannot reliably
* determine if sleeping here is permitted -- see in_atomic().
*/
kcsan_counter_inc(
KCSAN_COUNTER_ENCODING_FALSE_POSITIVES);
/* discard this other_info */
release_report(flags, KCSAN_REPORT_RACE_SIGNAL);
return false;
udelay(1);
raw_spin_lock_irqsave(&report_lock, *flags);
if (timeout-- < 0) {
/*
* Abort. Reset @other_info->task to NULL, since it
* appears the other thread is still going to consume
* it. It will result in no verbose info printed for
* this task.
*/
other_info->task = NULL;
break;
}
/*
* If invalid, or neither @ptr nor @current matches, then @other_info
* has been consumed and we may continue. If not, retry.
*/
} while (other_info->ai.size && other_info->ai.ptr == ai->ptr &&
other_info->task == current);
if (is_running)
set_current_state(TASK_RUNNING);
}
/* Populate @other_info; requires that the provided @other_info is not in use. */
static void prepare_report_producer(unsigned long *flags,
const struct access_info *ai,
struct other_info *other_info)
{
raw_spin_lock_irqsave(&report_lock, *flags);
access_type |= other_info.access_type;
if ((access_type & KCSAN_ACCESS_WRITE) == 0) {
/*
* While the address matches, this is not the other_info
* from the thread that consumed our watchpoint, since
* neither this nor the access in other_info is a write.
* It is invalid to continue with the report, since we
* only have information about reads.
*
* This can happen due to concurrent races on the same
* address, with at least 4 threads. To avoid locking up
* other_info and all other threads, we have to consume
* it regardless.
*
* A concrete case to illustrate why we might lock up if
* we do not consume other_info:
*
* We have 4 threads, all accessing the same address
* (or matching address ranges). Assume the following
* watcher and watchpoint consumer pairs:
* write1-read1, read2-write2. The first to populate
* other_info is write2, however, write1 consumes it,
* resulting in a report of write1-write2. This report
* is valid, however, now read1 populates other_info;
* read2-read1 is an invalid conflict, yet, no other
* conflicting access is left. Therefore, we must
* consume read1's other_info.
* The same @other_infos entry cannot be used concurrently, because
* there is a one-to-one mapping to watchpoint slots (@watchpoints in
* core.c), and a watchpoint is only released for reuse after reporting
* is done by the consumer of @other_info. Therefore, it is impossible
* for another concurrent prepare_report_producer() to set the same
* @other_info, and we are guaranteed exclusivity for the @other_infos
* entry pointed to by @other_info.
*
* Since this case is assumed to be rare, it is
* reasonable to omit this report: one of the other
* reports includes information about the same shared
* data, and at this point the likelihood that we
* re-report the same race again is high.
* To check this property holds, size should never be non-zero here,
* because every consumer of struct other_info resets size to 0 in
* release_report().
*/
release_report(flags, KCSAN_REPORT_RACE_SIGNAL);
return false;
WARN_ON(other_info->ai.size);
other_info->ai = *ai;
other_info->num_stack_entries = stack_trace_save(other_info->stack_entries, NUM_STACK_ENTRIES, 2);
if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
set_other_info_task_blocking(flags, ai, other_info);
raw_spin_unlock_irqrestore(&report_lock, *flags);
}
/* Awaits producer to fill @other_info and then returns. */
static bool prepare_report_consumer(unsigned long *flags,
const struct access_info *ai,
struct other_info *other_info)
{
raw_spin_lock_irqsave(&report_lock, *flags);
while (!other_info->ai.size) { /* Await valid @other_info. */
raw_spin_unlock_irqrestore(&report_lock, *flags);
cpu_relax();
raw_spin_lock_irqsave(&report_lock, *flags);
}
/* Should always have a matching access based on watchpoint encoding. */
if (WARN_ON(!matching_access((unsigned long)other_info->ai.ptr & WATCHPOINT_ADDR_MASK, other_info->ai.size,
(unsigned long)ai->ptr & WATCHPOINT_ADDR_MASK, ai->size)))
goto discard;
if (!matching_access((unsigned long)other_info->ai.ptr, other_info->ai.size,
(unsigned long)ai->ptr, ai->size)) {
/*
* Matching & usable access in other_info: keep other_info_lock
* locked, as this thread consumes it to print the full report;
* unlocked in release_report.
* If the actual accesses do not match, this was a false
* positive due to watchpoint encoding.
*/
kcsan_counter_inc(KCSAN_COUNTER_ENCODING_FALSE_POSITIVES);
goto discard;
}
return true;
discard:
release_report(flags, other_info);
return false;
}
/*
* Depending on the report type either sets @other_info and returns false, or
* awaits @other_info and returns true. If @other_info is not required for the
* report type, simply acquires @report_lock and returns true.
*/
static noinline bool prepare_report(unsigned long *flags,
enum kcsan_report_type type,
const struct access_info *ai,
struct other_info *other_info)
{
switch (type) {
case KCSAN_REPORT_CONSUMED_WATCHPOINT:
prepare_report_producer(flags, ai, other_info);
return false;
case KCSAN_REPORT_RACE_SIGNAL:
return prepare_report_consumer(flags, ai, other_info);
default:
BUG();
/* @other_info not required; just acquire @report_lock. */
raw_spin_lock_irqsave(&report_lock, *flags);
return true;
}
spin_unlock_irqrestore(&report_lock, *flags);
goto retry;
}
void kcsan_report(const volatile void *ptr, size_t size, int access_type,
enum kcsan_value_change value_change, int cpu_id,
enum kcsan_report_type type)
enum kcsan_value_change value_change,
enum kcsan_report_type type, int watchpoint_idx)
{
unsigned long flags = 0;
const struct access_info ai = {
.ptr = ptr,
.size = size,
.access_type = access_type,
.task_pid = in_task() ? task_pid_nr(current) : -1,
.cpu_id = raw_smp_processor_id()
};
struct other_info *other_info = type == KCSAN_REPORT_RACE_UNKNOWN_ORIGIN
? NULL : &other_infos[watchpoint_idx];
kcsan_disable_current();
if (WARN_ON(watchpoint_idx < 0 || watchpoint_idx >= ARRAY_SIZE(other_infos)))
goto out;
/*
* With TRACE_IRQFLAGS, lockdep's IRQ trace state becomes corrupted if
@@ -494,22 +613,22 @@ void kcsan_report(const volatile void *ptr, size_t size, int access_type,
*/
lockdep_off();
kcsan_disable_current();
if (prepare_report(&flags, ptr, size, access_type, cpu_id, type)) {
if (prepare_report(&flags, type, &ai, other_info)) {
/*
* Never report if value_change is FALSE, only if it is
* either TRUE or MAYBE. In case of MAYBE, further filtering may
* be done once we know the full stack trace in print_report().
*/
bool reported = value_change != KCSAN_VALUE_CHANGE_FALSE &&
print_report(ptr, size, access_type, value_change, cpu_id, type);
print_report(value_change, type, &ai, other_info);
if (reported && panic_on_warn)
panic("panic_on_warn set ...\n");
release_report(&flags, type);
release_report(&flags, other_info);
}
kcsan_enable_current();
lockdep_on();
out:
kcsan_enable_current();
}
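With the reworked signature, callers pass the index of the watchpoint involved and report.c resolves the matching other_infos[] slot itself. A hedged sketch of a consumer-side call (the wrapper is illustrative; only the kcsan_report() signature is taken from this patch)::

	/* Hypothetical wrapper: report after our watchpoint was consumed. */
	static void example_report_race(const volatile void *ptr, size_t size,
					int access_type, int watchpoint_idx)
	{
		kcsan_report(ptr, size, access_type, KCSAN_VALUE_CHANGE_TRUE,
			     KCSAN_REPORT_RACE_SIGNAL, watchpoint_idx);
	}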
@@ -4,22 +4,36 @@ config HAVE_ARCH_KCSAN
bool
menuconfig KCSAN
bool "KCSAN: dynamic race detector"
bool "KCSAN: dynamic data race detector"
depends on HAVE_ARCH_KCSAN && DEBUG_KERNEL && !KASAN
select STACKTRACE
help
The Kernel Concurrency Sanitizer (KCSAN) is a dynamic race detector,
which relies on compile-time instrumentation, and uses a
watchpoint-based sampling approach to detect races.
The Kernel Concurrency Sanitizer (KCSAN) is a dynamic
data-race detector that relies on compile-time instrumentation.
KCSAN uses a watchpoint-based sampling approach to detect races.
KCSAN's primary purpose is to detect data races. KCSAN can also be
used to check properties, with the help of provided assertions, of
concurrent code where bugs do not manifest as data races.
While KCSAN's primary purpose is to detect data races, it
also provides assertions to check data access constraints.
These assertions can expose bugs that do not manifest as
data races.
See <file:Documentation/dev-tools/kcsan.rst> for more details.
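The assertions referred to above are the ASSERT_EXCLUSIVE_* macros from <linux/kcsan-checks.h>. A minimal sketch, assuming a counter whose writers are serialized by a spinlock while marked readers may run concurrently (the struct and function names are illustrative only)::

	#include <linux/kcsan-checks.h>
	#include <linux/spinlock.h>

	struct counter {
		spinlock_t lock;
		long value;
	};

	static void counter_add(struct counter *c, long n)
	{
		spin_lock(&c->lock);
		/*
		 * Writers hold @lock, so no concurrent writer may exist
		 * here; KCSAN reports a race if that is ever violated.
		 */
		ASSERT_EXCLUSIVE_WRITER(c->value);
		c->value += n;
		spin_unlock(&c->lock);
	}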
if KCSAN
config KCSAN_VERBOSE
bool "Show verbose reports with more information about system state"
depends on PROVE_LOCKING
help
If enabled, reports show more information about the system state that
may help better analyze and debug races. This includes held locks and
IRQ trace events.
While this option should generally be benign, we call into more
external functions on report generation; if a race report is
generated from any one of them, system stability may suffer due to
deadlocks or recursion. If in doubt, say N.
config KCSAN_DEBUG
bool "Debugging of KCSAN internals"
@@ -88,6 +102,17 @@ config KCSAN_SKIP_WATCH_RANDOMIZE
KCSAN_WATCH_SKIP. If false, the chosen value is always
KCSAN_WATCH_SKIP.
config KCSAN_INTERRUPT_WATCHER
bool "Interruptible watchers"
help
If enabled, a task that set up a watchpoint may be interrupted while
delayed. This option will allow KCSAN to detect races between
interrupted tasks and other threads of execution on the same CPU.
Currently disabled by default, because not all safe per-CPU access
primitives and patterns may be accounted for, which could result in
false positives.
config KCSAN_REPORT_ONCE_IN_MS
int "Duration in milliseconds, in which any given race is only reported once"
default 3000
......
@@ -5890,6 +5890,14 @@ sub process {
}
}
# check for data_race without a comment.
if ($line =~ /\bdata_race\s*\(/) {
if (!ctx_has_comment($first_line, $linenr)) {
WARN("DATA_RACE",
"data_race without comment\n" . $herecurr);
}
}
# check for smp_read_barrier_depends and read_barrier_depends
if (!$file && $line =~ /\b(smp_|)read_barrier_depends\s*\(/) {
WARN("READ_BARRIER_DEPENDS",
......
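The new rule above only checks that a comment is present near the annotation; the expected style is a short justification directly above the data_race() expression. A hedged example that passes the check (the variable and helper are hypothetical; data_race() itself is the annotation added by this series)::

	#include <linux/compiler.h>
	#include <linux/types.h>

	/* Hypothetical flag with concurrent writers by design. */
	static int status;

	static bool is_idle(void)
	{
		/* Benign data race: a stale @status value is acceptable. */
		return data_race(status) == 0;
	}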
@@ -478,8 +478,12 @@ static const char *uaccess_safe_builtin[] = {
"__asan_report_store8_noabort",
"__asan_report_store16_noabort",
/* KCSAN */
"__kcsan_check_access",
"kcsan_found_watchpoint",
"kcsan_setup_watchpoint",
"kcsan_check_scoped_accesses",
"kcsan_disable_current",
"kcsan_enable_current_nowarn",
/* KCSAN/TSAN */
"__tsan_func_entry",
"__tsan_func_exit",
......