Commit 80793c34 authored by Ahmed S. Darwish, committed by Peter Zijlstra

seqlock: Introduce seqcount_latch_t

Latch sequence counters are a multiversion concurrency control mechanism
where the seqcount_t counter even/odd value is used to switch between
two copies of protected data. This allows the seqcount_t read path to
safely interrupt its write side critical section (e.g. from NMIs).

Initially, latch sequence counters were implemented as a single write
function on top of plain seqcount_t: raw_write_seqcount_latch(). The
read side was expected to use the plain seqcount_t raw_read_seqcount().

A specialized latch read function, raw_read_seqcount_latch(), was later
added. It became the standardized way for latch read paths. Since the
latched value is consumed through a dependent load, it requires one read
memory barrier less than the plain seqcount_t raw_read_seqcount() API.
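
For illustration, the difference boils down to the following read-side
sketch (the "copies" array and the surrounding names are illustrative,
not from this patch):

	/* Plain seqcount_t read: an explicit barrier is required */
	seq = READ_ONCE(s->sequence);
	smp_rmb();		/* order counter read before data reads */
	val = shared_data;	/* data access does not depend on "seq" */

	/* Latch read: the counter value itself selects the data copy */
	seq = READ_ONCE(s->sequence);	/* == raw_read_seqcount_latch(s) */
	idx = seq & 0x01;
	val = copies[idx];	/* address depends on "seq": no smp_rmb() */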

Only raw_write_seqcount_latch() and raw_read_seqcount_latch() should be
used with latch sequence counters. Having *unique* read and write path
APIs means that latch sequence counters are actually a data type of
their own -- just inappropriately overloading plain seqcount_t.

Introduce seqcount_latch_t. This adds type-safety and ensures that only
the latch-safe APIs can be used with it.

To avoid breaking bisection, let the latch APIs temporarily also accept
plain seqcount_t or seqcount_raw_spinlock_t. Once all call sites have
been converted to seqcount_latch_t, only that new data type will be
allowed.
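
For example, a converted call site would then look like the sketch below
(struct my_data, latch_update() and latch_read() are illustrative names;
the type and APIs are the ones introduced by this patch):

	static struct {
		seqcount_latch_t	seq;
		struct my_data		data[2];	/* two copies */
	} latch = {
		.seq = SEQCNT_LATCH_ZERO(latch.seq),
	};

	/* Writer: switch readers to the stable copy, update the other */
	static void latch_update(const struct my_data *new)
	{
		raw_write_seqcount_latch(&latch.seq);	/* readers -> data[1] */
		latch.data[0] = *new;

		raw_write_seqcount_latch(&latch.seq);	/* readers -> data[0] */
		latch.data[1] = *new;
	}

	/* Reader: safe even if it interrupts latch_update(), e.g. from NMI */
	static struct my_data latch_read(void)
	{
		struct my_data ret;
		unsigned int seq, idx;

		do {
			seq = raw_read_seqcount_latch(&latch.seq);
			idx = seq & 0x01;
			ret = latch.data[idx];
		} while (read_seqcount_latch_retry(&latch.seq, seq));

		return ret;
	}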

References: 9b0fd802 ("seqcount: Add raw_write_seqcount_latch()")
References: 7fc26327 ("seqlock: Introduce raw_read_seqcount_latch()")
References: aadd6e5c ("time/sched_clock: Use raw_read_seqcount_latch()")
Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200827114044.11173-4-a.darwish@linutronix.de
parent 6446a513
Documentation/locking/seqlock.rst:

@@ -139,6 +139,24 @@ with the associated LOCKTYPE lock acquired.
 
 Read path: same as in :ref:`seqcount_t`.
 
+
+.. _seqcount_latch_t:
+
+Latch sequence counters (``seqcount_latch_t``)
+----------------------------------------------
+
+Latch sequence counters are a multiversion concurrency control mechanism
+where the embedded seqcount_t counter even/odd value is used to switch
+between two copies of protected data. This allows the sequence counter
+read path to safely interrupt its own write side critical section.
+
+Use seqcount_latch_t when the write side sections cannot be protected
+from interruption by readers. This is typically the case when the read
+side can be invoked from NMI handlers.
+
+Check `raw_write_seqcount_latch()` for more information.
+
+
 .. _seqlock_t:
 
 Sequential locks (``seqlock_t``)
include/linux/seqlock.h:

@@ -587,34 +587,76 @@ static inline void write_seqcount_t_invalidate(seqcount_t *s)
 	kcsan_nestable_atomic_end();
 }
 
-/**
- * raw_read_seqcount_latch() - pick even/odd seqcount_t latch data copy
- * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants
+/*
+ * Latch sequence counters (seqcount_latch_t)
  *
- * Use seqcount_t latching to switch between two storage places protected
- * by a sequence counter. Doing so allows having interruptible, preemptible,
- * seqcount_t write side critical sections.
+ * A sequence counter variant where the counter even/odd value is used to
+ * switch between two copies of protected data. This allows the read path,
+ * typically NMIs, to safely interrupt the write side critical section.
  *
- * Check raw_write_seqcount_latch() for more details and a full reader and
- * writer usage example.
+ * As the write sections are fully preemptible, no special handling for
+ * PREEMPT_RT is needed.
+ */
+typedef struct {
+	seqcount_t seqcount;
+} seqcount_latch_t;
+
+/**
+ * SEQCNT_LATCH_ZERO() - static initializer for seqcount_latch_t
+ * @seq_name: Name of the seqcount_latch_t instance
+ */
+#define SEQCNT_LATCH_ZERO(seq_name) {					\
+	.seqcount = SEQCNT_ZERO(seq_name.seqcount),			\
+}
+
+/**
+ * seqcount_latch_init() - runtime initializer for seqcount_latch_t
+ * @s: Pointer to the seqcount_latch_t instance
+ */
+static inline void seqcount_latch_init(seqcount_latch_t *s)
+{
+	seqcount_init(&s->seqcount);
+}
+
+/**
+ * raw_read_seqcount_latch() - pick even/odd latch data copy
+ * @s: Pointer to seqcount_t, seqcount_raw_spinlock_t, or seqcount_latch_t
+ *
+ * See raw_write_seqcount_latch() for details and a full reader/writer
+ * usage example.
  *
  * Return: sequence counter raw value. Use the lowest bit as an index for
- * picking which data copy to read. The full counter value must then be
- * checked with read_seqcount_retry().
+ * picking which data copy to read. The full counter must then be checked
+ * with read_seqcount_latch_retry().
  */
-#define raw_read_seqcount_latch(s)					\
-	raw_read_seqcount_t_latch(__seqcount_ptr(s))
-
-static inline int raw_read_seqcount_t_latch(seqcount_t *s)
-{
-	/* Pairs with the first smp_wmb() in raw_write_seqcount_latch() */
-	int seq = READ_ONCE(s->sequence); /* ^^^ */
-	return seq;
-}
+#define raw_read_seqcount_latch(s)					\
+({									\
+	/*								\
+	 * Pairs with the first smp_wmb() in raw_write_seqcount_latch().\
+	 * Due to the dependent load, a full smp_rmb() is not needed.	\
+	 */								\
+	_Generic(*(s),							\
+		 seqcount_t:		  READ_ONCE(((seqcount_t *)s)->sequence), \
+		 seqcount_raw_spinlock_t: READ_ONCE(((seqcount_raw_spinlock_t *)s)->seqcount.sequence), \
+		 seqcount_latch_t:	  READ_ONCE(((seqcount_latch_t *)s)->seqcount.sequence)); \
+})
+
+/**
+ * read_seqcount_latch_retry() - end a seqcount_latch_t read section
+ * @s: Pointer to seqcount_latch_t
+ * @start: count, from raw_read_seqcount_latch()
+ *
+ * Return: true if a read section retry is required, else false
+ */
+static inline int
+read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+{
+	return read_seqcount_retry(&s->seqcount, start);
+}
 
 /**
- * raw_write_seqcount_latch() - redirect readers to even/odd copy
- * @s: Pointer to seqcount_t or any of the seqcount_locktype_t variants
+ * raw_write_seqcount_latch() - redirect latch readers to even/odd copy
+ * @s: Pointer to seqcount_t, seqcount_raw_spinlock_t, or seqcount_latch_t
  *
  * The latch technique is a multiversion concurrency control method that allows
  * queries during non-atomic modifications. If you can guarantee queries never
@@ -633,7 +675,7 @@ static inline int raw_read_seqcount_t_latch(seqcount_t *s)
  * The basic form is a data structure like::
  *
  *	struct latch_struct {
- *		seqcount_t		seq;
+ *		seqcount_latch_t	seq;
  *		struct data_struct	data[2];
  *	};
  *
@@ -643,13 +685,13 @@ static inline int raw_read_seqcount_t_latch(seqcount_t *s)
  *	void latch_modify(struct latch_struct *latch, ...)
  *	{
  *		smp_wmb();	// Ensure that the last data[1] update is visible
- *		latch->seq++;
+ *		latch->seq.sequence++;
  *		smp_wmb();	// Ensure that the seqcount update is visible
  *
  *		modify(latch->data[0], ...);
  *
  *		smp_wmb();	// Ensure that the data[0] update is visible
- *		latch->seq++;
+ *		latch->seq.sequence++;
  *		smp_wmb();	// Ensure that the seqcount update is visible
  *
  *		modify(latch->data[1], ...);
@@ -668,8 +710,8 @@ static inline int raw_read_seqcount_t_latch(seqcount_t *s)
  *		idx = seq & 0x01;
  *		entry = data_query(latch->data[idx], ...);
  *
- *		// read_seqcount_retry() includes needed smp_rmb()
- *	} while (read_seqcount_retry(&latch->seq, seq));
+ *		// This includes needed smp_rmb()
+ *	} while (read_seqcount_latch_retry(&latch->seq, seq));
  *
  *	return entry;
  * }
@@ -693,14 +735,14 @@ static inline int raw_read_seqcount_t_latch(seqcount_t *s)
  * When data is a dynamic data structure; one should use regular RCU
  * patterns to manage the lifetimes of the objects within.
  */
-#define raw_write_seqcount_latch(s)					\
-	raw_write_seqcount_t_latch(__seqcount_ptr(s))
-
-static inline void raw_write_seqcount_t_latch(seqcount_t *s)
-{
-	smp_wmb();	/* prior stores before incrementing "sequence" */
-	s->sequence++;
-	smp_wmb();	/* increment "sequence" before following stores */
-}
+#define raw_write_seqcount_latch(s)					\
+{									\
+	smp_wmb();	/* prior stores before incrementing "sequence" */ \
+	_Generic(*(s),							\
+		 seqcount_t:		  ((seqcount_t *)s)->sequence++, \
+		 seqcount_raw_spinlock_t: ((seqcount_raw_spinlock_t *)s)->seqcount.sequence++, \
+		 seqcount_latch_t:	  ((seqcount_latch_t *)s)->seqcount.sequence++); \
+	smp_wmb();	/* increment "sequence" before following stores */ \
+}
 
 /*
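
Note that the transitional macros above dispatch on the argument's type
via C11 generic selection. A minimal standalone sketch of that pattern
(hypothetical types, not kernel code):

	#include <stdio.h>

	struct plain { unsigned int sequence; };
	struct wrapped { struct plain seqcount; };

	/*
	 * All _Generic branches must type-check for every argument type,
	 * hence the explicit casts -- exactly as in the macros above.
	 */
	#define read_sequence(p) _Generic(*(p),				\
		struct plain:	((struct plain *)(p))->sequence,	\
		struct wrapped:	((struct wrapped *)(p))->seqcount.sequence)

	int main(void)
	{
		struct plain a = { .sequence = 2 };
		struct wrapped b = { .seqcount = { .sequence = 4 } };

		printf("%u %u\n", read_sequence(&a), read_sequence(&b));
		return 0;	/* prints: 2 4 */
	}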