Sequence counters and sequential locks
Introduction
Sequence counters are a reader-writer consistency mechanism with lockless readers (read-only retry loops), and no writer starvation. They are used for data that's rarely written to (e.g. system time), where the reader wants a consistent set of information and is willing to retry if that information changes.
A data set is consistent when the sequence count at the beginning of the read side critical section is even and the same sequence count value is read again at the end of the critical section. The data in the set must be copied out inside the read side critical section. If the sequence count has changed between the start and the end of the critical section, the reader must retry.
Writers increment the sequence count at the start and the end of their critical section. After starting the critical section the sequence count is odd and indicates to the readers that an update is in progress. At the end of the write side critical section the sequence count becomes even again, which lets readers make progress.
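As a rough userspace illustration of this even/odd protocol, the toy model below mimics the reader retry loop and the writer's two increments using C11 atomics. All `toy_*` names are invented for this sketch; it is not the kernel API, and it ignores the preemption and interrupt constraints discussed below.

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy model of the even/odd protocol with C11 atomics.
 * NOT the kernel API; all names here are illustrative only. */
static atomic_uint seq;            /* starts even: no writer active */
static int data_a, data_b;         /* the protected data set */

static unsigned toy_read_begin(void)
{
        unsigned s;
        /* Wait until the count is even: no update in progress. */
        while ((s = atomic_load_explicit(&seq, memory_order_acquire)) & 1)
                ;
        return s;
}

static int toy_read_retry(unsigned start)
{
        atomic_thread_fence(memory_order_acquire);
        /* A changed count means a writer ran: the copied data is stale. */
        return atomic_load_explicit(&seq, memory_order_relaxed) != start;
}

static void toy_write_begin(void)
{
        atomic_fetch_add_explicit(&seq, 1, memory_order_relaxed); /* odd */
        atomic_thread_fence(memory_order_release);
}

static void toy_write_end(void)
{
        atomic_fetch_add_explicit(&seq, 1, memory_order_release); /* even */
}

int toy_demo(void)
{
        unsigned s;
        int a, b;

        toy_write_begin();
        data_a = 1;
        data_b = 2;
        toy_write_end();

        do {
                s = toy_read_begin();
                a = data_a;          /* copy the data out ...        */
                b = data_b;
        } while (toy_read_retry(s)); /* ... and retry if it changed  */

        return (a == 1 && b == 2 && (s & 1) == 0) ? 0 : -1;
}
```

A real kernel user would of course use seqcount_t and the read_seqcount_begin()/read_seqcount_retry() API below; the toy only shows the counter discipline.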
A sequence counter write side critical section must never be preempted or interrupted by read side sections. Otherwise the reader will spin for the entire scheduler tick due to the odd sequence count value and the interrupted writer. If that reader belongs to a real-time scheduling class, it can spin forever and the kernel will livelock.
This mechanism cannot be used if the protected data contains pointers, as the writer can invalidate a pointer that the reader is following.
Sequence counters (seqcount_t)
This is the raw counting mechanism, which does not protect against multiple writers. Write side critical sections must thus be serialized by an external lock.
If the write serialization primitive is not implicitly disabling preemption, preemption must be explicitly disabled before entering the write side section. If the read section can be invoked from hardirq or softirq contexts, interrupts or bottom halves must also be respectively disabled before entering the write section.
If it's desired to automatically handle the sequence counter requirements of writer serialization and non-preemptibility, use Sequential locks (seqlock_t) instead.
Initialization:
    /* dynamic */
    seqcount_t foo_seqcount;
    seqcount_init(&foo_seqcount);

    /* static */
    static seqcount_t foo_seqcount = SEQCNT_ZERO(foo_seqcount);

    /* C99 struct init */
    struct {
            .seq = SEQCNT_ZERO(foo.seq),
    } foo;

Write path:
    /* Serialized context with disabled preemption */

    write_seqcount_begin(&foo_seqcount);

    /* ... [[write-side critical section]] ... */

    write_seqcount_end(&foo_seqcount);
Read path:
    do {
            seq = read_seqcount_begin(&foo_seqcount);

            /* ... [[read-side critical section]] ... */

    } while (read_seqcount_retry(&foo_seqcount, seq));

Sequence counters with associated locks (seqcount_LOCKNAME_t)
As discussed at Sequence counters (seqcount_t), sequence count write side critical sections must be serialized and non-preemptible. This variant of sequence counters associates the lock used for writer serialization at initialization time, which enables lockdep to validate that the write side critical sections are properly serialized.
This lock association is a NOOP if lockdep is disabled and has neither storage nor runtime overhead. If lockdep is enabled, the lock pointer is stored in struct seqcount and lockdep's "lock is held" assertions are injected at the beginning of the write side critical section to validate that it is properly protected.
For lock types which do not implicitly disable preemption, preemption protection is enforced in the write side function.
The following sequence counters with associated locks are defined:
seqcount_spinlock_t
seqcount_raw_spinlock_t
seqcount_rwlock_t
seqcount_mutex_t
seqcount_ww_mutex_t
The sequence counter read and write APIs can take either a plain seqcount_t or any of the seqcount_LOCKNAME_t variants above.
Initialization (replace “LOCKNAME” with one of the supported locks):
    /* dynamic */
    seqcount_LOCKNAME_t foo_seqcount;
    seqcount_LOCKNAME_init(&foo_seqcount, &lock);

    /* static */
    static seqcount_LOCKNAME_t foo_seqcount =
            SEQCNT_LOCKNAME_ZERO(foo_seqcount, &lock);

    /* C99 struct init */
    struct {
            .seq = SEQCNT_LOCKNAME_ZERO(foo.seq, &lock),
    } foo;

Write path: same as in Sequence counters (seqcount_t), while running from a context with the associated write serialization lock acquired.
Read path: same as in Sequence counters (seqcount_t).
Latch sequence counters (seqcount_latch_t)
Latch sequence counters are a multiversion concurrency control mechanism where the embedded seqcount_t counter even/odd value is used to switch between two copies of protected data. This allows the sequence counter read path to safely interrupt its own write side critical section.
Use seqcount_latch_t when the write side sections cannot be protected from interruption by readers. This is typically the case when the read side can be invoked from NMI handlers.
Check write_seqcount_latch() for more information.
Sequential locks (seqlock_t)
This contains the Sequence counters (seqcount_t) mechanism discussed earlier, plus an embedded spinlock for writer serialization and non-preemptibility.
If the read side section can be invoked from hardirq or softirq context, use the write side function variants which disable interrupts or bottom halves respectively.
Initialization:
    /* dynamic */
    seqlock_t foo_seqlock;
    seqlock_init(&foo_seqlock);

    /* static */
    static DEFINE_SEQLOCK(foo_seqlock);

    /* C99 struct init */
    struct {
            .seql = __SEQLOCK_UNLOCKED(foo.seql)
    } foo;

Write path:
    write_seqlock(&foo_seqlock);

    /* ... [[write-side critical section]] ... */

    write_sequnlock(&foo_seqlock);
Read path, three categories:
Normal Sequence readers which never block a writer but they must retry if a writer is in progress, by detecting change in the sequence number. Writers do not wait for a sequence reader:
    do {
            seq = read_seqbegin(&foo_seqlock);

            /* ... [[read-side critical section]] ... */

    } while (read_seqretry(&foo_seqlock, seq));

Locking readers which will wait if a writer or another locking reader is in progress. A locking reader in progress will also block a writer from entering its critical section. This read lock is exclusive. Unlike rwlock_t, only one locking reader can acquire it:
    read_seqlock_excl(&foo_seqlock);

    /* ... [[read-side critical section]] ... */

    read_sequnlock_excl(&foo_seqlock);
Conditional lockless reader (as in 1), or locking reader (as in 2), according to a passed marker. This is used to avoid lockless readers starvation (too many retry loops) in case of a sharp spike in write activity. First, a lockless read is tried (even marker passed). If that trial fails (an odd sequence counter is returned, which is then used as the next iteration's marker), the lockless read is transformed to a full locking read and no retry loop is necessary, for example:
    /* marker; even initialization */
    int seq = 0;
    do {
            read_seqbegin_or_lock(&foo_seqlock, &seq);

            /* ... [[read-side critical section]] ... */

    } while (need_seqretry(&foo_seqlock, seq));
    done_seqretry(&foo_seqlock, seq);
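The escalation from a lockless to a locking reader can be mimicked in userspace; the sketch below is a toy model built from a pthread mutex and an even/odd counter. All `toy_*` names are invented for illustration and the marker handling is simplified relative to the real read_seqbegin_or_lock()/need_seqretry() API.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

/* Toy userspace model of a seqlock: an even/odd counter plus a mutex
 * for writers and "locking readers". NOT the kernel API. */
static pthread_mutex_t toy_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_uint toy_count;
static int toy_value;

static void toy_write(int v)
{
        pthread_mutex_lock(&toy_lock);
        atomic_fetch_add_explicit(&toy_count, 1, memory_order_relaxed); /* odd */
        toy_value = v;
        atomic_fetch_add_explicit(&toy_count, 1, memory_order_release); /* even */
        pthread_mutex_unlock(&toy_lock);
}

static void toy_begin_or_lock(int *seq)
{
        if (!(*seq & 1))        /* even marker: lockless attempt */
                *seq = (int)(atomic_load_explicit(&toy_count,
                                memory_order_acquire) & ~1u);
        else                    /* odd marker: full locking read */
                pthread_mutex_lock(&toy_lock);
}

static int toy_need_retry(int *seq)
{
        if (*seq & 1)
                return 0;       /* locking reader: always consistent */
        if (atomic_load_explicit(&toy_count,
                        memory_order_acquire) == (unsigned)*seq)
                return 0;       /* lockless read was consistent */
        *seq = 1;               /* escalate: odd marker for next pass */
        return 1;
}

static void toy_done(int seq)
{
        if (seq & 1)
                pthread_mutex_unlock(&toy_lock);
}

int toy_read(void)
{
        int seq = 0, v;         /* even init: try lockless first */

        do {
                toy_begin_or_lock(&seq);
                v = toy_value;
        } while (toy_need_retry(&seq));
        toy_done(seq);
        return v;
}
```

The design point is the one the text makes: a second failed lockless pass is impossible, because the odd marker turns the next iteration into an exclusive locked read that cannot be invalidated.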
API documentation
- seqcount_init
seqcount_init(s)
runtime initializer for seqcount_t
Parameters
s: Pointer to the seqcount_t instance
- SEQCNT_ZERO
SEQCNT_ZERO(name)
static initializer for seqcount_t
Parameters
name: Name of the seqcount_t instance
- __read_seqcount_begin
__read_seqcount_begin(s)
begin a seqcount_t read section
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Return
count to be passed to read_seqcount_retry()
- raw_read_seqcount_begin
raw_read_seqcount_begin(s)
begin a seqcount_t read section w/o lockdep
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Return
count to be passed to read_seqcount_retry()
- read_seqcount_begin
read_seqcount_begin(s)
begin a seqcount_t read critical section
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Return
count to be passed to read_seqcount_retry()
- raw_read_seqcount
raw_read_seqcount(s)
read the raw seqcount_t counter value
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Description
raw_read_seqcount opens a read critical section of the given seqcount_t, without any lockdep checking, and without checking or masking the sequence counter LSB. Calling code is responsible for handling that.
Return
count to be passed to read_seqcount_retry()
- raw_seqcount_try_begin
raw_seqcount_try_begin(s, start)
begin a seqcount_t read critical section w/o lockdep and w/o counter stabilization
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
start: count to be passed to read_seqcount_retry()
Description
Similar to raw_seqcount_begin(), except it enables eliding the critical section entirely if odd, instead of doing the speculation knowing it will fail.
Useful when counter stabilization is more or less equivalent to taking the lock and there is a slowpath that does that.
If true, start will be set to the (even) sequence count read.
Return
true when a read critical section is started.
- raw_seqcount_begin
raw_seqcount_begin(s)
begin a seqcount_t read critical section w/o lockdep and w/o counter stabilization
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Description
raw_seqcount_begin opens a read critical section of the given seqcount_t. Unlike read_seqcount_begin(), this function will not wait for the count to stabilize. If a writer is active when it begins, it will fail the read_seqcount_retry() at the end of the read critical section instead of stabilizing at the beginning of it.
Use this only in special kernel hot paths where the read section is small and has a high probability of success through other external means. It will save a single branching instruction.
Return
count to be passed to read_seqcount_retry()
- __read_seqcount_retry
__read_seqcount_retry(s, start)
end a seqcount_t read section w/o barrier
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
start: count, from read_seqcount_begin()
Description
__read_seqcount_retry is like read_seqcount_retry, but has no smp_rmb() barrier. Callers should ensure that smp_rmb() or equivalent ordering is provided before actually loading any of the variables that are to be protected in this critical section.
Use carefully, only in critical code, and comment how the barrier is provided.
Return
true if a read section retry is required, else false
- read_seqcount_retry
read_seqcount_retry(s, start)
end a seqcount_t read critical section
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
start: count, from read_seqcount_begin()
Description
read_seqcount_retry closes the read critical section of given seqcount_t. If the critical section was invalid, it must be ignored (and typically retried).
Return
true if a read section retry is required, else false
- raw_write_seqcount_begin
raw_write_seqcount_begin(s)
start a seqcount_t write section w/o lockdep
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Context
check write_seqcount_begin()
- raw_write_seqcount_end
raw_write_seqcount_end(s)
end a seqcount_t write section w/o lockdep
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Context
check write_seqcount_end()
- write_seqcount_begin_nested
write_seqcount_begin_nested(s, subclass)
start a seqcount_t write section with custom lockdep nesting level
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
subclass: lockdep nesting level
Description
See Runtime locking correctness validator
Context
check write_seqcount_begin()
- write_seqcount_begin
write_seqcount_begin(s)
start a seqcount_t write side critical section
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Context
Sequence counter write side sections must be serialized and non-preemptible. Preemption will be automatically disabled if and only if the seqcount write serialization lock is associated, and preemptible. If readers can be invoked from hardirq or softirq context, interrupts or bottom halves must be respectively disabled.
- write_seqcount_end
write_seqcount_end(s)
end a seqcount_t write side critical section
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Context
Preemption will be automatically re-enabled if and only if the seqcount write serialization lock is associated, and preemptible.
- raw_write_seqcount_barrier
raw_write_seqcount_barrier(s)
do a seqcount_t write barrier
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Description
This can be used to provide an ordering guarantee instead of the usual consistency guarantee. It is one wmb cheaper, because it can collapse the two back-to-back wmb()s.
Note that writes surrounding the barrier should be declared atomic (e.g. via WRITE_ONCE): a) to ensure the writes become visible to other threads atomically, avoiding compiler optimizations; b) to document which writes are meant to propagate to the reader critical section. This is necessary because neither writes before nor after the barrier are enclosed in a seq-writer critical section that would ensure readers are aware of ongoing writes:
    seqcount_t seq;
    bool X = true, Y = false;

    void read(void)
    {
            bool x, y;
            int s;

            do {
                    s = read_seqcount_begin(&seq);

                    x = X; y = Y;

            } while (read_seqcount_retry(&seq, s));

            BUG_ON(!x && !y);
    }

    void write(void)
    {
            WRITE_ONCE(Y, true);

            raw_write_seqcount_barrier(&seq);

            WRITE_ONCE(X, false);
    }

- write_seqcount_invalidate
write_seqcount_invalidate(s)
invalidate in-progress seqcount_t read side operations
Parameters
s: Pointer to seqcount_t or any of the seqcount_LOCKNAME_t variants
Description
After write_seqcount_invalidate, no seqcount_t read side operations will complete successfully and see data older than this.
- SEQCNT_LATCH_ZERO
SEQCNT_LATCH_ZERO(seq_name)
static initializer for seqcount_latch_t
Parameters
seq_name: Name of the seqcount_latch_t instance
- seqcount_latch_init
seqcount_latch_init(s)
runtime initializer for seqcount_latch_t
Parameters
s: Pointer to the seqcount_latch_t instance
- unsigned raw_read_seqcount_latch(const seqcount_latch_t *s)
pick even/odd latch data copy
Parameters
const seqcount_latch_t *s: Pointer to seqcount_latch_t
Description
See raw_write_seqcount_latch() for details and a full reader/writer usage example.
Return
sequence counter raw value. Use the lowest bit as an index for picking which data copy to read. The full counter must then be checked with raw_read_seqcount_latch_retry().
- unsigned read_seqcount_latch(const seqcount_latch_t *s)
pick even/odd latch data copy
Parameters
const seqcount_latch_t *s: Pointer to seqcount_latch_t
Description
See write_seqcount_latch() for details and a full reader/writer usage example.
Return
sequence counter raw value. Use the lowest bit as an index for picking which data copy to read. The full counter must then be checked with read_seqcount_latch_retry().
- int raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
end a seqcount_latch_t read section
Parameters
const seqcount_latch_t *s: Pointer to seqcount_latch_t
unsigned start: count, from raw_read_seqcount_latch()
Return
true if a read section retry is required, else false
- int read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
end a seqcount_latch_t read section
Parameters
const seqcount_latch_t *s: Pointer to seqcount_latch_t
unsigned start: count, from read_seqcount_latch()
Return
true if a read section retry is required, else false
- void raw_write_seqcount_latch(seqcount_latch_t *s)
redirect latch readers to even/odd copy
Parameters
seqcount_latch_t *s: Pointer to seqcount_latch_t
- void write_seqcount_latch_begin(seqcount_latch_t *s)
redirect latch readers to odd copy
Parameters
seqcount_latch_t *s: Pointer to seqcount_latch_t
Description
The latch technique is a multiversion concurrency control method that allows queries during non-atomic modifications. If you can guarantee queries never interrupt the modification -- e.g. the concurrency is strictly between CPUs -- you most likely do not need this.
Where the traditional RCU/lockless data structures rely on atomic modifications to ensure queries observe either the old or the new state, the latch allows the same for non-atomic updates. The trade-off is doubling the cost of storage; we have to maintain two copies of the entire data structure.
Very simply put: we first modify one copy and then the other. This ensures there is always one copy in a stable state, ready to give us an answer.
The basic form is a data structure like:
    struct latch_struct {
            seqcount_latch_t seq;
            struct data_struct data[2];
    };

Where a modification, which is assumed to be externally serialized, does the following:
    void latch_modify(struct latch_struct *latch, ...)
    {
            write_seqcount_latch_begin(&latch->seq);
            modify(latch->data[0], ...);
            write_seqcount_latch(&latch->seq);
            modify(latch->data[1], ...);
            write_seqcount_latch_end(&latch->seq);
    }

The query will have a form like:
    struct entry *latch_query(struct latch_struct *latch, ...)
    {
            struct entry *entry;
            unsigned seq, idx;

            do {
                    seq = read_seqcount_latch(&latch->seq);
                    idx = seq & 0x01;
                    entry = data_query(latch->data[idx], ...);

                    // This includes needed smp_rmb()
            } while (read_seqcount_latch_retry(&latch->seq, seq));

            return entry;
    }

So during the modification, queries are first redirected to data[1]. Then we modify data[0]. When that is complete, we redirect queries back to data[0] and we can modify data[1].
NOTE
The non-requirement for atomic modifications does _NOT_ include the publishing of new entries in the case where data is a dynamic data structure.
An iteration might start in data[0] and get suspended long enough to miss an entire modification sequence; once it resumes, it might observe the new entry.
NOTE2:
When data is a dynamic data structure, one should use regular RCU patterns to manage the lifetimes of the objects within.
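To make the two-copy switching concrete, here is a userspace sketch of the latch idea using C11 atomics. The `toy_latch_*` names are invented for illustration; this is not the kernel seqcount_latch_t API, and the writer is assumed to be externally serialized, as in the pattern above.

```c
#include <assert.h>
#include <stdatomic.h>

/* Toy userspace model of the latch technique: two copies of the data,
 * with the counter's lowest bit selecting which copy readers use.
 * Illustrative only -- NOT the kernel seqcount_latch_t API. */
static atomic_uint latch_seq;
static int latch_data[2];

/* Writer: assumed externally serialized. */
static void toy_latch_modify(int val)
{
        /* Counter becomes odd: redirect readers to data[1] (old copy). */
        atomic_fetch_add_explicit(&latch_seq, 1, memory_order_release);
        latch_data[0] = val;
        /* Counter becomes even: redirect readers to data[0] (new copy). */
        atomic_fetch_add_explicit(&latch_seq, 1, memory_order_release);
        latch_data[1] = val;
}

static int toy_latch_query(void)
{
        unsigned seq, idx;
        int v;

        do {
                seq = atomic_load_explicit(&latch_seq,
                                           memory_order_acquire);
                idx = seq & 0x01;       /* pick the stable copy */
                v = latch_data[idx];
                atomic_thread_fence(memory_order_acquire);
        } while (atomic_load_explicit(&latch_seq,
                                      memory_order_relaxed) != seq);
        return v;
}
```

Note the reader never spins waiting for an even count: whichever copy the low bit selects is stable, and the retry only guards against a full modification cycle racing with the read.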
- void write_seqcount_latch(seqcount_latch_t *s)
redirect latch readers to even copy
Parameters
seqcount_latch_t *s: Pointer to seqcount_latch_t
- void write_seqcount_latch_end(seqcount_latch_t *s)
end a seqcount_latch_t write section
Parameters
seqcount_latch_t *s: Pointer to seqcount_latch_t
Description
Marks the end of a seqcount_latch_t writer section, after all copies of the latch-protected data have been updated.
- seqlock_init
seqlock_init(sl)
dynamic initializer for seqlock_t
Parameters
sl: Pointer to the seqlock_t instance
- DEFINE_SEQLOCK
DEFINE_SEQLOCK(sl)
Define a statically allocated seqlock_t
Parameters
sl: Name of the seqlock_t instance
- unsigned read_seqbegin(const seqlock_t *sl)
start a seqlock_t read side critical section
Parameters
const seqlock_t *sl: Pointer to seqlock_t
Return
count, to be passed to read_seqretry()
- unsigned read_seqretry(const seqlock_t *sl, unsigned start)
end a seqlock_t read side section
Parameters
const seqlock_t *sl: Pointer to seqlock_t
unsigned start: count, from read_seqbegin()
Description
read_seqretry closes the read side critical section of given seqlock_t. If the critical section was invalid, it must be ignored (and typically retried).
Return
true if a read section retry is required, else false
- void write_seqlock(seqlock_t *sl)
start a seqlock_t write side critical section
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
write_seqlock opens a write side critical section for the given seqlock_t. It also implicitly acquires the spinlock_t embedded inside that sequential lock. All seqlock_t write side sections are thus automatically serialized and non-preemptible.
Context
if the seqlock_t read section, or other write side critical sections, can be invoked from hardirq or softirq contexts, use the _irqsave or _bh variants of this function instead.
- void write_sequnlock(seqlock_t *sl)
end a seqlock_t write side critical section
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
write_sequnlock closes the (serialized and non-preemptible) write side critical section of given seqlock_t.
- void write_seqlock_bh(seqlock_t *sl)
start a softirqs-disabled seqlock_t write section
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
_bh variant of write_seqlock(). Use only if the read side section, or other write side sections, can be invoked from softirq contexts.
- void write_sequnlock_bh(seqlock_t *sl)
end a softirqs-disabled seqlock_t write section
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
write_sequnlock_bh closes the serialized, non-preemptible, and softirqs-disabled, seqlock_t write side critical section opened with write_seqlock_bh().
- void write_seqlock_irq(seqlock_t *sl)
start a non-interruptible seqlock_t write section
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
_irq variant of write_seqlock(). Use only if the read side section, or other write sections, can be invoked from hardirq contexts.
- void write_sequnlock_irq(seqlock_t *sl)
end a non-interruptible seqlock_t write section
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
write_sequnlock_irq closes the serialized and non-interruptible seqlock_t write side section opened with write_seqlock_irq().
- write_seqlock_irqsave
write_seqlock_irqsave(lock, flags)
start a non-interruptible seqlock_t write section
Parameters
lock: Pointer to seqlock_t
flags: Stack-allocated storage for saving caller's local interrupt state, to be passed to write_sequnlock_irqrestore()
Description
_irqsave variant of write_seqlock(). Use it only if the read side section, or other write sections, can be invoked from hardirq context.
- void write_sequnlock_irqrestore(seqlock_t *sl, unsigned long flags)
end non-interruptible seqlock_t write section
Parameters
seqlock_t *sl: Pointer to seqlock_t
unsigned long flags: Caller's saved interrupt state, from write_seqlock_irqsave()
Description
write_sequnlock_irqrestore closes the serialized and non-interruptible seqlock_t write section previously opened with write_seqlock_irqsave().
- void read_seqlock_excl(seqlock_t *sl)
begin a seqlock_t locking reader section
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
read_seqlock_excl opens a seqlock_t locking reader critical section. A locking reader exclusively locks out both other writers and other locking readers, but it does not update the embedded sequence number.
Locking readers act like a normal spin_lock()/spin_unlock().
The opened read section must be closed with read_sequnlock_excl().
Context
if the seqlock_t write section, or other read sections, can be invoked from hardirq or softirq contexts, use the _irqsave or _bh variant of this function instead.
- void read_sequnlock_excl(seqlock_t *sl)
end a seqlock_t locking reader critical section
Parameters
seqlock_t *sl: Pointer to seqlock_t
- void read_seqlock_excl_bh(seqlock_t *sl)
start a seqlock_t locking reader section with softirqs disabled
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
_bh variant of read_seqlock_excl(). Use this variant only if the seqlock_t write side section, or other read sections, can be invoked from softirq contexts.
- void read_sequnlock_excl_bh(seqlock_t *sl)
stop a seqlock_t softirq-disabled locking reader section
Parameters
seqlock_t *sl: Pointer to seqlock_t
- void read_seqlock_excl_irq(seqlock_t *sl)
start a non-interruptible seqlock_t locking reader section
Parameters
seqlock_t *sl: Pointer to seqlock_t
Description
_irq variant of read_seqlock_excl(). Use this only if the seqlock_t write side section, or other read sections, can be invoked from a hardirq context.
- void read_sequnlock_excl_irq(seqlock_t *sl)
end an interrupts-disabled seqlock_t locking reader section
Parameters
seqlock_t *sl: Pointer to seqlock_t
- read_seqlock_excl_irqsave
read_seqlock_excl_irqsave(lock, flags)
start a non-interruptible seqlock_t locking reader section
Parameters
lock: Pointer to seqlock_t
flags: Stack-allocated storage for saving caller's local interrupt state, to be passed to read_sequnlock_excl_irqrestore()
Description
_irqsave variant of read_seqlock_excl(). Use this only if the seqlock_t write side section, or other read sections, can be invoked from a hardirq context.
- void read_sequnlock_excl_irqrestore(seqlock_t *sl, unsigned long flags)
end non-interruptible seqlock_t locking reader section
Parameters
seqlock_t *sl: Pointer to seqlock_t
unsigned long flags: Caller saved interrupt state, from read_seqlock_excl_irqsave()
- void read_seqbegin_or_lock(seqlock_t *lock, int *seq)
begin a seqlock_t lockless or locking reader
Parameters
seqlock_t *lock: Pointer to seqlock_t
int *seq: Marker and return parameter. If the passed value is even, the reader will become a lockless seqlock_t reader as in read_seqbegin(). If the passed value is odd, the reader will become a locking reader as in read_seqlock_excl(). In the first call to this function, the caller must initialize and pass an even value to seq; this way, a lockless read can be optimistically tried first.
Description
read_seqbegin_or_lock is an API designed to optimistically try a normal lockless seqlock_t read section first. If an odd counter is found, the lockless read trial has failed, and the next read iteration transforms itself into a full seqlock_t locking reader.
This is typically used to avoid seqlock_t lockless readers starvation (too many retry loops) in the case of a sharp spike in write side activity.
Check Sequence counters and sequential locks for template example code.
Context
if the seqlock_t write section, or other read sections, can be invoked from hardirq or softirq contexts, use the _irqsave or _bh variant of this function instead.
Return
the encountered sequence counter value, through the seq parameter, which is overloaded as a return parameter. This returned value must be checked with need_seqretry(). If the read section needs to be retried, this returned value must also be passed as the seq parameter of the next read_seqbegin_or_lock() iteration.
- int need_seqretry(seqlock_t *lock, int seq)
validate seqlock_t "locking or lockless" read section
Parameters
seqlock_t *lock: Pointer to seqlock_t
int seq: sequence count, from read_seqbegin_or_lock()
Return
true if a read section retry is required, false otherwise
- void done_seqretry(seqlock_t *lock, int seq)
end seqlock_t "locking or lockless" reader section
Parameters
seqlock_t *lock: Pointer to seqlock_t
int seq: count, from read_seqbegin_or_lock()
Description
done_seqretry finishes the seqlock_t read side critical section started with read_seqbegin_or_lock() and validated by need_seqretry().
- unsigned long read_seqbegin_or_lock_irqsave(seqlock_t *lock, int *seq)
begin a seqlock_t lockless reader, or a non-interruptible locking reader
Parameters
seqlock_t *lock: Pointer to seqlock_t
int *seq: Marker and return parameter. Check read_seqbegin_or_lock().
Description
This is the _irqsave variant of read_seqbegin_or_lock(). Use it only if the seqlock_t write section, or other read sections, can be invoked from hardirq context.
Return
1. The saved local interrupts state in case of a locking reader, to be passed to done_seqretry_irqrestore().
2. The encountered sequence counter value, returned through seq overloaded as a return parameter. Check read_seqbegin_or_lock().
Note
Interrupts will be disabled only for "locking reader" mode.
- void done_seqretry_irqrestore(seqlock_t *lock, int seq, unsigned long flags)
end a seqlock_t lockless reader, or a non-interruptible locking reader section
Parameters
seqlock_t *lock: Pointer to seqlock_t
int seq: Count, from read_seqbegin_or_lock_irqsave()
unsigned long flags: Caller's saved local interrupt state in case of a locking reader, also from read_seqbegin_or_lock_irqsave()
Description
This is the _irqrestore variant of done_seqretry(). The read section must've been opened with read_seqbegin_or_lock_irqsave(), and validated by need_seqretry().