History log of /freebsd-current/sys/sys/smr.h
Revision Date Author Comments
# 95ee2897 16-Aug-2023 Warner Losh <imp@FreeBSD.org>

sys: Remove $FreeBSD$: two-line .h pattern

Remove /^\s*\*\n \*\s+\$FreeBSD\$$\n/


# 4d846d26 10-May-2023 Warner Losh <imp@FreeBSD.org>

spdx: The BSD-2-Clause-FreeBSD identifier is obsolete, drop -FreeBSD

The SPDX folks have obsoleted the BSD-2-Clause-FreeBSD identifier. Catch
up to that fact and revert to their recommended match of BSD-2-Clause.

Discussed with: pfg
MFC After: 3 days
Sponsored by: Netflix


# cd133525 07-Feb-2023 Mark Johnston <markj@FreeBSD.org>

smr: Remove the return value from smr_wait()

This is supposed to be a blocking version of smr_poll(), so there's no
need for a return value. No functional change intended.

MFC after: 1 week
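
A minimal sketch of what the void wrapper can look like after this change,
assuming the existing smr_poll(smr, goal, wait) interface; illustrative, not
a quote of the committed code:

        static inline void
        smr_wait(smr_t smr, smr_seq_t goal)
        {

                /* Block until the goal write sequence has been observed. */
                (void)smr_poll(smr, goal, true);
        }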


# 8694fd33 24-Sep-2022 Mark Johnston <markj@FreeBSD.org>

smr: Fix synchronization in smr_enter()

smr_enter() must publish its observed read sequence number before
issuing any subsequent memory operations. The ordering provided by
atomic_add_acq_int() is insufficient on some platforms, at least on
arm64, because it permits reordering of subsequent loads with the store
to c_seq.

Thus, use atomic_thread_fence_seq_cst() to issue a store-load barrier
after publishing the read sequence number.

On x86, take advantage of the fact that memory operations are not
reordered with locked instructions to improve code density: we can store
the observed read sequence and provide a store-load barrier with a
single operation.

Based on a patch from Pierre Habouzit <pierre@habouzit.net>.

PR: 265974
Reviewed by: alc
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D36370
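
A sketch of the required ordering, using simplified stand-ins for the real
per-CPU and shared SMR state (the *_sketch names are hypothetical); only the
sequence of the three steps matters here:

        struct smr_shared_sketch {
                volatile u_int  s_wr_seq;       /* Global write sequence. */
        };

        struct smr_pcpu_sketch {
                volatile u_int  c_seq;          /* Per-CPU observed read sequence. */
        };

        static inline void
        smr_enter_sketch(struct smr_pcpu_sketch *self, struct smr_shared_sketch *s)
        {

                critical_enter();
                /* Publish the observed read sequence number... */
                atomic_store_int(&self->c_seq, atomic_load_int(&s->s_wr_seq));
                /*
                 * ...then issue a store-load barrier so later protected
                 * loads cannot be reordered ahead of the store to c_seq.
                 */
                atomic_thread_fence_seq_cst();
        }

On x86, as the commit notes, a single locked operation on c_seq can both
publish the sequence and provide the barrier.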


# 3fba8868 06-Mar-2020 Mark Johnston <markj@FreeBSD.org>

Move SMR pointer type definition and access macros to smr_types.h.

The intent is to provide a header that can be included by other headers
without introducing too much pollution. smr.h depends on various
headers and will likely grow over time, but is less likely to be
required by system headers.

Rename SMR_TYPE_DECLARE() to SMR_POINTER():
- One might use SMR to protect more than just pointers; it
  could be used for resizeable arrays, for example, so TYPE seems too
  generic.
- It is useful to be able to define anonymous SMR-protected pointer
  types and the _DECLARE suffix makes that look wrong.

Reviewed by: jeff, mjg, rlibby
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D23988
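
A hedged sketch of the wrapper idea, showing why anonymous declarations are
convenient; the exact macro lives in smr_types.h and may differ in detail:

        /* Wrap the pointer in a struct so all access is mediated by the API. */
        #define SMR_POINTER(type)                                       \
                struct {                                                \
                        type    __ptr;  /* Do not access directly. */   \
                }

        /* An anonymous SMR-protected pointer embedded in another structure. */
        struct foo_sketch {
                SMR_POINTER(struct bar *) f_bar;
        };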


# 561af25f 27-Feb-2020 Jeff Roberson <jeff@FreeBSD.org>

Simplify lazy advance with a 64-bit atomic cmpset.

This provides the potential to force a lazy (tick-based) SMR to advance
when there are blocking waiters by decoupling the wr_seq value from the
ticks value.

Add some missing compiler barriers.

Reviewed by: rlibby
Differential Revision: https://reviews.freebsd.org/D23825
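
A hypothetical sketch of the cmpset loop this enables, assuming 64-bit
atomics (atomic_load_64/atomic_fcmpset_64) are available; the committed code
differs in detail (for example, sequence comparisons must account for
wraparound):

        static uint64_t
        lazy_advance_sketch(volatile uint64_t *wr_seq, uint64_t goal)
        {
                uint64_t old;

                old = atomic_load_64(wr_seq);
                do {
                        /* Never move the write sequence backwards. */
                        if (old >= goal)
                                return (old);
                } while (!atomic_fcmpset_64(wr_seq, &old, goal));
                return (goal);
        }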


# 226dd6db 21-Feb-2020 Jeff Roberson <jeff@FreeBSD.org>

Add an atomic-free tick moderated lazy update variant of SMR.

This enables very cheap read sections with free-to-use latencies and memory
overhead similar to epoch. On a recent AMD platform a read section cost
1 ns vs 5 ns for the default SMR; on Xeon the numbers should be closer to
1 ns vs 11 ns. The memory consumption should be proportional to the product
of the free rate and 2*(1/hz), while normal SMR consumption is proportional
to the product of the free rate and the maximum read section time.

While here refactor the code to make future additions more
straightforward.

Name the overall technique Global Unbound Sequences (GUS) and adjust some
comments accordingly. This helps distinguish discussions of the general
technique (SMR) vs this specific implementation (GUS).

Discussed with: rlibby, markj


# 83bf6ee4 19-Feb-2020 Jeff Roberson <jeff@FreeBSD.org>

Since r357940 it is no longer possible to use a single type cast for all
atomic_*_ptr functions.


# bf7dba0b 19-Feb-2020 Jeff Roberson <jeff@FreeBSD.org>

Type-validating SMR-protected pointer accessors.

This API is intended to provide some measure of safety with SMR-protected
pointers. A struct wrapper provides type checking and a guarantee that
all access is mediated by the API unless abused. All modifying functions
take an assert as an argument to guarantee that the required
synchronization is present.

Reviewed by: kib, markj, mjg
Differential Revision: https://reviews.freebsd.org/D23711
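
A hedged sketch of the accessor pattern described above; the name and
details are illustrative rather than the exact macro from smr_types.h:

        /*
         * The caller passes an assertion expression stating how the access
         * is serialized (for example, that the lock protecting writers is
         * held), and the wrapper's __ptr member is only touched here.
         */
        #define smr_serialized_load_sketch(p, ex) ({                    \
                KASSERT((ex), ("smr_serialized_load_sketch: not serialized")); \
                (__typeof((p)->__ptr))atomic_load_ptr(&(p)->__ptr);     \
        })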


# a4d50e49 13-Feb-2020 Jeff Roberson <jeff@FreeBSD.org>

Add more precise SMR entry asserts.


# a2abae8d 06-Feb-2020 Ryan Libby <rlibby@FreeBSD.org>

smr.h: fix build after r357641

r357641 missed committing the change to sys/sys/smr.h.

Reported by: jkim
Submitted by: jeff
Reviewed by: rlibby
Differential Revision: https://reviews.freebsd.org/D23464


# bc650984 03-Feb-2020 Jeff Roberson <jeff@FreeBSD.org>

Implement a deferred write advancement feature that can be used to further
amortize shared cacheline writes.

Discussed with: rlibby
Differential Revision: https://reviews.freebsd.org/D23462


# 915c367e 31-Jan-2020 Jeff Roberson <jeff@FreeBSD.org>

Add two missing fences with comments describing them. These were found by
inspection and after a lengthy discussion with jhb and kib; the missing
fences had not produced any test failures.

Don't pointer-chase through cpu0's smr. Use the current CPU's smr even when
not in a critical section, to reduce the likelihood of false sharing.


# da6e9935 30-Jan-2020 Jeff Roberson <jeff@FreeBSD.org>

Don't use "All rights reserved" in new copyrights.

Requested by: rgrimes


# d4665eaa 30-Jan-2020 Jeff Roberson <jeff@FreeBSD.org>

Implement a safe memory reclamation feature that is tightly coupled with UMA.

This is in the same family of algorithms as Epoch/QSBR/RCU/PARSEC but is
a unique algorithm. This has 3x the performance of epoch in a write-heavy
workload with less than half of the read-side cost. The memory overhead
is significantly lessened by limiting the free-to-use latency. A synthetic
test uses 1/20th of the memory vs Epoch. There is significant further
discussion in the comments and code review.

This code should be considered experimental. I will write a man page after
it has settled. After further validation the VM will begin using this
feature to permit lockless page lookups.

Both markj and cperciva tested on arm64 at large core counts to verify
fences on weakly ordered architectures. I will commit a stress-testing
tool in a follow-up.

Reviewed by: mmacy, markj, rlibby, hselasky
Discussed with: sbahara
Differential Revision: https://reviews.freebsd.org/D22586
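
A minimal usage sketch of the read and write sides this provides, assuming
an smr_t created elsewhere with smr_create(); illustrative only and not
taken from the review:

        static void
        reader_sketch(smr_t smr)
        {

                smr_enter(smr);
                /* ... load and dereference SMR-protected pointers ... */
                smr_exit(smr);
        }

        static void
        writer_sketch(smr_t smr)
        {
                smr_seq_t goal;

                /* Unlink the object from the data structure, then: */
                goal = smr_advance(smr);
                smr_wait(smr, goal);    /* Wait until existing readers drain. */
                /* The unlinked memory may now be freed or reused. */
        }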