History log of /freebsd-current/sys/kern/kern_mutex.c
Revision Date Author Comments
# 6b353101 18-Jan-2024 Olivier Certner <olce@FreeBSD.org>

SCHEDULER_STOPPED(): Rely on a global variable

A commit from 2012 (5d7380f8e34f0083, r228424) introduced
'td_stopsched', on the grounds that a global variable would cause all
CPUs to have a copy of it in their cache, and consequently of all other
variables sharing the same cache line.

This is really a problem only if that cache line sees relatively
frequent modifications. This was unlikely to be the case back then,
because the nearby variables were almost never modified either. In any
case, today we have a new tool at our disposal to ensure that this
variable goes into a read-mostly section containing frequently-accessed
variables ('__read_frequently'). Most of the cache lines covering this
section are likely to always be in every CPU cache. This makes the
second reason stated in the commit message (ensuring the field is in the
same cache line as some lock-related fields, since these are accessed in
close proximity) moot, as well as the second order effect of requiring
an additional line to be present in the cache (the one containing the
new 'scheduler_stopped' boolean, see below).

From a purely logical point of view, whether the scheduler is stopped is
a global state and is certainly not a per-thread quality.

Consequently, remove 'td_stopsched', which immediately frees a byte in
'struct thread'. Currently, the latter's size (and layout) stays
unchanged, but some of the later re-orderings will probably benefit from
this removal. Available bytes at the original position for
'td_stopsched' have been made explicit with the addition of the
'_td_pad0' member.

Store the global state in the new 'scheduler_stopped' boolean, which is
annotated with '__read_frequently'.

Replace uses of SCHEDULER_STOPPED_TD() with SCHEDULER_STOPPED() and
remove the former as it is now unnecessary.
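
A minimal sketch of the resulting pattern (hedged; the exact in-tree
declarations may differ slightly):

    /* Global, frequently-read flag replacing the per-thread td_stopsched. */
    bool __read_frequently scheduler_stopped;

    /* The check is now a single load of a read-mostly global. */
    #define SCHEDULER_STOPPED() __predict_false(scheduler_stopped)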

Reviewed by: markj, kib
Approved by: markj (mentor)
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D43572


# 7530de77 22-Oct-2023 Mateusz Guzik <mjg@FreeBSD.org>

thread: add td_wantedlock

This enables obtaining lock information threads are actively waiting for
while sampling. Without the change one would only see a bunch of calls
to lock_delay(), where the stacktrace often does not reveal what the
lock might be.

Note this is not the same as lock profiling, which only produces data
for cases that end up waiting for locks.

struct thread already has a td_lockname field, but I did not use it
because it has different semantics -- it is only set while the thread is
off CPU. At the same time it could not be converted to hold a lock_object
pointer because non-curthread access would no longer be guaranteed to be
safe -- by the time it reads the pointer the lock might have been taken,
released and the object containing it freed.

Sample usage with dtrace:
rm /tmp/out.kern_stacks ; dtrace -x stackframes=100 -n 'profile-997 { @[curthread->td_wantedlock != NULL ? stringof(curthread->td_wantedlock->lo_name) : stringof("\n"), stack()] = count(); }' -o /tmp/out.kern_stacks

This also facilitates addition of lock information to traces produced by
hwpmc.

Note: spinlocks are not supported at the moment.
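
An illustrative sketch of the pattern (hedged; the real acquisition paths
differ in detail):

    /* Publish the lock we are about to wait for, so a sampling profiler
     * reading curthread can attribute the spin to a named lock. */
    curthread->td_wantedlock = &m->lock_object;
    /* ... spin until the mutex is acquired ... */
    curthread->td_wantedlock = NULL;        /* no longer waiting */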

Sponsored by: Rubicon Communications, LLC ("Netgate")


# 685dc743 16-Aug-2023 Warner Losh <imp@FreeBSD.org>

sys: Remove $FreeBSD$: one-line .c pattern

Remove /^[\s*]*__FBSDID\("\$FreeBSD\$"\);?\s*\n/


# 4730a897 02-Sep-2021 Alexander Motin <mav@FreeBSD.org>

callout(9): Allow spin locks use with callout_init_mtx().

Implement lock_spin()/unlock_spin() lock class methods, moving the
assertion to _sleep() instead. Change assertions in callout(9) to
allow spin locks for both regular and C_DIRECT_EXEC cases. In fact, for
C_DIRECT_EXEC callouts spin locks are the only locks allowed.

As the first use case, allow taskqueue_enqueue_timeout() to be used on
fast task queues. This actually becomes more efficient, since C_DIRECT_EXEC
avoids extra context switches in callout(9).
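
A hedged usage sketch of what the change permits (the softc, callback and
period are hypothetical):

    static struct mtx sc_mtx;
    static struct callout sc_co;

    mtx_init(&sc_mtx, "sc timer", NULL, MTX_SPIN);
    callout_init_mtx(&sc_co, &sc_mtx, 0);   /* spin lock now accepted */
    /* C_DIRECT_EXEC runs the handler from interrupt context, where only
     * spin locks may be taken. */
    callout_reset_sbt(&sc_co, SBT_1MS, 0, sc_timeout, sc, C_DIRECT_EXEC);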

MFC after: 2 weeks
Reviewed by: hselasky
Differential Revision: https://reviews.freebsd.org/D31778


# 42862413 02-Aug-2021 Eric van Gyzen <vangyzen@FreeBSD.org>

Fix lockstat:::thread-spin dtrace probe with LOCK_PROFILING

The spinning start time is missing from the calculation due to a
misplaced #endif. Move the #endif back where it belongs.

Submitted by: Alexander Alexeev <aalexeev@isilon.com>
Reviewed by: bdrewery, mjg
MFC after: 1 week
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D31384


# 6a467cc5 23-May-2021 Mateusz Guzik <mjg@FreeBSD.org>

lockprof: pass lock type as an argument instead of reading the spin flag


# f90d57b8 23-Nov-2020 Mateusz Guzik <mjg@FreeBSD.org>

locks: push lock_delay_arg_init calls down

Minor cleanup to skip doing them when recursing on locks and so that
they can act on the found lock value if need be.


# bd66a075 04-Aug-2020 Mateusz Guzik <mjg@FreeBSD.org>

mtx: add mtx_wait_unlocked


# c795344f 23-Jul-2020 Mateusz Guzik <mjg@FreeBSD.org>

locks: fix a long standing bug for primitives with kdtrace but without spinning

In such a case the second argument to lock_delay_arg_init was NULL,
which immediately caused a null pointer dereference.

Since the structure is only used for the spin count, provide a dedicated
routine to initialize it.
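
A sketch of the shape of the fix (hedged; assuming the dedicated
initializer merely zeroes the counters):

    /* No lock_delay_config dereference: primitives without spinning
     * only need the counters cleared. */
    static inline void
    lock_delay_arg_init_noadaptive(struct lock_delay_arg *la)
    {
            la->delay = 0;
            la->spin_cnt = 0;
    }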

Reported by: andrew


# 7029da5c 26-Feb-2020 Pawel Biernacki <kaktus@FreeBSD.org>

Mark more nodes as CTLFLAG_MPSAFE or CTLFLAG_NEEDGIANT (17 of many)

r357614 added CTLFLAG_NEEDGIANT to make it easier to find nodes that are
still not MPSAFE (or already are but aren’t properly marked).
Use it in preparation for a general review of all nodes.

This is a non-functional change that adds annotations to SYSCTL_NODE and
SYSCTL_PROC nodes using one of the soon-to-be-required flags.

Mark all obvious cases as MPSAFE. All entries that haven't been marked
as MPSAFE before are marked as NEEDGIANT by default.

Approved by: kib (mentor, blanket)
Commented by: kib, gallatin, melifaro
Differential Revision: https://reviews.freebsd.org/D23718


# 879e0604 11-Jan-2020 Mateusz Guzik <mjg@FreeBSD.org>

Add KERNEL_PANICKED macro for use in place of direct panicstr tests
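
A plausible sketch of the macro (hedged; modeled on the direct tests it
replaces):

    /* Replaces open-coded 'panicstr != NULL' checks. */
    #define KERNEL_PANICKED() __predict_false(panicstr != NULL)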


# 2e77cad1 04-Jan-2020 Mateusz Guzik <mjg@FreeBSD.org>

locks: add default delay struct

Use it for all primitives. This makes everything fit in 8 bytes.


# 6b8dd26e 04-Jan-2020 Mateusz Guzik <mjg@FreeBSD.org>

locks: convert delay times to u_short

int is just a waste of space for this purpose.
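
An illustrative sketch of the resulting layout (hedged; field names
approximate the in-tree struct):

    struct lock_delay_config {
            u_short base;   /* initial spin count */
            u_short max;    /* backoff cap */
    };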


# 3fd19ce7 15-Dec-2019 Mateusz Guzik <mjg@FreeBSD.org>

mtx: eliminate recursion support from thread lock

It is no longer used now that the schedlock changes have been merged.

Note the unlock routine temporarily still checks for recursion, since it
just uses the regular spin unlock.

This is a prelude towards a general clean up.


# 61a74c5c 15-Dec-2019 Jeff Roberson <jeff@FreeBSD.org>

schedlock 1/4

Eliminate recursion from most thread_lock consumers. Return from
sched_add() without the thread_lock held. This eliminates unnecessary
atomics and lock word loads as well as reducing the hold time for
scheduler locks. This will eventually allow for lockless remote adds.

Discussed with: kib
Reviewed by: jhb
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D22626


# a11bf9a4 23-Aug-2019 Xin LI <delphij@FreeBSD.org>

INVARIANTS: treat LA_LOCKED the same as LA_XLOCKED in mtx_assert.

The Linux lockdep API assumes LA_LOCKED semantic in lockdep_assert_held(),
meaning that either a shared lock or write lock is Ok. On the other hand,
the timeout code uses lc_assert() with LA_XLOCKED, and we need both to
work.

For mutexes, because they cannot be shared (this is unique among all lock
classes, and it is unlikely that we would add a new lock class anytime
soon), it is easier to simply extend mtx_assert to handle LA_LOCKED there,
even though the change itself can be viewed as a slight abstraction
violation.
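
A hedged sketch of the lock-class assert hook after the change (the real
function handles more assertion kinds):

    static void
    assert_mtx(const struct lock_object *lock, int what)
    {
            /* A mutex can only be held exclusively, so treat LA_LOCKED
             * as if LA_XLOCKED had been asserted. */
            if (what == LA_LOCKED)
                    what = LA_XLOCKED;
            mtx_assert((const struct mtx *)lock, what);
    }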

Reviewed by: mjg, cem, jhb
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D21362


# f183fb16 13-Nov-2018 Mateusz Guzik <mjg@FreeBSD.org>

locks: plug warnings about uninitialized variables

They only showed up after I redefined LOCKSTAT_ENABLED to 0.

doing_lockprof in mutex.c is a real (but harmless) bug. Should the
value be non-zero it will do checks for lock profiling which would
otherwise be skipped.

'state' in rwlock.c is a wart from the compiler; the value can't be
used if lock profiling is not enabled.

Sponsored by: The FreeBSD Foundation


# 4cbbb748 05-Nov-2018 John Baldwin <jhb@FreeBSD.org>

Add a KPI for the delay while spinning on a spin lock.

Replace a call to DELAY(1) with a new cpu_lock_delay() KPI. Currently
cpu_lock_delay() is defined to DELAY(1) on all platforms. However,
platforms with a DELAY() implementation that uses spin locks should
implement a custom cpu_lock_delay() that doesn't use locks.
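
The default mapping described above, as a one-line sketch:

    /* MD code whose DELAY() takes spin locks should override this with
     * a lock-free delay. */
    #define cpu_lock_delay()    DELAY(1)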

Reviewed by: kib
MFC after: 3 days


# d0a22279 02-Jun-2018 Mateusz Guzik <mjg@FreeBSD.org>

Remove an unused argument to turnstile_unpend.

PR: 228694
Submitted by: Julian Pszczołowski <julian.pszczolowski@gmail.com>


# 8c7549da 20-Mar-2018 Mark Johnston <markj@FreeBSD.org>

Drop KTR_CONTENTION.

It is incomplete, has not been adopted in the other locking primitives,
and we have other means of measuring lock contention (lock_profiling,
lockstat, KTR_LOCK). Drop it to slightly de-clutter the mutex code and
free up a precious KTR class index.

Reviewed by: jhb, mjg
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D14771


# 09bdec20 17-Mar-2018 Mateusz Guzik <mjg@FreeBSD.org>

locks: slightly depessimize lockstat

The slow path is always taken when lockstat is enabled. This induces
rdtsc (or other) calls to get the cycle count even when there was no
contention.

Still go to the slow path to not mess with the fast path, but avoid
the heavy lifting unless necessary.

This reduces sys and real time during -j 80 buildkernel:
before: 3651.84s user 1105.59s system 5394% cpu 1:28.18 total
after: 3685.99s user 975.74s system 5450% cpu 1:25.53 total
disabled: 3697.96s user 411.13s system 5261% cpu 1:18.10 total

So note this is still a significant hit.

LOCK_PROFILING results are not affected.


# 9f4e008d 04-Mar-2018 Mateusz Guzik <mjg@FreeBSD.org>

mtx: tidy up recursion handling in thread lock

Normally, after grabbing the lock, it has to be verified that we got the
right one to begin with. However, if we are recursing, it must not have
changed, thus the check can be avoided. In particular this avoids a lock
read in the non-recursing case when the lock turns out to have changed.

While here, avoid an irq trip if this happens.

Tested by: pho (previous version)


# 500ca73d 20-Feb-2018 Mateusz Guzik <mjg@FreeBSD.org>

mtx: add debug assertions to mtx_spin_wait_unlocked


# d2576988 18-Feb-2018 Mateusz Guzik <mjg@FreeBSD.org>

mtx: add mtx_spin_wait_unlocked

The primitive can be used to wait for the lock to be released. Intended
usage is for locks in structures which are about to be freed.

The benefit is avoiding the interrupt enable/disable trip plus the atomic
op to grab the lock, and a shorter wait if the lock is held (since there
is no worry that someone will contend on the lock, re-reads can be more
aggressive).
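
A hedged usage sketch (the object and its teardown are hypothetical):

    /* Teardown path: wait for any concurrent holder to let go without
     * taking the lock ourselves, then free the enclosing structure. */
    mtx_spin_wait_unlocked(&sc->sc_mtx);
    mtx_destroy(&sc->sc_mtx);
    free(sc, M_DEVBUF);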

Briefly discussed with: kib


# 310f24d7 12-Jan-2018 Mateusz Guzik <mjg@FreeBSD.org>

mtx: use fcmpset to cover setting MTX_CONTESTED


# 15140a8a 30-Dec-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: deduplicate indefinite wait check in spinlocks and thread lock


# 1f4d28c7 30-Dec-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: pre-read the lock value in thread_lock_flags_

Since this function is effectively a slow path, if we get here the lock
is most likely already taken, in which case it is cheaper not to blindly
attempt the atomic op.

While here, move the hwpmc probe out of the loop to match other
primitives.
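
An illustrative sketch of the pattern (hedged; names approximate the
kern_mutex.c internals):

    /* Probe the word with a plain read first: on this slow path the
     * lock is probably held, and a blind atomic op would only bounce
     * the cache line. */
    v = MTX_READ_VALUE(m);
    if (v == MTX_UNOWNED && _mtx_obtain_lock_fetch(m, &v, tid))
            return;         /* acquired without a wasted atomic */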


# 8a36da99 27-Nov-2017 Pedro F. Giffuni <pfg@FreeBSD.org>

sys/kern: adoption of SPDX licensing ID tags.

Mainly focus on files that use the BSD 2-Clause license; however, the
tool I was using misidentified many licenses, so this was mostly a manual
(error-prone) task.

The Software Package Data Exchange (SPDX) group provides a specification
to make it easier for automated tools to detect and summarize well known
opensource licenses. We are gradually adopting the specification, noting
that the tags are considered only advisory and do not, in any way,
supersede or replace the license texts.


# 2c50bafe 25-Nov-2017 Mateusz Guzik <mjg@FreeBSD.org>

Add the missing lockstat check for thread lock.


# b584eb2e 22-Nov-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: pass the found lock value to unlock slow path

This avoids an explicit read later.

While here whack the cheaply obtainable 'tid' argument.


# 013c0b49 22-Nov-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: remove the file + line argument from internal primitives when not used

The pair is of use only in debug or LOCKPROF kernels, but was passed (zeroed)
for many locks even in production kernels.

While here whack the tid argument from wlock hard and xlock hard.

There is no kbi change of any sort - "external" primitives still accept the
pair.


# 284194f1 17-Nov-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: fix compilation issues without SMP or KDTRACE_HOOKS


# 2ccee9cc 16-Nov-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: add missing parts of the diff in r325920

Fixes build breakage.


# 8448e020 16-Nov-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: unlock before traversing threads to wake up

This shortens the lock hold time while not affecting correctness.
All the woken-up threads end up competing and can lose the race against
a completely unrelated thread getting the lock anyway.


# be49509e 21-Oct-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: implement thread lock fastpath

MFC after: 1 week


# 62bf13cb 20-Oct-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: fix up UP build after r324778

Reported by: Michael Butler


# cbc2d7c2 19-Oct-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: stop testing SCHEDULER_STOPPED in kabi funcs for spin mutexes

There is nothing panic-breaking to do in the unlock case and the lock
case will fall back to the slow path, which does the check already.

MFC after: 1 week


# 0d74fe26 19-Oct-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: clean up locking spin mutexes

1) shorten the fast path by pushing the lockstat probe to the slow path
2) test for kernel panic only after it turns out we will have to spin,
in particular test only after we know we are not recursing

MFC after: 1 week


# e280ce46 13-Oct-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: fix up owner_mtx after r324609

Now that MTX_UNOWNED is 0, the test was always false.


# 2f1ddb89 26-Sep-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: drop the tid argument from _mtx_lock_sleep

tid must be that of curthread, and the target routine was already reading
it anyway, which is not a problem. Not passing it as a parameter allows
for slightly shorter code in callers.

MFC after: 1 week


# 6a569d35 08-Sep-2017 Mateusz Guzik <mjg@FreeBSD.org>

Annotate Giant with __exclusive_cache_line


# 574adb65 06-Sep-2017 Mateusz Guzik <mjg@FreeBSD.org>

Sprinkle __read_frequently on few obvious places.

Note that some of the annotated variables should probably change their types
to something smaller, preferably bit-sized.


# d0f68f91 30-Jul-2017 Mark Johnston <markj@FreeBSD.org>

Correct the predicates on which lockstat:::{thread,spin}-spin fire.

In particular, they should fire only if the lock was owned by another
thread when we first attempted to acquire that lock.

MFC after: 1 week


# 704cb42f 19-Jun-2017 Mark Johnston <markj@FreeBSD.org>

Fix the !TD_IS_IDLETHREAD(curthread) locking assertions.

Most of the lock slowpaths assert that the calling thread isn't an idle
thread. However, this may not be true if the system has panicked, and in
some cases the assertion appears before a SCHEDULER_STOPPED() check.

MFC after: 3 days
Sponsored by: Dell EMC Isilon


# c7a6a1b3 29-May-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: fix whitespace damage in _mtx_trylock_flags_

MFC after: 3 days


# df1c30f6 23-Feb-2017 Warner Losh <imp@FreeBSD.org>

KDTRACE_HOOKS isn't guaranteed to be defined. Change the check to test
whether it is defined rather than whether it is non-zero.

Sponsored by: Netflix, Inc


# dfaa7859 23-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: microoptimize lockstat handling in spin mutexes and thread lock

While here make the code compilable on kernels with LOCK_PROFILING but without
KDTRACE_HOOKS.


# 13d2ef0f 20-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: fix spin mutexes interaction with failed fcmpset

While doing so move recursion support down to the fallback routine.


# b247fd39 19-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: make trylock routines check for 'unowned' value

Since fcmpset can fail without lock contention e.g. on arm, it was possible
to get spurious failures when the caller was expecting the primitive to succeed.

Reported by: mmel


# 5c5df0d9 18-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: clean up trylock primitives

In particular this reduces accesses of the lock itself.


# a24c8eb8 17-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: plug the 'opts' argument when not used


# cbebea4e 17-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: get rid of file/line args from slow paths if they are unused

This denotes changes which went in by accident in r313877.

On most production kernels both said parameters are zeroed and nothing
reads them in either __mtx_lock_sleep or __mtx_unlock_sleep. Thus this
change stops passing them for internal consumers where this is the case.

Kernel modules use _flags variants which are not affected kbi-wise.


# 09f1319a 17-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: restrict r313875 to kernels without LOCK_PROFILING


# 7640beb9 17-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: microoptimize lockstat handling in __mtx_lock_sleep

This saves a function call and multiple branches after the lock is acquired.


# ffd5c94c 16-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: let primitives for modules unlock without always going to the slow path

It is only needed if LOCK_PROFILING is enabled. Otherwise the code always
has to check whether the lock is about to be released, which requires an
avoidable read if the option is not specified.


# afa39f7a 16-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: remove SCHEDULER_STOPPED checks from primitives for modules

They all fall back to the slow path if necessary, and the check is there.

This means a panicked kernel executing code from modules will still be
able to perform the actual lock/unlock, but this was already the case for
core code, which has said primitives inlined.


# 3b3cf014 09-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: tidy up unlock fallback paths

Update comments to note these functions are reachable if lockstat is
enabled.

Check if the lock has any bits set before attempting unlock, which saves
an unnecessary atomic operation.


# 8e5a3e9a 07-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: change backoff to exponential

The previous implementation used a random factor to spread readers and
reduce the chance of starvation. This visibly reduced the effectiveness
of the mechanism.

Switch to the more traditional exponential variant. Try to limit starvation
by imposing an upper limit of spins after which spinning is half of what
other threads get. Note the mechanism is turned off by default.
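
An illustrative sketch of the exponential variant (hedged; not the exact
source, and try_acquire() is a stand-in for the primitive's fcmpset):

    delay = base;
    while (!try_acquire(lock)) {
            for (i = 0; i < delay; i++)
                    cpu_spinwait();         /* pause, no bus traffic */
            if (delay < max)
                    delay <<= 1;            /* exponential growth, capped */
    }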

Reviewed by: kib (previous version)


# c1aaf63c 06-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

locks: fix recursion support after recent changes

When a relevant lockstat probe is enabled the fallback primitive is called with
a constant signifying a free lock. This works fine for typical cases but breaks
with recursion, since it checks if the passed value is that of the executing
thread.

Read the value if necessary.


# dc089651 05-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: fixup r313278, the assignment was supposed to go inside the loop


# cae4ab7f 05-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: fix up _mtx_obtain_lock_fetch usage in thread lock

Since _mtx_obtain_lock_fetch no longer sets the argument to MTX_UNOWNED,
callers have to do it on their own.


# 08da2677 05-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: move lockstat handling out of inline primitives

Lockstat requires checking if it is enabled and if so, calling a 6 argument
function. Further, determining whether to call it on unlock requires
pre-reading the lock value.

This is problematic in at least 3 ways:
- more branches in the hot path than necessary
- additional cacheline ping pong under contention
- bigger code

Instead, check first if lockstat handling is necessary and if so, just fall
back to regular locking routines. For this purpose a new macro is introduced
(LOCKSTAT_PROFILE_ENABLED).

LOCK_PROFILING uninlines all primitives. Fold the current inline lock
variant into _mtx_lock_flags to retain the support. With this change
the inline variants are not used when LOCK_PROFILING is defined and thus
can ignore its existence.

This results in:
text data bss dec hex filename
22259667 1303208 4994976 28557851 1b3c21b kernel.orig
21797315 1303208 4994976 28095499 1acb40b kernel.patched

i.e. about 3% reduction in text size.

A remaining action is to remove spurious arguments for internal kernel
consumers.
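
A hedged sketch of the resulting inline fast path (simplified):

    /* If the acquire probe is enabled, take the slow path even for an
     * uncontended lock and let it do the lockstat work. */
    if (__predict_false(LOCKSTAT_PROFILE_ENABLED(adaptive__acquire) ||
        !_mtx_obtain_lock_fetch(m, &v, tid)))
            _mtx_lock_sleep(m, v, opts, file, line);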


# 90836c32 04-Feb-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: switch to fcmpset

The found value is passed to locking routines in order to reduce cacheline
accesses.

mtx_unlock grows an explicit check for regular unlock. On ll/sc architectures
the routine can fail even if the lock could have been handled by the inline
primitive.
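
A sketch of the difference (illustrative):

    /* cmpset only reports success or failure; fcmpset also writes the
     * observed value back into 'v', so the slow path can start from it
     * without re-reading the cache line. */
    v = MTX_UNOWNED;
    if (!atomic_fcmpset_acq_ptr(&m->mtx_lock, &v, tid))
            _mtx_lock_sleep(m, v, opts, file, line); /* 'v' = found value */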

Discussed with: jhb
Tested by: pho (previous version)


# 29051116 27-Jan-2017 Mateusz Guzik <mjg@FreeBSD.org>

Sprinkle __read_mostly on backoff and lock profiling code.

MFC after: 1 month


# 391df78a 03-Jan-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: plug open-coded mtx_lock access missed in r311172


# 5e5ad162 03-Jan-2017 Mateusz Guzik <mjg@FreeBSD.org>

Reduce lock accesses in thread lock similarly to r311172.


# 2604eb9e 03-Jan-2017 Mateusz Guzik <mjg@FreeBSD.org>

mtx: reduce lock accesses

Instead of spuriously re-reading the lock value, read it once.

This change also has a side effect of fixing a performance bug:
on failed _mtx_obtain_lock, it was possible that the re-read would find
the lock unowned, yet the primitive would still make a trip through the
turnstile code.

This is diff reduction to a variant which uses atomic_fcmpset.

Discussed with: jhb (previous version)
Tested by: pho (previous version)


# 02315a67 09-Dec-2016 Mark Johnston <markj@FreeBSD.org>

Use a consistent snapshot of the lock state in owner_mtx().

MFC after: 2 weeks


# 310ab671 26-Sep-2016 Eric van Gyzen <vangyzen@FreeBSD.org>

Make no assertions about mutex state when the scheduler is stopped.

This changes the assert path to match the lock and unlock paths.

MFC after: 1 week
Sponsored by: Dell EMC


# a0d45f0f 09-Sep-2016 Mateusz Guzik <mjg@FreeBSD.org>

locks: add backoff for spin mutexes and thread lock

Reviewed by: jhb


# fa5000a4 01-Aug-2016 Mateusz Guzik <mjg@FreeBSD.org>

locks: fix compilation for KDTRACE_HOOKS && !ADAPTIVE_* case

Reported by: Michael Butler <imb protected-networks.net>


# 1ada9041 01-Aug-2016 Mateusz Guzik <mjg@FreeBSD.org>

Implement trivial backoff for locking primitives.

All current spinning loops retry an atomic op the first chance they get,
which leads to performance degradation under load.

One classic solution to the problem consists of delaying the test to an
extent. This implementation has a trivial linear increment and a random
factor for each attempt.

For simplicity, this first-cut implementation only modifies spinning
loops where the lock owner is running. Spin mutexes and thread lock were
not modified.

Current parameters are autotuned on boot based on mp_cpus.

Autotune factors are very conservative and are subject to change later.

Reviewed by: kib, jhb
Tested by: pho
MFC after: 1 week


# 61852185 30-Jul-2016 Mateusz Guzik <mjg@FreeBSD.org>

locks: change sleep_cnt and spin_cnt types to u_int

Both variables are uint64_t, but they only count spins or sleeps.
All reasonable values which we can get here comfortably fit in the 32-bit range.

Suggested by: kib
MFC after: 1 week


# 90b581f2 22-Jul-2016 Konstantin Belousov <kib@FreeBSD.org>

Implement mtx_trylock_spin(9).
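
A hedged usage sketch (the lock and the work are hypothetical):

    /* Opportunistic attempt: fails immediately instead of spinning if
     * the lock is held. */
    if (mtx_trylock_spin(&sc->intr_mtx)) {
            /* ... short critical section, interrupts disabled ... */
            mtx_unlock_spin(&sc->intr_mtx);
    }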

Discussed with: bde
Reviewed by: jhb
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D7192


# f61d6c5a 05-Jul-2016 Mark Johnston <markj@FreeBSD.org>

Ensure that spinlock sections are balanced even after a panic.

vpanic() uses spinlock_enter() to disable interrupts before dumping core.
However, when the scheduler is stopped and INVARIANTS is not configured,
thread_lock() does not acquire a spinlock section, while thread_unlock()
releases one. This can result in interrupts staying enabled while the
kernel dumps core, complicating post-mortem analysis of the crash.

Approved by: re (gjb)
MFC after: 1 week
Sponsored by: EMC / Isilon Storage Division


# fc4f686d 01-Jun-2016 Mateusz Guzik <mjg@FreeBSD.org>

Microoptimize locking primitives by avoiding unnecessary atomic ops.

Inline versions of the primitives do an atomic op and if it fails they
fall back to the actual primitives, which immediately retry the atomic op.

The obvious optimisation is to check if the lock is free and only then proceed
to do an atomic op.

Reviewed by: jhb, vangyzen


# be2dfd58 17-May-2016 Mark Johnston <markj@FreeBSD.org>

Remove the MUTEX_DEBUG kernel option.

It has no counterpart among the other lock primitives and has been a
no-op for years. Mutex consistency checks are generally done whenever
INVARIANTS is enabled.


# 5002e195 17-May-2016 Mark Johnston <markj@FreeBSD.org>

Guard the lockstat:::thread-spin probe with KDTRACE_HOOKS.

X-MFC-With: r300103


# 156fbc14 17-May-2016 Mark Johnston <markj@FreeBSD.org>

lockstat:::thread-spin should only fire after spinning for the lock.

MFC after: 1 week


# ce1c953e 01-Aug-2015 Mark Johnston <markj@FreeBSD.org>

Don't modify curthread->td_locks unless INVARIANTS is enabled.

This field is only used in a KASSERT that verifies that no locks are held
when returning to user mode. Moreover, the td_locks accounting is only
correct when LOCK_DEBUG > 0, which is implied by INVARIANTS.

Reviewed by: jhb
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D3205


# 32cd0147 19-Jul-2015 Mark Johnston <markj@FreeBSD.org>

Implement the lockstat provider using SDT(9) instead of the custom provider
in lockstat.ko. This means that lockstat probes now have typed arguments and
will utilize SDT probe hot-patching support when it arrives.

Reviewed by: gnn
Differential Revision: https://reviews.freebsd.org/D2993


# c6d48c87 17-Jul-2015 Mark Johnston <markj@FreeBSD.org>

Fix the !KDTRACE_HOOKS build.

X-MFC-With: r285664


# e2b25737 17-Jul-2015 Mark Johnston <markj@FreeBSD.org>

Pass the lock object to lockstat_nsecs() and return immediately if
LO_NOPROFILE is set. Some timecounter handlers acquire a spin mutex, and
we don't want to recurse if lockstat probes are enabled.

PR: 201642
Reviewed by: avg
MFC after: 3 days


# 076dd8eb 12-Jun-2015 Andriy Gapon <avg@FreeBSD.org>

several lockstat improvements

0. For spin events report time spent spinning, not a loop count.
While loop count is much easier and cheaper to obtain it is hard
to reason about the reported numbers, especially for adaptive locks
where both spinning and sleeping can happen.
So, it's better to compare apples and apples.

1. Teach lockstat about FreeBSD rw locks.
This is done in part by changing the corresponding probes
and in part by changing what probes lockstat should expect.

2. Teach lockstat that rw locks are adaptive and can spin on FreeBSD.

3. Report lock acquisition events for successful rw try-lock operations.

4. Teach lockstat about FreeBSD sx locks.
Reporting of events for those locks completely mirrors
rw locks.

5. Report spin and block events before acquisition event.
This is behavior documented for the upstream, so it makes sense to stick
to it. Note that because of FreeBSD adaptive lock implementations
both the spin and block events may be reported for the same acquisition
while the upstream reports only one of them.

Differential Revision: https://reviews.freebsd.org/D2727
Reviewed by: markj
MFC after: 17 days
Relnotes: yes
Sponsored by: ClusterHQ


# fd07ddcf 13-Dec-2014 Dmitry Chagin <dchagin@FreeBSD.org>

Add _NEW flag to mtx(9), sx(9), rmlock(9) and rwlock(9).
A _NEW flag can be passed to _init_flags() to avoid the check for double-init.
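
A hedged usage sketch:

    /* The memory may hold stale lock state (e.g. recycled storage);
     * MTX_NEW skips the double-init assertion. */
    mtx_init(&sc->sc_mtx, "sc lock", NULL, MTX_DEF | MTX_NEW);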

Differential Revision: https://reviews.freebsd.org/D1208
Reviewed by: jhb, wblock
MFC after: 1 month


# 6afb32fc 01-Dec-2014 Konstantin Belousov <kib@FreeBSD.org>

Disable recursion for the process spinlock.

Tested by: pho
Discussed with: jhb
Sponsored by: The FreeBSD Foundation
MFC after: 1 month


# 5c7bebf9 26-Nov-2014 Konstantin Belousov <kib@FreeBSD.org>

The process spin lock currently has the following distinct uses:

- Thread lifetime cycle, in particular, counting of the threads in
the process, and interlocking with the process mutex and thread lock.
The main reason for this is that turnstile locks come after thread
locks, so you e.g. cannot unlock a blockable mutex (think process
mutex) while owning a thread lock.

- Virtual and profiling itimers, since the timers activation is done
from the clock interrupt context. Replace the p_slock by p_itimmtx
and PROC_ITIMLOCK().

- Profiling code (profil(2)), for similar reason. Replace the p_slock
by p_profmtx and PROC_PROFLOCK().

- Resource usage accounting. The need for the spinlock there is subtle;
my understanding is that the spinlock blocks context switching for the
current thread, which prevents td_runtime and similar fields from
changing (updates are done at mi_switch()). Replace the p_slock
by p_statmtx and PROC_STATLOCK().

The split is done mostly for code clarity, and should not affect
scalability.

Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week


# 40e6bdaf 18-Nov-2014 Warner Losh <imp@FreeBSD.org>

opt_global.h is included automatically in the build. No need to
explicitly include it in these places.

Sponsored by: Netflix


# 2cba8dd3 04-Nov-2014 John Baldwin <jhb@FreeBSD.org>

Add a new thread state "spinning" to schedgraph and add tracepoints at the
start and stop of spinning waits in lock primitives.


# 54366c0b 25-Nov-2013 Attilio Rao <attilio@FreeBSD.org>

- For kernel compiled only with KDTRACE_HOOKS and not any lock debugging
option, unbreak the lock tracing release semantic by embedding
calls to LOCKSTAT_PROFILE_RELEASE_LOCK() directly in the inlined
version of the releasing functions for mutex, rwlock and sxlock.
Failing to do so skips the lockstat_probe_func invocation for
unlocking.
- Because part of the LOCKSTAT support is inlined in mutex operations,
for kernels compiled without lock debugging options potentially every
consumer must be compiled including opt_kdtrace.h.
Fix this by moving KDTRACE_HOOKS into opt_global.h and remove the
dependency by opt_kdtrace.h for all files, as now only KDTRACE_FRAMES
is linked there and it is only used as a compile-time stub [0].

[0] immediately shows a new bug, as the DTrace-derived debug support in
sfxge is broken and was never really tested. As it was not correctly
including opt_kdtrace.h before, it was never enabled and so was left
broken for a while. Fix this by using a protection stub,
leaving sfxge driver authors the responsibility for fixing it
appropriately [1].

Sponsored by: EMC / Isilon storage division
Discussed with: rstone
[0] Reported by: rstone
[1] Discussed with: philip


# 7faf4d90 20-Sep-2013 Davide Italiano <davide@FreeBSD.org>

Fix lc_lock/lc_unlock() support for rmlocks held in shared mode. With
the current lock classes KPI it was really difficult because there was no
way to pass an rm_priotracker object to the lock/unlock routines. In
order to accomplish the task, modify the aforementioned functions so that
they can return (or accept as an argument) a uintptr_t, which in the rm
case is used to hold a pointer to the struct rm_priotracker for the
current thread. As an added bonus, this fixes rm_sleep() in the rm
shared case, which can now communicate the priotracker structure between
lc_unlock()/lc_lock().

Suggested by: jhb
Reviewed by: jhb
Approved by: re (delphij)


# ac6b769b 09-Aug-2013 Attilio Rao <attilio@FreeBSD.org>

Give mutex(9) the ability to recurse on a per-instance basis.
Now the MTX_RECURSE flag can be passed to the mtx_*_flags() calls.
This helps in cases where we want to narrow the possibility of recursion
down to specific calls for some locks.
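
A hedged usage sketch (the lock is hypothetical):

    /* Permit recursion at this call site only; other acquisitions of
     * the same mutex still assert against recursion. */
    mtx_lock_flags(&sc->sc_mtx, MTX_RECURSE);
    /* ... */
    mtx_unlock(&sc->sc_mtx);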

Sponsored by: EMC / Isilon storage division
Reviewed by: jeff, alc
Tested by: pho


# de925dd3 30-Jul-2013 Scott Long <scottl@FreeBSD.org>

Fix r253823. Some WIP patches snuck in.

Submitted by: zont


# fc4a5f05 30-Jul-2013 Scott Long <scottl@FreeBSD.org>

Create a knob, kern.ipc.sfreadahead, that allows one to tune the amount of
readahead that sendfile() will do. Default remains the same.

Obtained from: Netflix
MFC after: 3 days


# b5fb43e5 25-Jun-2013 John Baldwin <jhb@FreeBSD.org>

A few mostly cosmetic nits to aid in debugging:
- Call lock_init() first before setting any lock_object fields in
lock init routines. This way if the machine panics due to a duplicate
init the lock's original state is preserved.
- Somewhat similarly, don't decrement td_locks and td_slocks until after
an unlock operation has completed successfully.


# cd2fe4e6 22-Dec-2012 Attilio Rao <attilio@FreeBSD.org>

Fixup r240424: on entering KDB backends, the thread hijacked to run the
interrupt context can still be the idle thread. At that point, absent the
panic condition, the idle thread may still try to acquire some locks to
carry out some operations.

Skip the idlethread check on block/sleep lock operations when KDB is
active.

Reported by: jh
Tested by: jh
MFC after: 1 week


# 7f44c618 31-Oct-2012 Attilio Rao <attilio@FreeBSD.org>

Give mtx(9) the ability to crunch different types of structures, with the
only constraint that they have a lock cookie named mtx_lock.
This name then becomes reserved for any struct that wants to use the
mtx(9) KPI, and other locking primitives cannot reuse it for their
members.

Namely, such structs are the current struct mtx and the new
struct mtx_padalign. The new structure defines an object with the same
layout as a struct mtx, but allocated in areas aligned to the cache line
size and as big as a cache line.

This is supposed to give higher performance for highly contended mutexes,
both spin and sleep (because of the adaptive spinning), where cache line
contention results in too much traffic on the system bus.

The struct mtx_padalign can be used in a completely transparent way
with the mtx(9) KPI.

At the moment, a possibility to MFC the patch should be carefully
evaluated because this patch breaks the low level KPI
(not its representation though).
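
A hedged usage sketch: because the KPI keys on the mtx_lock cookie, a
padded mutex is used exactly like a plain one:

    static struct mtx_padalign hot_mtx; /* cache-line sized and aligned */

    mtx_init(&hot_mtx, "hot path lock", NULL, MTX_DEF);
    mtx_lock(&hot_mtx);
    /* ... heavily contended critical section ... */
    mtx_unlock(&hot_mtx);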

Discussed with: jhb
Reviewed by: jeff, andre
Reviewed by: mdf (earlier version)
Tested by: jimharris


# 0a15e5d3 13-Sep-2012 Attilio Rao <attilio@FreeBSD.org>

Remove all the checks on curthread != NULL with the exception of some MD
trap checks (eg. printtrap()).

Generally this check is not needed anymore, as there is no legitimate
case where curthread == NULL after the pcpu 0 area has been properly
initialized.

Reviewed by: bde, jhb
MFC after: 1 week


# e3ae0dfe 12-Sep-2012 Attilio Rao <attilio@FreeBSD.org>

Improve check coverage about idle threads.

Idle threads are not allowed to acquire any lock but spinlocks.
Deny any attempt to do so by panicking at the locking operation
when INVARIANTS is on. Then, remove the check on blocking on a
turnstile.
The check in the sleepqueue code is left because idle threads are not
allowed to use tsleep() either, which could still happen.

Reviewed by: bde, jhb, kib
MFC after: 1 week


# f5f9340b 28-Mar-2012 Fabien Thomas <fabient@FreeBSD.org>

Add software PMC support.

New kernel events can be added at various locations for sampling or
counting. This will for example allow easy system profiling, whatever the
processor, with known tools like pmcstat(8).

Simultaneous usage of software PMCs and hardware PMCs is possible, for
example looking at lock acquisition failures or page faults while
sampling on instructions.

Sponsored by: NETASQ
MFC after: 1 month


# 35370593 11-Dec-2011 Andriy Gapon <avg@FreeBSD.org>

panic: add a switch and infrastructure for stopping other CPUs in SMP case

The historical behavior of letting other CPUs merrily go on remains the
default for the time being. The new behavior can be switched on via the
kern.stop_scheduler_on_panic tunable and sysctl.

Stopping of the CPUs has (at least) the following benefits:
- more of the system state at panic time is preserved intact
- threads and interrupts do not interfere with dumping of the system
state

Only one thread runs uninterrupted after panic if stop_scheduler_on_panic
is set. That thread might call code that is also used in normal context
and that code might use locks to prevent concurrent execution of certain
parts. Those locks might be held by the stopped threads and would never
be released. To work around this issue, it was decided that instead of
explicit checks for panic context, we would rather put those checks
inside the locking primitives.

This change has substantial portions written and re-written by attilio
and kib at various times. Other changes are heavily based on the ideas
and patches submitted by jhb and mdf. bde has provided many insights
into the details and history of the current code.

The new behavior may cause problems for systems that use a USB keyboard
for interfacing with system console. This is because of some unusual
locking patterns in the ukbd code which have to be used because on one
hand ukbd is below syscons, but on the other hand it has to interface
with other usb code that uses regular mutexes/Giant for its concurrency
protection. Dumping to USB-connected disks may also be affected.

PR: amd64/139614 (at least)
In cooperation with: attilio, jhb, kib, mdf
Discussed with: arch@, bde
Tested by: Eugene Grosbein <eugen@grosbein.net>,
gnn,
Steven Hartland <killing@multiplay.co.uk>,
glebius,
Andrew Boyer <aboyer@averesystems.com>
(various versions of the patch)
MFC after: 3 months (or never)


# ccdf2333 20-Nov-2011 Attilio Rao <attilio@FreeBSD.org>

Introduce macro stubs in the mutex implementation that will always be
defined and will allow consumers willing to provide options, file and
line to locking requests to not worry about options redefining the
interfaces.
This is typically useful when there is the need to build another
locking interface on top of the mutex one.

The introduced functions that consumers can use are:
- mtx_lock_flags_
- mtx_unlock_flags_
- mtx_lock_spin_flags_
- mtx_unlock_spin_flags_
- mtx_assert_
- thread_lock_flags_

Spare notes:
- Likely we can get rid of all the 'INVARIANTS' specification in the
ppbus code by using the same macro as done in this patch (but this is
left to the ppbus maintainer)
- all the other locking interfaces may require a similar cleanup, where
the most notable case is sx which will allow a further cleanup of
vm_map locking facilities
- The patch should be fully compatible with older branches, thus an MFC
is planned (in fact it uses all the underlying mechanisms already
present).

Comments review by: eadler, Ben Kaduk
Discussed with: kib, jhb
MFC after: 1 month


# d576deed 16-Nov-2011 Pawel Jakub Dawidek <pjd@FreeBSD.org>

Constify arguments for locking KPIs where possible.

This enables locking consumers to pass their own structures around as const and
be able to assert locks embedded into those structures.

Reviewed by: ed, kib, jhb


# 961135ea 09-Nov-2010 John Baldwin <jhb@FreeBSD.org>

- Remove <machine/mutex.h>. Most of the headers were empty, and the
contents of the ones that were not empty were stale and unused.
- Now that <machine/mutex.h> no longer exists, there is no need to allow it
to override various helper macros in <sys/mutex.h>.
- Rename various helper macros for low-level operations on mutexes to live
in the _mtx_* or __mtx_* namespaces. While here, change the names to more
closely match the real API functions they are backing.
- Drop support for including <sys/mutex.h> in assembly source files.

Suggested by: bde (1, 2)


# a7d5f7eb 19-Oct-2010 Jamie Gritton <jamie@FreeBSD.org>

A new jail(8) with a configuration file, to replace the work currently done
by /etc/rc.d/jail.


# d5a62857 18-May-2010 Attilio Rao <attilio@FreeBSD.org>

MFC r207922, r207925, r207929, r208052:
- Change the db_printf return value in order to catch up with printf
- Make witness_list_locks() and witness_display_spinlock() accept
callbacks for printf-like functions in order to queue the output on the
correct channel.


# 98332c8c 11-May-2010 Attilio Rao <attilio@FreeBSD.org>

Right now, WITNESS just blindly pipes all the output to the
(TOCONS | TOLOG) mask even when called from DDB points.
That breaks several outputs, the most notable being textdump output.
Fix this by having configurable callbacks passed to witness_list_locks()
and witness_display_spinlock() for printing out data.

Reported by: several broken textdump outputs
Tested by: Giovanni Trematerra
<giovanni dot trematerra at gmail dot com>
MFC after: 7 days
X-MFC: r207922


# 95e1c1fa 08-Feb-2010 Attilio Rao <attilio@FreeBSD.org>

MFC r202889, r202940:
- Fix a race in sched_switch() of sched_4bsd.
Block the td_lock when explicitly acquiring sched_lock in order to
prevent races with other td_lock contenders.
- Merge ULE's internal function thread_block_switch() into the global
thread_lock_block() and make the former's semantics the default for
thread_lock_block().
- Split out an invariant in order to have better checks.


# b0b9dee5 23-Jan-2010 Attilio Rao <attilio@FreeBSD.org>

- Fix a race in sched_switch() of sched_4bsd.
In the case of the thread being on a sleepqueue or a turnstile, the
sched_lock was acquired (without the aid of the td_lock interface) and
the td_lock was dropped. This was going to break locking rules for other
threads wanting to access the thread (via the td_lock interface) and
modify its flags (allowed as long as the container lock differed
from the one used in sched_switch).
In order to prevent this situation, the td_lock is blocked while
sched_lock is acquired there. [0]
- Merge ULE's internal function thread_block_switch() into the global
thread_lock_block() and make the former's semantics the default for
thread_lock_block(). This means that thread_lock_block() will not
disable interrupts when called (and consequently thread_unlock_block()
will not re-enable them when called). This should be done manually
when necessary.
Note, however, that ULE's thread_unblock_switch() is not reaped
because it does reflect a difference in semantics within ULE (the
td_lock may not necessarily still be blocked_lock when calling this).
While asymmetric, it does describe a remarkable difference in semantics
that is good to keep in mind.

[0] Reported by: Kohji Okuno
<okuno dot kohji at jp dot panasonic dot com>
Tested by: Giovanni Trematerra
<giovanni dot trematerra at gmail dot com>
MFC: 2 weeks


# 67784314 08-Sep-2009 Poul-Henning Kamp <phk@FreeBSD.org>

Revert previous commit and add myself to the list of people who should
know better than to commit with a cat in the area.


# b34421bf 08-Sep-2009 Poul-Henning Kamp <phk@FreeBSD.org>

Add necessary include.


# 7e2d0af9 17-Aug-2009 Attilio Rao <attilio@FreeBSD.org>

MFC r196334:

* Change the scope of the ASSERT_ATOMIC_LOAD() from a generic check to
a pointer-fetching specific operation check. Consequently, rename the
operation ASSERT_ATOMIC_LOAD_PTR().
* Fix the implementation of ASSERT_ATOMIC_LOAD_PTR() by directly checking
alignment on the word boundary, for all the given specific
architectures. That's a bit too strict for some common case, but it
assures safety.
* Add a comment explaining the scope of the macro
* Add a new stub in the lockmgr specific implementation

Tested by: marcel (initial version), marius
Reviewed by: rwatson, jhb (comment specific review)
Approved by: re (kib)


# 353998ac 17-Aug-2009 Attilio Rao <attilio@FreeBSD.org>

* Change the scope of the ASSERT_ATOMIC_LOAD() from a generic check to
a pointer-fetching specific operation check. Consequently, rename the
operation ASSERT_ATOMIC_LOAD_PTR().
* Fix the implementation of ASSERT_ATOMIC_LOAD_PTR() by directly checking
alignment on the word boundary, for all the given specific
architectures. That's a bit too strict for some common case, but it
assures safety.
* Add a comment explaining the scope of the macro
* Add a new stub in the lockmgr specific implementation

Tested by: marcel (initial version), marius
Reviewed by: rwatson, jhb (comment specific review)
Approved by: re (kib)


# 21845eba 14-Aug-2009 Bjoern A. Zeeb <bz@FreeBSD.org>

MFC r196226:

Add a new macro to test that a variable could be loaded atomically.
Check that the given variable is at most uintptr_t in size and that
it is aligned.

Note: ASSERT_ATOMIC_LOAD() uses ALIGN() to check for adequate
alignment -- however, the function of ALIGN() is to guarantee
alignment, and therefore may lead to stronger alignment
enforcement than necessary for types that are smaller than
sizeof(uintptr_t).

Add checks to mtx, rw and sx locks init functions to detect possible
breakage. This was used during debugging of the problem fixed with
r196118 where a pointer was on an un-aligned address in the dpcpu area.

In collaboration with: rwatson
Reviewed by: rwatson

Approved by: re (kib)


# 8d518523 14-Aug-2009 Bjoern A. Zeeb <bz@FreeBSD.org>

Add a new macro to test that a variable could be loaded atomically.
Check that the given variable is at most uintptr_t in size and that
it is aligned.

Note: ASSERT_ATOMIC_LOAD() uses ALIGN() to check for adequate
alignment -- however, the function of ALIGN() is to guarantee
alignment, and therefore may lead to stronger alignment
enforcement than necessary for types that are smaller than
sizeof(uintptr_t).

Add checks to mtx, rw and sx locks init functions to detect possible
breakage. This was used during debugging of the problem fixed with
r196118 where a pointer was on an un-aligned address in the dpcpu area.
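
A sketch of the macro consistent with the description above (hedged; the
in-tree definition may differ in detail):

    /* Fail at lock init time if the variable cannot be loaded
     * atomically: at most pointer-sized and suitably aligned. */
    #define ASSERT_ATOMIC_LOAD(var, msg)                        \
            KASSERT(sizeof(var) <= sizeof(uintptr_t) &&         \
                ALIGN(&(var)) == (uintptr_t)&(var), msg)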

In collaboration with: rwatson
Reviewed by: rwatson
Approved by: re (kib)


# a571ad41 29-May-2009 John Baldwin <jhb@FreeBSD.org>

Remove extra cpu_spinwait() invocations. This should really only be used
in tight spin loops, not in these edge cases where we restart a much
larger loop only a few times.

Reviewed by: attilio


# fa29f023 29-May-2009 John Baldwin <jhb@FreeBSD.org>

Tweak a few comments on adaptive spinning.


# a5aedd68 26-May-2009 Stacey Son <sson@FreeBSD.org>

Add the OpenSolaris dtrace lockstat provider. The lockstat provider
adds probes for mutexes, reader/writer and shared/exclusive locks to
gather contention statistics and other locking information for
dtrace scripts, the lockstat(1M) command and other potential
consumers.

Reviewed by: attilio jhb jb
Approved by: gnn (mentor)


# 583220dc 20-May-2009 John Baldwin <jhb@FreeBSD.org>

Remove an obsolete assertion. We always wake up all waiters when unlocking
a mutex and never set the lock cookie == MTX_CONTESTED.


# 1723a064 15-Mar-2009 Jeff Roberson <jeff@FreeBSD.org>

- Wrap lock profiling state variables in #ifdef LOCK_PROFILING blocks.


# d3df4af3 14-Mar-2009 Jeff Roberson <jeff@FreeBSD.org>

- When a mutex is destroyed while locked we need to inform lock profiling
that it has been released.


# d7f03759 19-Oct-2008 Ulf Lilleengen <lulf@FreeBSD.org>

- Import the HEAD csup code which is the basis for the cvsmode work.


# 41313430 10-Sep-2008 John Baldwin <jhb@FreeBSD.org>

Teach WITNESS about the interlocks used with lockmgr. This removes a bunch
of spurious witness warnings since lockmgr grew witness support. Before
this, every time you passed an interlock to a lockmgr lock WITNESS treated
it as a LOR.

Reviewed by: attilio


# bf9c6c31 10-Sep-2008 John Baldwin <jhb@FreeBSD.org>

Various whitespace fixes.


# ad69e26b 13-Feb-2008 John Baldwin <jhb@FreeBSD.org>

Add KASSERT()'s to catch attempts to recurse on spin mutexes that aren't
marked recursable, whether via mtx_lock_spin() or thread_lock().
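
A hedged sketch of the kind of assertion added:

    /* In the spin-lock slow path: recursion is only legal if the mutex
     * was initialized with MTX_RECURSE. */
    KASSERT((m->lock_object.lo_flags & LO_RECURSABLE) != 0,
        ("mtx_lock_spin: recursed on non-recursive mutex %s @ %s:%d",
        m->lock_object.lo_name, file, line));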

MFC after: 1 week


# 13c85a48 13-Feb-2008 John Baldwin <jhb@FreeBSD.org>

Add a couple of assertions and KTR logging to thread_lock_flags() to
match mtx_lock_spin_flags().

MFC after: 1 week


# eea4f254 15-Dec-2007 Jeff Roberson <jeff@FreeBSD.org>

- Re-implement lock profiling in such a way that it no longer breaks
the ABI when enabled. There is no longer an embedded lock_profile_object
in each lock. Instead a list of lock_profile_objects is kept per-thread
for each lock it may own. The cnt_hold statistic is now always 0 to
facilitate this.
- Support shared locking by tracking individual lock instances and
statistics in the per-thread per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a
per-cpu static lock_prof allocator. This removes the need for an array
of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a
critical_enter() is required and not a spinlock_enter() to modify the
per-cpu tables.
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers
since we track owners now.

In collaboration with: Kip Macy
Sponsored by: Nokia


# 573c6b82 27-Nov-2007 Attilio Rao <attilio@FreeBSD.org>

Make ADAPTIVE_GIANT the default in the kernel and remove the option.
Currently, Giant is not contended all that much, so it is ok to treat it
like any other mutex.

Please don't forget to update your own custom config kernel files.

Approved by: cognet, marcel (maintainers of arches where option is
not enabled at the moment)


# 49aead8a 26-Nov-2007 Attilio Rao <attilio@FreeBSD.org>

Simplify the adaptive spinning algorithm in rwlock and mutex:
currently, before spinning, the turnstile spinlock is acquired and the
waiters flag is set.
This is not strictly necessary, so just spin before acquiring the
spinlock and setting the flag.
This simplifies a lot of other functions too, as now we have the waiters
flag set only if there are actually waiters.
This should make the wakeup/sleep couplet faster under intensive mutex
workload.
This also fixes a bug in rw_try_upgrade() in the adaptive case, where
turnstile_lookup() would recurse on the ts_lock lock, which would never
really be released [1].

[1] Reported by: jeff with Nokia help
Tested by: pho, kris (earlier, bugged version of rwlock part)
Discussed with: jhb [2], jeff
MFC after: 1 week

[2] John probably had a similar patch for 6.x and/or 7.x mutexes


# f9721b43 18-Nov-2007 Attilio Rao <attilio@FreeBSD.org>

Expand the lock class with the "virtual" function lc_assert, which will
offer a unified way for all the lock primitives to express lock
assertions. Currently, lockmgrs and rmlocks don't have assertions, so
just panic in that case.
This will be a base for more callout improvements.

Ok'ed by: jhb, jeff


# 431f8906 13-Nov-2007 Julian Elischer <julian@FreeBSD.org>

Generally we are interested in which thread did something, as opposed
to which process. Since threads by default have the name of the process
unless overwritten with more useful information, just print the thread
name instead.


# 6ea38de8 18-Jul-2007 Jeff Roberson <jeff@FreeBSD.org>

- Remove the global definition of sched_lock in mutex.h to break
new code and third party modules which try to depend on it.
- Initialize sched_lock in sched_4bsd.c.
- Declare sched_lock in sparc64 pmap.c and assert that we're compiling
with SCHED_4BSD to prevent accidental crashes from running ULE. This
is the sole remaining file outside of the scheduler that uses the
global sched_lock.

Approved by: re


# 773890b9 18-Jul-2007 Jeff Roberson <jeff@FreeBSD.org>

- Add the proper lock profiling calls to _thread_lock().

Obtained from: kipmacy
Approved by: re


# 65d32cd8 09-Jun-2007 Matt Jacob <mjacob@FreeBSD.org>

Propagate volatile qualifier to make gcc4.2 happy.


# e6825691 08-Jun-2007 Attilio Rao <attilio@FreeBSD.org>

Remove the MUTEX_WAKE_ALL option and make it the default behaviour for our
mutexes.
Currently we already force MUTEX_WAKE_ALL because of some problems with
the !MUTEX_WAKE_ALL case (unavoidable priority inversion).


# 710eacdc 05-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

- Placing the 'volatile' on the right side of the * in the td_lock
declaration removes the need for __DEVOLATILE().

Pointed out by: tegge


# d301eb10 05-Jun-2007 Attilio Rao <attilio@FreeBSD.org>

Fix a problem with non-preemptive kernels coming from mis-merging of
existing code with the new thread_lock patch.
This also cleans up the unlock operation for mutexes a bit.

Approved by: jhb, jeff(mentor)


# b95b98b0 05-Jun-2007 Konstantin Belousov <kib@FreeBSD.org>

Restore non-SMP build.

Reviewed by: attilio


# 2502c107 04-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

Commit 3/14 of sched_lock decomposition.
- Add a per-turnstile spinlock to solve potential priority propagation
deadlocks that are possible with thread_lock().
- The turnstile lock order is defined as the exact opposite of the
lock order used with the sleep locks they represent. This allows us
to walk in reverse order in priority_propagate and this is the only
place we wish to multiply acquire turnstile locks.
- Use the turnstile_chain lock to protect assigning mutexes to turnstiles.
- Change the turnstile interface to pass back turnstile pointers to the
consumers. This allows us to reduce some locking and makes it easier
to cancel turnstile assignment while the turnstile chain lock is held.

Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)


# c91fcee7 18-May-2007 John Baldwin <jhb@FreeBSD.org>

Move lock_profile_object_{init,destroy}() into lock_{init,destroy}().


# c0bfd703 08-May-2007 John Baldwin <jhb@FreeBSD.org>

Teach 'show lock' to properly handle a destroyed mutex.


# 70fe8436 03-Apr-2007 Kip Macy <kmacy@FreeBSD.org>

move lock_profile calls out of the macros and into kern_mutex.c
add check for mtx_recurse == 0 when releasing sleep lock


# cd6e6e4e 22-Mar-2007 John Baldwin <jhb@FreeBSD.org>

- Simplify the #ifdef's for adaptive mutexes and rwlocks by conditionally
defining a macro earlier in the file.
- Add NO_ADAPTIVE_RWLOCKS option to disable adaptive spinning for rwlocks.


# aa89d8cd 21-Mar-2007 John Baldwin <jhb@FreeBSD.org>

Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes,
rwlocks, and sx locks to 'lock_object'.


# 6e21afd4 09-Mar-2007 John Baldwin <jhb@FreeBSD.org>

Add two new function pointers 'lc_lock' and 'lc_unlock' to lock classes.
These functions are intended to be used to drop a lock and then reacquire
it when doing an sleep such as msleep(9). Both functions accept a
'struct lock_object *' as their first parameter. The 'lc_unlock' function
returns an integer that is then passed as the second paramter to the
subsequent 'lc_lock' function. This can be used to communicate state.
For example, sx locks and rwlocks use this to indicate if the lock was
share/read locked vs exclusive/write locked.

Currently, spin mutexes and lockmgr locks do not provide working lc_lock
and lc_unlock functions.
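
A hedged sketch of the intended round trip, as sleep code might use it
('lo' stands in for the caller's lock_object pointer):

    int how;

    /* Drop the lock, remembering how it was held (e.g. shared vs
     * exclusive for rwlocks and sx locks). */
    how = LOCK_CLASS(lo)->lc_unlock(lo);
    /* ... sleep ... */
    LOCK_CLASS(lo)->lc_lock(lo, how);   /* reacquire in the same mode */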


# ae8dde30 09-Mar-2007 John Baldwin <jhb@FreeBSD.org>

Use C99-style struct member initialization for lock classes.


# c66d7606 02-Mar-2007 Kip Macy <kmacy@FreeBSD.org>

lock stats updates need to be protected by the lock


# a5bceb77 01-Mar-2007 Kip Macy <kmacy@FreeBSD.org>

Evidently I've overestimated gcc's ability to peek inside inline functions
and optimize away unused stack values. The 48 bytes that the lock_profile_object
adds to the stack evidently have a measurable performance impact on certain workloads.


# f183910b 26-Feb-2007 Kip Macy <kmacy@FreeBSD.org>

Further improvements to LOCK_PROFILING:
- Fix missing initialization in kern_rwlock.c causing bogus times to be collected
- Move updates to the lock hash to after the lock is released for spin mutexes,
sleep mutexes, and sx locks
- Add new kernel build option LOCK_PROFILE_FAST - only update lock profiling
statistics when an acquisition is contended. This reduces the overhead of
LOCK_PROFILING to a 20%-25% increase in system time, which for
"make -j8 kernel-toolchain" on a dual woodcrest is unmeasurable in terms
of wall-clock time. By contrast, enabling lock profiling without
LOCK_PROFILE_FAST yields a 5x-6x slowdown in wall-clock time.


# fe68a916 26-Feb-2007 Kip Macy <kmacy@FreeBSD.org>

general LOCK_PROFILING cleanup

- only collect timestamps when a lock is contested - this reduces the overhead
of collecting profiles from 20x to 5x

- remove unused function from subr_lock.c

- generalize cnt_hold and cnt_lock statistics to be kept for all locks

- NOTE: rwlock profiling generates invalid statistics (and most likely always
has); someone familiar with that code should review it


# 1364a812 15-Dec-2006 Kip Macy <kmacy@FreeBSD.org>

- Fix some gcc warnings in lock_profile.h
- add cnt_hold cnt_lock support for spin mutexes
- make sure contested is initialized to zero to only bump contested when appropriate
- move initialization function to kern_mutex.c to avoid cyclic dependency between
mutex.h and lock_profile.h


# 61bd5e21 12-Nov-2006 Kip Macy <kmacy@FreeBSD.org>

track lock class name in a way that doesn't break WITNESS


# 7c0435b9 10-Nov-2006 Kip Macy <kmacy@FreeBSD.org>

MUTEX_PROFILING has been generalized to LOCK_PROFILING. We now profile
wait (time waited to acquire) and hold times for *all* kernel locks. If
the architecture has a system synchronized TSC, the profiling code will
use that - thereby minimizing profiling overhead. Large chunks of profiling
code have been moved out of line, the overhead measured on the T1 for when
it is compiled in but not enabled is < 1%.

Approved by: scottl (standing in for mentor rwatson)
Reviewed by: des and jhb


# 0fa2168b 15-Aug-2006 John Baldwin <jhb@FreeBSD.org>

- When spinning on a spin lock, if the debugger is active or we are in a
panic, go ahead and do the longer DELAY(1) spin wait.
- If we panic due to spinning too long, print out a few more details
including the pointer to the mutex in question and the tid of the owning
thread.


# 764e4d54 27-Jul-2006 John Baldwin <jhb@FreeBSD.org>

Adjust td_locks for non-spin mutexes, rwlocks, and sx locks so that it is
a count of all non-spin locks, not just lockmgr locks. This can give us a
much cheaper way to see if we have any locks held (such as when returning
to userland via userret()) without requiring WITNESS.

MFC after: 1 week


# 186abbd7 27-Jul-2006 John Baldwin <jhb@FreeBSD.org>

Write a magic value into mtx_lock when destroying a mutex that will force
all other mtx_lock() operations to block. Previously, when the mutex was
destroyed, it would still have a valid value in mtx_lock(): either the
unowned cookie, which would allow a subsequent mtx_lock() to succeed, or a
pointer to the thread who destroyed the mutex if the mutex was locked when
it was destroyed.

MFC after: 3 days
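
The idea, sketched; here MTX_DESTROYED stands for a cookie that is neither
the unowned value nor a plausible thread pointer:

/* Poison the lock word on destroy so any later mtx_lock() blocks
 * (or spins) forever instead of silently succeeding. */
atomic_store_rel_ptr(&m->mtx_lock, MTX_DESTROYED);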


# 49b94bfc 03-Jun-2006 John Baldwin <jhb@FreeBSD.org>

Bah, fix fat finger in last. Invert the ~ on MTX_FLAGMASK as it's
non-intuitive for the ~ to be built into the mask. All the users now
explicitly ~ the mask. In addition, add MTX_UNOWNED to the mask even
though it technically isn't a flag. This should unbreak mtx_owner().

Quickly spotted by: kris
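
The resulting idiom, sketched with illustrative flag values:

#define MTX_RECURSED  0x01
#define MTX_CONTESTED 0x02
#define MTX_UNOWNED   0x04	/* not a flag, but masked out too */
#define MTX_FLAGMASK  (MTX_RECURSED | MTX_CONTESTED | MTX_UNOWNED)

/* Callers now apply the ~ explicitly: the owner is the lock word
 * with the low bits cleared. */
#define mtx_owner(m)  ((struct thread *)((m)->mtx_lock & ~MTX_FLAGMASK))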


# 315ce35f 03-Jun-2006 John Baldwin <jhb@FreeBSD.org>

Simplify mtx_owner() so it only reads m->mtx_lock once.


# f781b5a4 03-Jun-2006 John Baldwin <jhb@FreeBSD.org>

Style fix to be more like _mtx_lock_sleep(): use 'while (!foo) { ... }'
instead of 'for (;;) { if (foo) break; ... }'.
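
That is, schematically:

/* before */
for (;;) {
	if (_obtain_lock(m, tid))
		break;
	/* ... spin or block ... */
}

/* after */
while (!_obtain_lock(m, tid)) {
	/* ... spin or block ... */
}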


# c40da00c 16-May-2006 Poul-Henning Kamp <phk@FreeBSD.org>

Since DELAY() was moved, most <machine/clock.h> #includes have been
unnecessary.


# 73dbd3da 11-May-2006 John Baldwin <jhb@FreeBSD.org>

Remove various bits of conditional Alpha code and fixup a few comments.


# 76447e56 14-Apr-2006 John Baldwin <jhb@FreeBSD.org>

Mark the thread pointer used during an adaptive spin volatile so that the
compiler doesn't decide to cache td_state. Caching the state would cause
the spinning thread to not notice when the owning thread stopped executing
(if it was preempted for example) which could result in livelock.
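
A sketch of the spin loop in question; the volatile qualifier forces every
iteration to reread the owner's state from memory rather than from a
register:

volatile struct thread *owner;

owner = mtx_owner(m);
while (mtx_owner(m) == owner && TD_IS_RUNNING(owner))
	cpu_spinwait();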


# 7aa4f685 27-Jan-2006 John Baldwin <jhb@FreeBSD.org>

- Add support for having both a shared and exclusive queue of threads in
each turnstile. Also, allow for the owner thread pointer of a turnstile
to be NULL. This is needed for the upcoming reader/writer lock
implementation.
- Add a new ddb command 'show turnstile' that will look up the turnstile
associated with the given lock argument and display useful information
like the list of threads blocked on each queue, etc. If there isn't an
active turnstile for a lock at the specified address, then the function
will check whether the address itself is an active turnstile and
display info about it if so.
- Adjust the mutex code to handle the turnstile API changes.

Tested on: i386 (all), alpha, amd64, sparc64 (1 and 3)


# 67f7fe8c 24-Jan-2006 John Baldwin <jhb@FreeBSD.org>

Whitespace fix.


# 83a81bcb 17-Jan-2006 John Baldwin <jhb@FreeBSD.org>

Add a new file (kern/subr_lock.c) for holding code related to struct
lock_object objects:
- Add new lock_init() and lock_destroy() functions to setup and teardown
lock_object objects including KTR logging and registering with WITNESS.
- Move all the handling of LO_INITIALIZED out of witness and the various
lock init functions into lock_init() and lock_destroy().
- Remove the constants for static indices into the lock_classes[] array
and change the code outside of subr_lock.c to use LOCK_CLASS to compare
against a known lock class.
- Move the 'show lock' ddb function and lock_classes[] array out of
kern_mutex.c over to subr_lock.c.


# 550d1c93 17-Jan-2006 John Baldwin <jhb@FreeBSD.org>

Initialize thread0.td_contested in init_turnstiles() rather than
mutex_init() as it is used by the turnstile code and is not mutex-specific.


# 861a2308 07-Jan-2006 Scott Long <scottl@FreeBSD.org>

If destroying a spinlock, make sure that it is exited properly.

Submitted by: jhb
MFC after: 3 days


# 3b783acd 07-Jan-2006 John Baldwin <jhb@FreeBSD.org>

Revert an untested local change that crept in with the lo_class changes
and subsequently broke the build. This change is supposed to fix the
case where doing a mtx_destroy() off a spin mutex while you hold it fails.
If it had been tested I would just leave it in, but it hasn't been tested
yet, so it will have to wait until later.


# 75d6a87f 06-Jan-2006 Tai-hwa Liang <avatar@FreeBSD.org>

Trying to fix compilation bustage introduced in rev1.160 by converting
a missing lo_class to LO_CLASSINDEX().


# 3c6decc3 06-Jan-2006 John Baldwin <jhb@FreeBSD.org>

Trim another pointer from struct lock_object (and thus from struct mtx and
struct sx). Instead of storing a direct pointer to our lock_class
struct in lock_object, reserve 4 bits in the lo_flags field to serve as an
index into a global lock_classes array that contains pointers to the lock
classes. Only debugging code such as WITNESS or INVARIANTS checks and KTR
logging need to access the lock_class member, so this shouldn't add any
overhead to production kernels. It might add some slight overhead to
kernels using those debug options however.

As with the previous set of changes to lock_object, this is going to
completely obliterate the kernel ABI, so be sure to recompile all your
modules.


# d272fe53 13-Dec-2005 John Baldwin <jhb@FreeBSD.org>

Add a new 'show lock' command to ddb. If the argument has a valid lock
class, then it displays various information about the lock and calls a
new function pointer in lock_class (lc_ddb_show) to dump class-specific
information about the lock as well (such as the owner of a mutex or
xlock'ed sx lock). This is easier than staring at hex dumps of locks to
figure out who owns the lock, etc. Note that extending lock_class doesn't
affect the ABI for any kernel modules as the only code that deals with
lock_class structures directly is kern_mutex.c, kern_sx.c, and witness.

MFC after: 1 week


# 8c4b6380 18-Oct-2005 John Baldwin <jhb@FreeBSD.org>

Move the initialization of the devmtx into the mutex_init() function
called during early init before cninit().

Tested on: i386, alpha, sparc64
Reviewed by: phk, imp
Reported by: Divacky Roman xdivac02 at stud dot fit dot vutbr dot cz
MFC after: 1 week


# 83cece6f 02-Sep-2005 John Baldwin <jhb@FreeBSD.org>

- Add an assertion to panic if one tries to call mtx_trylock() on a spin
mutex.
- Don't panic if a spin lock is held too long inside _mtx_lock_spin() if
panicstr is set (meaning that we are already in a panic). Just keep
spinning forever instead.


# 1126349a 29-Jul-2005 Paul Saab <ps@FreeBSD.org>

Ignore mutex asserts when we're dumping as well. This allows me
to panic a system from DDB when INVARIANTS is compiled into the
kernel on a scsi system.


# 122eceef 15-Jul-2005 John Baldwin <jhb@FreeBSD.org>

Convert the atomic_ptr() operations over to operating on uintptr_t
variables rather than void * variables. This makes it easier and simpler
to get asm constraints and volatile keywords correct.

MFC after: 3 days
Tested on: i386, alpha, sparc64
Compiled on: ia64, powerpc, amd64
Kernel toolchain busted on: arm


# 4f201858 08-Apr-2005 Gleb Smirnoff <glebius@FreeBSD.org>

Add additional newline to debug.mutex.prof.stats header, so that
column names are printed exactly above the columns.


# c6a37e84 04-Apr-2005 John Baldwin <jhb@FreeBSD.org>

Divorce critical sections from spinlocks. Critical sections as denoted by
critical_enter() and critical_exit() are now solely a mechanism for
deferring kernel preemptions. They no longer have any effect on
interrupts. This means that standalone critical sections are now very
cheap as they are simply unlocked integer increments and decrements for the
common case.

Spin mutexes now use a separate KPI implemented in MD code: spinlock_enter()
and spinlock_exit(). This KPI is responsible for providing whatever MD
guarantees are needed to ensure that a thread holding a spin lock won't
be preempted by any other code that will try to lock the same lock. For
now all archs continue to block interrupts in a "spinlock section" as they
did formerly in all critical sections. Note that I've also taken this
opportunity to push a few things into MD code rather than MI. For example,
critical_fork_exit() no longer exists. Instead, MD code ensures that new
threads have the correct state when they are created. Also, we no longer
try to fixup the idlethreads for APs in MI code. Instead, each arch sets
the initial curthread and adjusts the state of the idle thread it borrows
in order to perform the initial context switch.

This change is largely a big NOP, but the cleaner separation it provides
will allow for more efficient alternative locking schemes in other parts
of the kernel (bare critical sections rather than per-CPU spin mutexes
for per-CPU data for example).

Reviewed by: grehan, cognet, arch@, others
Tested on: i386, alpha, sparc64, powerpc, arm, possibly more


# 33fb8a38 05-Jan-2005 John Baldwin <jhb@FreeBSD.org>

Rework the optimization for spinlocks on UP to be slightly less drastic and
turn it back on. Specifically, the actual changes are now less intrusive
in that the _get_spin_lock() and _rel_spin_lock() macros now have their
contents changed for UP vs SMP kernels which centralizes the changes.
Also, UP kernels do not use _mtx_lock_spin() and no longer include it. The
UP versions of the spin lock functions do not use any atomic operations,
but simple compares and stores which allow mtx_owned() to still work for
spin locks while removing the overhead of atomic operations.

Tested on: i386, alpha
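
A hedged sketch of the UP macros (the real versions also handle recursion
and assertions): with preemption disabled by the critical section, plain
stores suffice, and mtx_owned() keeps working because the owner field is
still maintained.

#define _get_spin_lock(m, tid) do {			\
	critical_enter();				\
	(m)->mtx_lock = (uintptr_t)(tid);		\
} while (0)

#define _rel_spin_lock(m) do {				\
	(m)->mtx_lock = MTX_UNOWNED;			\
	critical_exit();				\
} while (0)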


# 2ff0e645 12-Oct-2004 John Baldwin <jhb@FreeBSD.org>

Refine the turnstile and sleep queue interfaces just a bit:
- Add a new _lock() call to each API that locks the associated chain lock
for a lock_object pointer or wait channel. The _lookup() functions now
require that the chain lock be locked via _lock() when they are called.
- Change sleepq_add(), turnstile_wait() and turnstile_claim() to lookup
the associated queue structure internally via _lookup() rather than
accepting a pointer from the caller. For turnstiles, this means that
the actual lookup of the turnstile in the hash table is only done when
the thread actually blocks rather than being done on each loop iteration
in _mtx_lock_sleep(). For sleep queues, this means that sleepq_lookup()
is no longer used outside of the sleep queue code except to implement an
assertion in cv_destroy().
- Change sleepq_broadcast() and sleepq_signal() to require that the chain
lock is already held. For condition variables, this lets the
cv_broadcast() and cv_signal() functions lock the sleep queue chain lock
while testing the waiters count. This means that the waiters count
internal to condition variables is no longer protected by the interlock
mutex and cv_broadcast() and cv_signal() now no longer require that the
interlock be held when they are called. This lets consumers of condition
variables drop the lock before waking other threads which can result in
fewer context switches.

MFC after: 1 month


# b9a80aca 12-Oct-2004 Stephan Uphoff <ups@FreeBSD.org>

Force MUTEX_WAKE_ALL.
A race condition in single thread wakeup may break priority inheritance.

Tested by: pho
Reviewed by: jhb,julian
Approved by: sam (mentor)
MFC: ASAP


# 9923b511 02-Sep-2004 Scott Long <scottl@FreeBSD.org>

Turn PREEMPTION into a kernel option. Make sure that it's defined if
FULL_PREEMPTION is defined. Add a runtime warning to ULE if PREEMPTION is
enabled (code inspired by the PREEMPTION warning in kern_switch.c). This
is a possible MT5 candidate.


# 00096801 19-Aug-2004 John-Mark Gurney <jmg@FreeBSD.org>

add options MPROF_BUFFERS and MPROF_HASH_SIZE that adjust the sizes of
the mutex profiling buffers. Document them in the man page and in NOTES.
Ensure _HASH_SIZE is larger than _BUFFERS with a cpp error.


# bdcfcf5b 04-Aug-2004 John Baldwin <jhb@FreeBSD.org>

Cache the value of curthread in the _get_sleep_lock() and _get_spin_lock()
macros and pass the value to the associated _mtx_*() functions to avoid
more curthread dereferences in the function implementations. This provided
a very modest perf improvement in some benchmarks.

Suggested by: rwatson
Tested by: scottl


# 9f1b87f1 03-Aug-2004 Maxime Henrion <mux@FreeBSD.org>

Instead of calling ia32_pause() conditionally on __i386__ or __amd64__
being defined, define and use a new MD macro, cpu_spinwait(). It only
expands to something on i386 and amd64, so the compiled code should be
identical.

Name of the macro found by: jhb
Reviewed by: jhb
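
A sketch of the macro:

/* Only i386/amd64 have a useful instruction here ("pause"); the
 * macro compiles to nothing elsewhere, so callers need no #ifdefs. */
#if defined(__i386__) || defined(__amd64__)
#define cpu_spinwait()	ia32_pause()
#else
#define cpu_spinwait()
#endif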


# a9abdce4 27-Jul-2004 Robert Watson <rwatson@FreeBSD.org>

Add "options ADAPTIVE_GIANT" which causes Giant to also be treated in
an adaptive fashion when adaptive mutexes are enabled. The theory
behind non-adaptive Giant is that Giant will be held for long periods
of time, and therefore spinning waiting on it is wasteful. However,
in MySQL benchmarks which are relatively Giant-free, running Giant
adaptive makes an observable difference on SMP (5% transaction rate
improvement). As such, make adaptive behavior on Giant an option so
it can be more widely benchmarked.


# b09cb102 19-Jul-2004 Peter Wemm <peter@FreeBSD.org>

#ifdef __i386__ -> __i386__ || __amd64__


# ece2d989 18-Jul-2004 Pawel Jakub Dawidek <pjd@FreeBSD.org>

Now we have NO_ADAPTIVE_MUTEXES option, so use it here too.

Missed by: scottl


# 701f1408 18-Jul-2004 Scott Long <scottl@FreeBSD.org>

Enable ADAPTIVE_MUTEXES by default by changing the sense of the option to
NO_ADAPTIVE_MUTEXES. This option has been enabled by default on amd64 for
quite some time, and has been extensively tested on i386 and sparc64. It
shows measurable performance gains in many circumstances, and few negative
effects. It would be nice in the future if adaptive mutexes actually went
to sleep after a certain amount of spinning, but that will require quite a
bit more testing.


# 2d50560a 10-Jul-2004 Marcel Moolenaar <marcel@FreeBSD.org>

Update for the KDB framework:
o Make debugging code conditional upon KDB instead of DDB.
o Call kdb_enter() instead of Debugger().
o Call kdb_backtrace() instead of db_print_backtrace() or backtrace().

kern_mutex.c:
o Replace checks for db_active with checks for kdb_active and make
them unconditional.

kern_shutdown.c:
o s/DDB_UNATTENDED/KDB_UNATTENDED/g
o s/DDB_TRACE/KDB_TRACE/g
o Save the TID of the thread doing the kernel dump so the debugger
knows which thread to select as the current when debugging the
kernel core file.
o Clear kdb_active instead of db_active and do so unconditionally.
o Remove backtrace() implementation.

kern_synch.c:
o Call kdb_reenter() instead of db_error().


# 0c0b25ae 02-Jul-2004 John Baldwin <jhb@FreeBSD.org>

Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to
determine if a thread about to be added to a run queue should be
preempted to directly. If it is not safe to preempt or if the new
thread does not have a high enough priority, then the function returns
false and sched_add() adds the thread to the run queue. If the thread
should be preempted to but the current thread is in a nested critical
section, then the flag TDF_OWEPREEMPT is set and the thread is added
to the run queue. Otherwise, mi_switch() is called immediately and the
thread is never added to the run queue since it is switched to directly.
When exiting an outermost critical section, if TDF_OWEPREEMPT is set,
then clear it and call mi_switch() to perform the deferred preemption.
- Remove explicit preemption from ithread_schedule() as calling
setrunqueue() now does all the correct work. This also removes the
do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture
supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
chance to run if the architecture supports native preemption since
the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread
preemption, namely alpha, i386, and amd64.

This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.

Approved by: scottl (with his re@ hat)


# bf0acc27 02-Jul-2004 John Baldwin <jhb@FreeBSD.org>

- Change mi_switch() and sched_switch() to accept an optional thread to
switch to. If a non-NULL thread pointer is passed in, then the CPU will
switch to that thread directly rather than calling choosethread() to pick
a thread to switch to.
- Make sched_switch() aware of idle threads and know to do
TD_SET_CAN_RUN() instead of sticking them on the run queue rather than
requiring all callers of mi_switch() to know to do this if they can be
called from an idlethread.
- Move constants for arguments to mi_switch() and thread_single() out of
the middle of the function prototypes and up above into their own
section.


# 535eb309 06-Apr-2004 John Baldwin <jhb@FreeBSD.org>

Add a new kernel option MUTEX_WAKE_ALL that changes the mutex unlock code
to awaken all waiters when a contested mutex is released instead of just
the highest priority waiter. If the various threads are awakened in
sequence then each thread may acquire and release the lock in question
without contention resulting in fewer expensive unlock and lock
operations. This old behavior of waking just the highest priority waiter
is still used if this option is not specified. Making the algorithm
conditional on a kernel option will allow us to benchmark both cases later
and determine which one should be used by default.

Requested by: tanimura-san


# 94ffb20d 28-Jan-2004 Robert Watson <rwatson@FreeBSD.org>

Add a reset sysctl for mutex profiling: zeros all of the mutex
profiling buffers and hash table. This makes it a lot easier to
do multiple profiling runs without rebooting or performing
gratuitous arithmetic. Sysctl is named debug.mutex.prof.reset.

Reviewed by: jake


# 8d768e76 28-Jan-2004 John Baldwin <jhb@FreeBSD.org>

Rework witness_lock() to make it slightly more useful and flexible.
- witness_lock() is split into two pieces: witness_checkorder() and
witness_lock(). Witness_checkorder() determines if acquiring a specified
lock at the time it is called would result in a lock order violation. It
optionally adds a new lock order relationship as well. witness_lock()
updates witness's data structures to assume that a lock has been acquired
by sticking a new lock instance in the appropriate lock instance list.
- The mutex and sx lock functions now call checkorder() prior to trying to
acquire a lock and continue to call witness_lock() after the acquire is
completed. This will let witness catch a deadlock before it happens
rather than trying to do so after the threads have deadlocked (i.e. never
actually report it).
- A new function witness_defineorder() has been added that adds a lock
order between two locks at runtime without having to acquire the locks.
If the lock order cannot be added it will return an error. This function
is available to programmers via the WITNESS_DEFINEORDER() macro which
accepts either two mutexes or two sx locks as its arguments.
- A few simple wrapper macros were added to allow developers to call
witness_checkorder() anywhere as a way of enforcing locking assertions
in code that might acquire a certain lock in some situations. The
macros are: witness_check_{mutex,shared_sx,exclusive_sx} and take an
appropriate lock as the sole argument.
- The code to remove a lock instance from a lock list in witness_unlock()
was unnested by using a goto to vastly improve the readability of this
function.
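
The two-phase pattern in a lock acquire path, sketched with abridged
argument lists ('mtx_object' was the name of the embedded lock_object
member at the time):

/* 1. Before touching the lock: would this acquire deadlock? This
 *    can report the problem before the deadlock actually happens. */
witness_checkorder(&m->mtx_object, LOP_EXCLUSIVE, file, line);

/* 2. Acquire as usual. */
_get_sleep_lock(m, curthread, opts, file, line);

/* 3. After the acquire: record the new lock instance. */
witness_lock(&m->mtx_object, LOP_EXCLUSIVE, file, line);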


# 29bcc451 24-Jan-2004 Jeff Roberson <jeff@FreeBSD.org>

- Add a flags parameter to mi_switch. The value of flags may be SW_VOL or
SW_INVOL. Assert that one of these is set in mi_switch() and properly
adjust the rusage statistics. This is to simplify the large number of
users of this interface which were previously all required to adjust the
proper counter prior to calling mi_switch(). This also facilitates more
switch and locking optimizations.
- Change all callers of mi_switch() to pass the appropriate parameter and
remove direct references to the process statistics.


# 8dc10be8 24-Jan-2004 Robert Watson <rwatson@FreeBSD.org>

Add some basic support for measuring sleep mutex contention to the
mutex profiling code. As with existing mutex profiling, measurement
is done with respect to mtx_lock() instances in the code, as opposed
to specific mutexes. In particular, measure two things:

(1) Lock contention. How often did this mtx_lock() call get made and
have to sleep (or almost sleep) waiting for the lock. This helps
identify the "victims" of contention.

(2) Hold contention. How often, while the lock was held by a thread
as a result of this mtx_lock(), did another thread try to acquire
the same mutex. This helps identify the causes of contention.

I'm currently exploring adding measurement of "time waited for the
lock", but the current implementation has proven useful to me so far
so I figured I'd commit it so others could try it out. Note that this
increases the size of mutexes when MUTEX_PROFILING is enabled, so you
might find you need to further bump UMA_BOOT_PAGES. Fixes welcome.

The once over: des, others


# eac09796 05-Jan-2004 John Baldwin <jhb@FreeBSD.org>

- Allow mtx_trylock() to recurse on a recursive mutex. Attempts to recurse
on a non-recursive mutex will fail but will not trigger any assertions.
- Add an assertion to mtx_lock() that one never recurses on a non-recursive
mutex. This is mostly useful for the non-WITNESS case.

Requested by: deischen, julian, others (1)
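
The behavior after this change, as a usage sketch with a hypothetical
mutex 'm':

struct mtx m;

mtx_init(&m, "recursable example", NULL, MTX_DEF | MTX_RECURSE);
mtx_lock(&m);
if (mtx_trylock(&m)) {	/* succeeds because MTX_RECURSE is set */
	mtx_unlock(&m);	/* drop the recursive reference */
}
mtx_unlock(&m);

/* Without MTX_RECURSE, the same mtx_trylock() while already holding
 * the lock simply returns 0 rather than triggering an assertion. */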


# 961a7b24 11-Nov-2003 John Baldwin <jhb@FreeBSD.org>

Add an implementation of turnstiles and change the sleep mutex code to use
turnstiles to implement blocking instead of implementing a thread queue
directly. These turnstiles are somewhat similar to those used in Solaris 7
as described in Solaris Internals but are also different.

Turnstiles do not come out of a fixed-sized pool. Rather, each thread is
assigned a turnstile when it is created that it frees when it is destroyed.
When a thread blocks on a lock, it donates its turnstile to that lock to
serve as queue of blocked threads. The queue associated with a given lock
is found by a lookup in a simple hash table. The turnstile itself is
protected by a lock associated with its entry in the hash table. This
means that sched_lock is no longer needed to contest on a mutex. Instead,
sched_lock is only used when manipulating run queues or thread priorities.
Turnstiles also implement priority propagation inherently.

Currently turnstiles only support mutexes. Eventually, however, turnstiles
may grow two queues to support a non-sleepable reader/writer lock
implementation. For more details, see the comments in sys/turnstile.h and
kern/subr_turnstile.c.

The two primary advantages from the turnstile code include: 1) the size
of struct mutex shrinks by four pointers as it no longer stores the
thread queue linkages directly, and 2) less contention on sched_lock in
SMP systems including the ability for multiple CPUs to contend on different
locks simultaneously (not that this last detail is necessarily that much of
a big win). Note that 1) means that this commit is a kernel ABI breaker,
so don't mix old modules with a new kernel and vice versa.

Tested on: i386 SMP, sparc64 SMP, alpha SMP
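
The hash table lookup described above is, schematically (a sketch
patterned on the TC_* macros in kern/subr_turnstile.c; constants
illustrative):

/* Hash the lock address into a chain; each chain carries its own
 * spin lock protecting the turnstiles hanging off of it. */
#define TC_TABLESIZE	128	/* must be a power of two */
#define TC_MASK		(TC_TABLESIZE - 1)
#define TC_SHIFT	8
#define TC_HASH(lock)	(((uintptr_t)(lock) >> TC_SHIFT) & TC_MASK)
#define TC_LOOKUP(lock)	&turnstile_chains[TC_HASH(lock)]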


# 41109518 31-Jul-2003 John Baldwin <jhb@FreeBSD.org>

If a spin lock is held for too long and WITNESS is enabled, then call
witness_display_spinlock() to see if we can find out where the current
owner of the spin lock last acquired the lock.


# 47b722c1 30-Jul-2003 John Baldwin <jhb@FreeBSD.org>

When complaining about a sleeping thread owning a mutex, display the
thread's pid to make debugging easier for people who don't want to have to
use the intended tool for these panics (witness).

Indirectly prodded by: kris


# f7ee1590 02-Jul-2003 John Baldwin <jhb@FreeBSD.org>

- Add comments about the maintenance of the per-thread list of contested
locks held by each thread.
- Fix a bug in the original BSD/OS code where a contested lock was not
properly handed off from the old thread to the new thread when a
contested lock with more than one blocked thread was transferred from
one thread to another.
- Don't use an atomic operation to write the MTX_CONTESTED value to
mtx_lock in the aforementioned special case. The memory barriers and
exclusion provided by sched_lock are sufficient.

Spotted by: alc (2)


# 677b542e 10-Jun-2003 David E. O'Brien <obrien@FreeBSD.org>

Use __FBSDID().


# b82af320 31-May-2003 Poul-Henning Kamp <phk@FreeBSD.org>

Add "" around mutex name to make message less confusing.


# 27dad03c 17-Apr-2003 John Baldwin <jhb@FreeBSD.org>

Use TD_IS_RUNNING() instead of thread_running() in the adaptive mutex
code.


# 060563ec 10-Apr-2003 Julian Elischer <julian@FreeBSD.org>

Move the _oncpu entry from the KSE to the thread.
The entry in the KSE still exists but it's purpose will change a bit
when we add the ability to lock a KSE to a cpu.


# f949f795 23-Mar-2003 Tim J. Robbins <tjr@FreeBSD.org>

Remove unused mtx_lock_giant(), mtx_unlock_giant(), related globals
and sysctls.


# b4b138c2 18-Mar-2003 Poul-Henning Kamp <phk@FreeBSD.org>

Including <sys/stdint.h> is (almost?) universally only to be able to use
%j in printfs, so put a nested include in <sys/systm.h> where the printf
prototype lives and save everybody else the trouble.


# 75d468ee 11-Mar-2003 John Baldwin <jhb@FreeBSD.org>

Axe the useless MTX_SLEEPABLE flag. mutexes are not sleepable locks.
Nothing used this flag and WITNESS would have panic'd during mtx_init()
if anything had.


# 1106937d 04-Mar-2003 John Baldwin <jhb@FreeBSD.org>

Remove safety belt: it is now ok to do a mtx_trylock() on a mutex you
already own. The mtx_trylock() will fail however. Enhance the comment
at the top of the try lock function to explain this.

Requested by: jlemon and his evil netisr locking


# 5fa8dd90 04-Mar-2003 John Baldwin <jhb@FreeBSD.org>

Miscellaneous cleanups to _mtx_lock_sleep():
- Declare some local variables at the top of the function instead of in a
nested block.
- Use mtx_owned() instead of masking off bits from mtx_lock manually.
- Read the value of mtx_lock into 'v' as a separate line rather than inside
an if statement for clarity. This code is hairy enough as it is.


# 6b869595 04-Mar-2003 John Baldwin <jhb@FreeBSD.org>

Properly assert that mtx_trylock() is not called on a mutex we already
owned. Previously the KASSERT would only trigger if we successfully
acquired a lock that we already held. However, _obtain_lock() fails to
acquire locks that we already hold, so the KASSERT was never checked in
the case it was supposed to fail.


# 0bd5f797 25-Feb-2003 Mike Makonnen <mtm@FreeBSD.org>

Unbreak mutex profiling (at least for me).
o Always check for null when dereferencing the filename component.
o Implement a try-and-backoff method for allocating memory to
dump stats to avoid a spin-lock -> sleep-lock mutex lock order
panic with WITNESS.

Approved by: des, markm (mentor)
Not objected: jhb


# ecf031c9 21-Jan-2003 Dag-Erling Smørgrav <des@FreeBSD.org>

There's absolutely no need for a struct-within-a-struct, so move the
counters out of the inner struct and remove it.


# fa669ab7 25-Oct-2002 Poul-Henning Kamp <phk@FreeBSD.org>

Disable the kernacc() check in mtx_validate() until such time that kernacc
does not require Giant.

This means that we may miss panics on a class of mutex programming bugs,
but only if running with a Chernobyl setting of debug-flags.

Spotted by: Pete Carah <pete@ns.altadena.net>


# f2c1ea81 23-Oct-2002 Dag-Erling Smørgrav <des@FreeBSD.org>

Whitespace cleanup.


# d08926b1 22-Oct-2002 Robert Drehmel <robert@FreeBSD.org>

Change the `mutex_prof' structure to use three variables contained
in an anonymous structure as counters, instead of an array with
preprocessor-defined names for indices. Remove the associated XXX-
comment.


# 6d036900 21-Oct-2002 Dag-Erling Smørgrav <des@FreeBSD.org>

Reduce the overhead of the mutex statistics gathering code, try to produce
shorter lines in the report, and clean up some minor style issues.


# b43179fb 11-Oct-2002 Jeff Roberson <jeff@FreeBSD.org>

- Create a new scheduler api that is defined in sys/sched.h
- Begin moving scheduler specific functionality into sched_4bsd.c
- Replace direct manipulation of scheduler data with hooks provided by the
new api.
- Remove KSE specific state modifications and single runq assumptions from
kern_switch.c

Reviewed by: -arch


# 551cf4e1 02-Oct-2002 John Baldwin <jhb@FreeBSD.org>

Rename the mutex thread and process states to use a more generic 'LOCK'
name instead. (e.g., SLOCK instead of SMTX, TD_ON_LOCK() instead of
TD_ON_MUTEX()) Eventually a turnstile abstraction will be added that
will be shared with mutexes and other types of locks. SLOCK/TDI_LOCK will
be used internally by the turnstile code and will not be specific to
mutexes. Making the change now ensures that turnstiles can be dropped
in at a later date without affecting the ABI of userland applications.


# 27354830 29-Sep-2002 Julian Elischer <julian@FreeBSD.org>

uh, commit all of the patch


# e0817317 29-Sep-2002 Julian Elischer <julian@FreeBSD.org>

commit the version I actually tested..

Submitted by: davidxu


# 9eb1fdea 29-Sep-2002 Julian Elischer <julian@FreeBSD.org>

Implement basic KSE loaning. This stops a thread that is blocked in BOUND mode
from stopping another thread from completing a syscall, and this allows it to
release its resources etc. Probably more related commits to follow (at least
one I know of)

Initial concept by: julian, dillon
Submitted by: davidxu


# 71fad9fd 11-Sep-2002 Julian Elischer <julian@FreeBSD.org>

Completely redo thread states.

Reviewed by: davidxu@freebsd.org


# 0d975d63 03-Sep-2002 John Baldwin <jhb@FreeBSD.org>

Add some KASSERT()'s to ensure that we don't perform spin mutex ops on
sleep mutexes and vice versa. WITNESS normally should catch this but
not everyone uses WITNESS so this is a fallback to catch nasty but easy
to do bugs.
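
The kind of fallback assertion meant here, sketched with present-day
member names (at the time the embedded member was called mtx_object):

KASSERT(LOCK_CLASS(&m->lock_object) == &lock_class_mtx_sleep,
    ("mtx_lock() of spin mutex %s @ %s:%d",
    m->lock_object.lo_name, file, line));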


# 02bd1bcd 26-Aug-2002 Ian Dowse <iedowse@FreeBSD.org>

Add a new KTR type KTR_CONTENTION, and use it in the mutex code to
log the start and end of periods during which mtx_lock() is waiting
to acquire a sleep mutex. The log message includes the file and
line of both the waiter and the holder.

Reviewed by: jhb, jake


# ce39e722 27-Jul-2002 John Baldwin <jhb@FreeBSD.org>

Disable optimization of spinlocks on UP kernels w/o debugging for now
since it breaks mtx_owned() on spin mutexes when used outside of
mtx_assert(). Unfortunately we currently use it in the i386 MD code
and in the sio(4) driver.

Reported by: bde


# b61860ad 02-Jul-2002 Dag-Erling Smørgrav <des@FreeBSD.org>

Add mtx_ prefixes to the fields used for mutex profiling, and fix a bug
where the profiling code would report the release point instead of the
acquisition point.

Requested by: bde


# e602ba25 29-Jun-2002 Julian Elischer <julian@FreeBSD.org>

Part 1 of KSE-III

The ability to schedule multiple threads per process
(on one cpu) by making ALL system calls optionally asynchronous.
to come: ia64 and power-pc patches, patches for gdb, test program (in tools)

Reviewed by: Almost everyone who counts
(at various times, peter, jhb, matt, alfred, mini, bernd,
and a cast of thousands)

NOTE: this is still Beta code, and contains lots of debugging stuff.
expect slight instability in signals..


# 6a95e08f 04-Jun-2002 John Baldwin <jhb@FreeBSD.org>

Replace thread_runnable() with thread_running() as the latter is more
accurate.

Suggested by: julian


# 7fcca609 04-Jun-2002 John Baldwin <jhb@FreeBSD.org>

Optimize the adaptive mutex spin a bit. Use a simple while loop with
simple reads (and on IA32, a "pause" instruction for each iteration of the
loop) to spin until either the mutex owner field changes, or the lock owner
stops executing.

Suggested by: tanimura
Tested on: i386


# 5853d37d 04-Jun-2002 John Baldwin <jhb@FreeBSD.org>

Add a private thread_runnable() macro to make the code more readable and
make the KSE diff easier to maintain.


# db586c8b 22-May-2002 Dag-Erling Smørgrav <des@FreeBSD.org>

Make the counters uintmax_ts, and use %ju rather than %llu.


# 6b8c6989 22-May-2002 John Baldwin <jhb@FreeBSD.org>

Rename pause() to ia32_pause() so it doesn't conflict with the pause()
function defined in <unistd.h>. I didn't #ifdef _KERNEL it because the
mutex implementation in libpthread will probably need this.


# 0228ea4e 22-May-2002 John Baldwin <jhb@FreeBSD.org>

Rename cpu_pause() to pause(). Originally I was going to make this an
MI API with empty cpu_pause() functions on other arch's, but this
functionality is definitely unique to IA-32, so I decided to leave it
as i386-only and wrap it in #ifdef's. I should have dropped the cpu_
prefix when I made that decision.

Requested by: bde


# 703fc290 21-May-2002 John Baldwin <jhb@FreeBSD.org>

Add appropriate IA32 "pause" instructions to improve performance on
Pentium 4's and newer IA32 processors. The "pause" instruction has been
verified by Intel to be a NOP on all currently existing IA32 processors
prior to the Pentium 4.


# 0e54ddad 21-May-2002 John Baldwin <jhb@FreeBSD.org>

Fix an old cut 'n' paste bug inherited from BSD/OS: don't increment 'i'
twice once we are in the long wait stage of spinning on a spin mutex.


# e6302957 21-May-2002 John Baldwin <jhb@FreeBSD.org>

Whitespace fixup, properly indent the body of an else clause.


# 2498cf8c 21-May-2002 John Baldwin <jhb@FreeBSD.org>

Add code to make default mutexes adaptive if the ADAPTIVE_MUTEXES kernel
option is used (not on by default).

- In the case of trying to lock a mutex, if the MTX_CONTESTED flag is set,
then we can safely read the thread pointer from the mtx_lock member while
holding sched_lock. We then examine the thread to see if it is currently
executing on another CPU. If it is, then we keep looping instead of
blocking.
- In the case of trying to unlock a mutex, it is now possible for a mutex
to have MTX_CONTESTED set in mtx_lock but to not have any threads
actually blocked on it, so we need to handle that case. In that case,
we just release the lock as if MTX_CONTESTED was not set and return.
- We do not adaptively spin on Giant as Giant is held for long times and
it slows SMP systems down to a crawl (it was taking several minutes,
like 5-10 or so for my test alpha and sparc64 SMP boxes to boot up when
they adaptively spun on Giant).
- We only compile in the code to do this for SMP kernels, it doesn't make
sense for UP kernels.

Tested on: i386, alpha, sparc64


# e8fdcfb5 21-May-2002 John Baldwin <jhb@FreeBSD.org>

Optimize spin mutexes for UP kernels without debugging to just enter and
exit critical sections. We only contest on a spin mutex on an SMP kernel
running on an SMP machine.


# 0c88508a 04-Apr-2002 John Baldwin <jhb@FreeBSD.org>

Change mtx_init() to now take an extra argument. The third argument is
the generic lock type for use with witness. If this argument is NULL then
the lock name is used as the lock type. Add a macro for a lock type name
for network driver locks.
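
Usage after this change, as a sketch ('sc' is a hypothetical driver softc
and foo_mtx is likewise hypothetical; MTX_NETWORK_LOCK is the network
driver lock type macro this commit adds):

mtx_init(&sc->sc_mtx, device_get_nameunit(sc->sc_dev),
    MTX_NETWORK_LOCK, MTX_DEF);

/* A one-off lock can pass NULL; witness then uses the name itself
 * as the lock type. */
mtx_init(&foo_mtx, "foo", NULL, MTX_DEF);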


# e6330704 02-Apr-2002 Dag-Erling Smørgrav <des@FreeBSD.org>

Revert to open hashing. It makes the code simpler, and works fairly well
even when the number of records approaches the size of the hash table.
Besides, the previous implementation (using linear probing) was broken :)

Also, use the newly introduced MTX_SYSINIT.


# c53c013b 02-Apr-2002 John Baldwin <jhb@FreeBSD.org>

- Move the MI mutexes sched_lock and Giant from being declared in the
various machdep.c's to being declared in kern_mutex.c.
- Add a new function mutex_init() used to perform early initialization
needed for mutexes such as setting up thread0's contested lock list
and initializing MI mutexes. Change the various MD startup routines
to call this function instead of duplicating all the code themselves.

Tested on: alpha, i386


# 7feefcd6 02-Apr-2002 John Baldwin <jhb@FreeBSD.org>

Spelling police.


# c27b5699 02-Apr-2002 Andrew R. Reiter <arr@FreeBSD.org>

- Add MTX_SYSINIT and SX_SYSINIT as macro glue for allowing sx and mtx
locks to be able to setup a SYSINIT call. This helps in places where
a lock is needed to protect some data, but the data is not truly
associated with a subsystem that can properly initialize its lock.
The macros use the mtx_sysinit() and sx_sysinit() functions,
respectively, as the handler argument to SYSINIT().

Reviewed by: alfred, jhb, smp@
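
Typical use, sketched with a hypothetical 'foo_mtx'; the macro arranges
for mtx_init() to be called from a SYSINIT during boot:

static struct mtx foo_mtx;
MTX_SYSINIT(foo_mtx, &foo_mtx, "foo global lock", MTX_DEF);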


# b784ffe9 02-Apr-2002 Dag-Erling Smørgrav <des@FreeBSD.org>

Instead of get_cyclecount(9), use nanotime(9) to record acquisition and
release times. Measurements are made and stored in nanoseconds but
presented in microseconds, which should be sufficient for the locks for
which we actually want this (those that are held long and / or often).
Also, rename some variables and structure members to unit-agnostic names.


# 6c35e809 01-Apr-2002 Dag-Erling Smørgrav <des@FreeBSD.org>

Mutex profiling code, conditional on the MUTEX_PROFILING option. Adds the
following sysctl variables:

debug.mutex.prof.enable enable / disable profiling
debug.mutex.prof.acquisitions number of mutex acquisitions recorded
debug.mutex.prof.records number of acquisition points recorded
debug.mutex.prof.maxrecords max number of acquisition points
debug.mutex.prof.rejected number of rejections (due to full table)
debug.mutex.prof.hashsize hash size
debug.mutex.prof.collisions number of hash collisions
debug.mutex.prof.stats profiling statistics

The code records four numbers for each acquisition point (identified by
source file name and line number): longest time held, total time held,
number of non-recursive acquisitions, average time held. The measurements
are in clock cycles (as returned by get_cyclecount(9)); this may cause
measurements on some SMP systems to be unreliable. This can probably be
worked around by replacing get_cyclecount(9) by some incarnation of
nanotime(9).

This work was derived from initial patches by eivind.


# f22a4b62 27-Mar-2002 Jeff Roberson <jeff@FreeBSD.org>

Add a new mtx_init option "MTX_DUPOK" which allows duplicate acquires of locks
with this flag. Remove the dup_list and dup_ok code from subr_witness. Now
we just check for the flag instead of doing string compares.

Also, switch the process lock, process group lock, and uma per cpu locks over
to this interface. The original mechanism did not work well for uma because
per cpu lock names are unique to each zone.

Approved by: jhb
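
For example, a process lock initialization now looks roughly like this;
duplicates are permitted per-class by the flag rather than per-name in
witness's dup table:

mtx_init(&p->p_mtx, "process lock", NULL, MTX_DEF | MTX_DUPOK);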


# 4d77a549 19-Mar-2002 Alfred Perlstein <alfred@FreeBSD.org>

Remove __P.


# 114730b0 20-Feb-2002 Peter Wemm <peter@FreeBSD.org>

Tidy up some unused variables


# 735da6de 18-Feb-2002 Matthew Dillon <dillon@FreeBSD.org>

Add kern_giant_ucred to instrument Giant around ucred related operations
such as getgid(), setgid(), etc...


# 2c100766 11-Feb-2002 Julian Elischer <julian@FreeBSD.org>

In a threaded world, different priorities become properties of
different entities. Make it so.

Reviewed by: jhb@freebsd.org (john baldwin)


# 18fc2ba9 08-Feb-2002 John Baldwin <jhb@FreeBSD.org>

Use the mtx_owner() macro in one spot in _mtx_lock_sleep() to make the
code easier to read.


# bf07c922 15-Jan-2002 John Baldwin <jhb@FreeBSD.org>

Bump the limits for determining if we've held a spinlock too long as they
seem to be too short for the 500 MHz DS20 I'm testing on. The rather
arbitrary numbers are bogus anyway. We should probably have
variables for these limits that are calibrated in the MD startup code
somehow.


# c86b6ff5 05-Jan-2002 John Baldwin <jhb@FreeBSD.org>

Change the preemption code for software interrupt thread schedules and
mutex releases to not require flags for the cases when preemption is
not allowed:

The purpose of the MTX_NOSWITCH and SWI_NOSWITCH flags is to prevent
switching to a higher priority thread on mutex releease and swi schedule,
respectively when that switch is not safe. Now that the critical section
API maintains a per-thread nesting count, the kernel can easily check
whether or not it should switch without relying on flags from the
programmer. This fixes a few bugs in that all current callers of
swi_sched() used SWI_NOSWITCH, when in fact, only the ones called from
fast interrupt handlers and the swi_sched of softclock needed this flag.
Note that to ensure that swi_sched()'s in clock and fast interrupt
handlers do not switch, these handlers have to be explicitly wrapped
in critical_enter/exit pairs. Presently, just wrapping the handlers is
sufficient, but in the future with the fully preemptive kernel, the
interrupt must be EOI'd before critical_exit() is called. (critical_exit()
can switch due to a deferred preemption in a fully preemptive kernel.)

I've tested the changes to the interrupt code on i386 and alpha. I have
not tested ia64, but the interrupt code is almost identical to the alpha
code, so I expect it will work fine. PowerPC and ARM do not yet have
interrupt code in the tree so they shouldn't be broken. Sparc64 is
broken, but that's been ok'd by jake and tmm who will be fixing the
interrupt code for sparc64 shortly.

Reviewed by: peter
Tested on: i386, alpha
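
Schematically, a release path can now decide for itself whether a switch
is safe (a sketch; the field name and the argument-less mi_switch()
reflect the KPI of the time only loosely):

/* The per-thread nesting count replaces MTX_NOSWITCH/SWI_NOSWITCH:
 * inside a critical section it is never safe to switch. */
if (curthread->td_critnest == 0)
	mi_switch();	/* preempt to the higher-priority thread */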


# 7e1f6dfe 17-Dec-2001 John Baldwin <jhb@FreeBSD.org>

Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_
prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting
count and a per-thread critical section saved state set when entering
a critical section while at nesting level 0 and restored when exiting
to nesting level 0. This moves the saved state out of spin mutexes so
that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now use
cpu_critical_enter/exit. MI code such as device drivers and spin
mutexes use the MI wrappers. Note that since the MI wrappers store
the state in the current thread, they do not have any return values or
arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
assigned to curthread->td_savecrit during fork_exit().

Tested on: i386, alpha


# ba48b69a 15-Nov-2001 John Baldwin <jhb@FreeBSD.org>

Remove definition of witness and comment stating that this file implements
witness. Witness moved off to subr_witness.c a while ago.


# d23f5958 26-Oct-2001 Matthew Dillon <dillon@FreeBSD.org>

Add mtx_lock_giant() and mtx_unlock_giant() wrappers for sysctl management
of Giant during the Giant unwinding phase, and start work on instrumenting
Giant for the file and proc mutexes.

These wrappers allow developers to turn on and off Giant around various
subsystems. DEVELOPERS SHOULD NEVER TURN OFF GIANT AROUND A SUBSYSTEM JUST
BECAUSE THE SYSCTL EXISTS! General developers should only consider
turning on Giant for a subsystem whose default is off (to help track down
bugs). Only developers working on particular subsystems who know what
they are doing should consider turning off Giant.

These wrappers will greatly improve our ability to unwind Giant and test
the kernel on a (mostly) subsystem by subsystem basis. They allow Giant
unwinding developers (GUDs) to emplace appropriate subsystem and structural
mutexes in the main tree and then request that the larger community test
the work by turning off Giant around the subsystem(s), without the larger
community having to mess around with patches. These wrappers also allow
GUDs to boot into a (more likely to be) working system in the midst of
their unwinding work and to test that work under more controlled
circumstances.

There is a master sysctl, kern.giant.all, which defaults to 0 (off). If
turned on it overrides *ALL* other kern.giant sysctls and forces Giant to
be turned on for all wrapped subsystems. If turned off then Giant around
individual subsystems are controlled by various other kern.giant.XXX sysctls.

Code which overlaps multiple subsystems must have all related subsystem Giant
sysctls turned off in order to run without Giant.


# 7ada5876 19-Oct-2001 John Baldwin <jhb@FreeBSD.org>

The mtx_init() and sx_init() functions bzero'd locks before handing them
off to witness_init() making the check for double intializating a lock by
testing the LO_INITIALIZED flag moot. Workaround this by checking the
LO_INITIALIZED flag ourself before we bzero the lock structure.


# 21377ce0 25-Sep-2001 John Baldwin <jhb@FreeBSD.org>

Remove superfluous parens after de-macroizing.


# dde96c99 22-Sep-2001 John Baldwin <jhb@FreeBSD.org>

Since we no longer inline any debugging code in the mutex operations, move
all the debugging code into the function versions of the mutex operations
in kern_mutex.c. This reduced the __mtx_* macros to simply wrappers of
the _{get,rel}_lock_* macros, so the __mtx_* macros were also abolished in
favor of just calling the _{get,rel}_lock_* macros. The tangled hairy mass
of macros calling macros is at least a bit more sane now.


# a44f918b 19-Sep-2001 John Baldwin <jhb@FreeBSD.org>

Fix a bug in propagate priority: the kse group pointer wasn't being
updated in the loop so the new thread always seemed to have the same
priority as the original thread and no actual priorities were changed.


# b40ce416 12-Sep-2001 Julian Elischer <julian@FreeBSD.org>

KSE Milestone 2
Note ALL MODULES MUST BE RECOMPILED
make the kernel aware that there are smaller units of scheduling than the
process. (but only allow one thread per process at this time).
This is functionally equivalent to the previous -current except
that there is a thread associated with each process.

Sorry john! (your next MFC will be a doozy!)

Reviewed by: peter@freebsd.org, dillon@freebsd.org

X-MFC after: ha ha ha ha


# 76dcbd6f 24-Aug-2001 Bosko Milekic <bmilekic@FreeBSD.org>

Force a commit on kern_mutex.c to explain the reason for the last commit,
and while I'm at it also add a comment in mtx_validate() explaining the
purpose of the last change.

Basically, this fixes booting kernels compiled with MUTEX_DEBUG. What used
to happen is before we setidt from init386() [still using BTX idt], we
called mtx_init() on several mutex locks, notably Giant and some others.
This is a problem for MUTEX_DEBUG because it enables mtx_validate() which
calls kernacc(), some of which in turn requires Giant.
Fix by calling kernacc() from mtx_validate() only if (!cold).


# ab07087e 24-Aug-2001 Bosko Milekic <bmilekic@FreeBSD.org>

*** empty log message ***


# 5cb0fbe4 31-Jul-2001 John Baldwin <jhb@FreeBSD.org>

If we have already panic'd then don't bother enforcing mutex asserts as
things are pretty much shot already and all panic'ing does is hurt our
chances of getting a dump.

Inspired by: sheldonh


# c4f7a187 25-Jun-2001 John Baldwin <jhb@FreeBSD.org>

Count the context switch when blocking on a mutex as a voluntary context
switch. Count the context switch when preempting the current thread to let
a higher priority thread blocked on a mutex we just released run as an
involuntary context switch.

Reported by: bde


# 2d96f0b1 04-May-2001 John Baldwin <jhb@FreeBSD.org>

- Move state about lock objects out of struct lock_object and into a new
struct lock_instance that is stored in the per-process and per-CPU lock
lists. Previously, the lock lists just kept a pointer to each lock held.
That pointer is now replaced by a lock instance which contains a pointer
to the lock object, the file and line of the last acquisition of a lock,
and various flags about a lock including its recursion count.
- If we sleep while holding a sleepable lock, then mark that lock instance
as having slept and ignore any lock order violations that occur while
acquiring Giant when we wake up with slept locks. This is ok because of
Giant's special nature.
- Allow witness to differentiate between shared and exclusive locks and
unlocks of a lock. Witness will now detect the case when a lock is
acquired first in one mode and then in another. Mutexes are always
locked and unlocked exclusively. Witness will also now detect the case
where a process attempts to unlock a shared lock while holding an
exclusive lock and vice versa.
- Fix a bug in the lock list implementation where we used the wrong
constant to detect the case where a lock list entry was full.


# fb919e4d 01-May-2001 Mark Murray <markm@FreeBSD.org>

Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
other "system" header files.

Also help the deprecation of lockmgr.h by making it a sub-include of
sys/lock.h and removing sys/lockmgr.h from kernel .c files.

Sort sys/*.h includes where possible in affected files.

OK'ed by: bde (with reservations)


# 7141f2ad 16-Apr-2001 John Baldwin <jhb@FreeBSD.org>

Exit and re-enter the critical section while spinning for a spinlock so
that interrupts can come in while we are waiting for a lock.


# f0b60d75 13-Apr-2001 Mark Murray <markm@FreeBSD.org>

Handle a rare but fatal race triggered sometimes when SIGSTOP is
invoked.


# 19284646 28-Mar-2001 John Baldwin <jhb@FreeBSD.org>

Rework the witness code to work with sx locks as well as mutexes.
- Introduce lock classes and lock objects. Each lock class specifies a
name and set of flags (or properties) shared by all locks of a given
type. Currently there are three lock classes: spin mutexes, sleep
mutexes, and sx locks. A lock object specifies properties of an
additional lock along with a lock name and all of the extra stuff needed
to make witness work with a given lock. This abstract lock stuff is
defined in sys/lock.h. The lockmgr constants, types, and prototypes have
been moved to sys/lockmgr.h. For temporary backwards compatibility,
sys/lock.h includes sys/lockmgr.h.
- Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of spin
locks held. By making this per-cpu, we do not have to jump through
magic hoops to deal with sched_lock changing ownership during context
switches.
- Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with
proc->p_sleeplocks, which is a list of held sleep locks including sleep
mutexes and sx locks.
- Add helper macros for logging lock events via the KTR_LOCK KTR logging
level so that the log messages are consistent.
- Add some new flags that can be passed to mtx_init():
- MTX_NOWITNESS - specifies that this lock should be ignored by witness.
This is used for the mutex that blocks a sx lock for example.
- MTX_QUIET - this is not new, but you can pass this to mtx_init() now
and no events will be logged for this lock, so that one doesn't have
to change all the individual mtx_lock/unlock() operations.
- All lock objects maintain an initialized flag. Use this flag to export
a mtx_initialized() macro that can be safely called from drivers. Also,
we no longer walk the all_mtx list if MUTEX_DEBUG is defined as witness
performs the corresponding checks using the initialized flag.
- The lock order reversal messages have been improved to output slightly
more accurate file and line numbers.


# 6283b7d0 27-Mar-2001 John Baldwin <jhb@FreeBSD.org>

- Switch from using save/disable/restore_intr to using critical_enter/exit
and change the u_int mtx_saveintr member of struct mtx to a critical_t
mtx_savecrit.
- On the alpha we no longer need a custom _get_spin_lock() macro to avoid
an extra PAL call, so remove it.
- Partially fix using mutexes with WITNESS in modules. Change all the
_mtx_{un,}lock_{spin,}_flags() macros to accept explicit file and line
parameters and rename them to use a prefix of two underscores. Inside
of kern_mutex.c, generate wrapper functions for
_mtx_{un,}lock_{spin,}_flags() (only using a prefix of one underscore)
that are called from modules. The macros mtx_{un,}lock_{spin,}_flags()
are mapped to the __mtx_* macros inside of the kernel to inline the
usual case of mutex operations and map to the internal _mtx_* functions
in the module case so that modules will use WITNESS and KTR logging if
the kernel is compiled with support for it.


# 5db078a9 09-Mar-2001 John Baldwin <jhb@FreeBSD.org>

Fix mtx_legal2block. The only time that it is bad to block on a mutex is
if we hold a spin mutex, since we can trivially get into deadlocks if we
start switching out of processes that hold spinlocks. Checking to see if
interrupts were disabled was a sort of cheap way of doing this since most
of the time interrupts were only disabled when holding a spin lock. At
least on the i386. To fix this properly, use a per-process counter
p_spinlocks that counts the number of spin locks currently held, and
instead of checking to see if interrupts are disabled in the witness code,
check to see if we hold any spin locks. Since child processes always
start up with the sched lock magically held in fork_exit(), we initialize
p_spinlocks to 1 for child processes. Note that proc0 doesn't go through
fork_exit(), so it starts with no spin locks held.

Consulting from: cp


# 1b43703b 06-Mar-2001 John Baldwin <jhb@FreeBSD.org>

- Add an extra check in propagate_priority() for UP systems to ensure we
don't end up back at ourselves which would indicate deadlock.
- Add the proc lock to the witness dup_list as we may hold more than one
process lock at a time.
- Don't assert a mutex is owned in _mtx_unlock_sleep() as that is too late.
We do the checks in the macros instead.


# a96dcd84 28-Feb-2001 Julian Elischer <julian@FreeBSD.org>

Shuffle netgraph mutexes a bit and hold a reference on a node
from the function that is calling the destructor.


# 5b270b2a 27-Feb-2001 Jake Burkholder <jake@FreeBSD.org>

Sigh. Try to get priorities sorted out. Don't bother trying to
update native priority; it is difficult to get right and likely
to end up horribly wrong. Use an honestly wrong fixed value
that seems to work; PUSER for user threads, and the interrupt
priority for ithreads. Set it once when the process is created
and forget about it.

Suggested by: bde
Pointy hat: me


# be15bfc0 26-Feb-2001 Jake Burkholder <jake@FreeBSD.org>

Initialize native priority to PRI_MAX. It was usually 0 which made a
process's priority go through the roof when it released a (contested)
mutex. Only set the native priority in mtx_lock if it hasn't already
been set.

Reviewed by: jhb


# a10f4966 25-Feb-2001 Jake Burkholder <jake@FreeBSD.org>

Remove brackets around variables in a function that used to be
a macro.


# 74334661 24-Feb-2001 Julian Elischer <julian@FreeBSD.org>

Move netgraph spinlock order entries out of
the #ifdef SMP section. They need to be there for UP too.


# 1103f3b0 24-Feb-2001 John Baldwin <jhb@FreeBSD.org>

Grrr, s/INVARIANTS_SUPPORT/INVARIANT_SUPPORT/.


# 15ec816a 24-Feb-2001 John Baldwin <jhb@FreeBSD.org>

- Axe RETIP() as it was very i386 specific and unwieldy. Instead, use the
passed in filename and line number in the KTR tracepoint message.
- Even though it is #if 0'd code, change the code to detect that a process
is an interrupt thread to check p->p_ithd against NULL rather than
checking non-existent process flags from BSD/OS.
- Use '%p' to print pointers in KTR log messages instead of assuming
sizeof(int) == sizeof(void *).
- Don't set p_mtxname to NULL when releasing a mutex. It doesn't hurt
to leave it set (we don't clear w_mesg for example) and at least at
one time in the past, there used to be race conditions in the kernel
that would result in setting this to NULL causing the kernel to
dereference NULL.
- Make the _mtx_assert() function be compiled in if INVARIANT_SUPPORT is
defined rather than if INVARIANTS is defined so that a KLD compiled
with INVARIANTS that uses mtx_assert() can be used with a kernel that
just has INVARIANT_SUPPORT compiled in.
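
For example, a KLD built with INVARIANTS can then assert lock state when
loaded into a kernel built with only INVARIANT_SUPPORT (the mutex name
below is hypothetical):

    mtx_assert(&foo_mtx, MA_OWNED);         /* panics if foo_mtx not held */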


# 33338e73 24-Feb-2001 Julian Elischer <julian@FreeBSD.org>

Add knowledge of the netgraph spinlocks into the Witness code.
Well, at least I think that's how it's done.


# 25d209f2 21-Feb-2001 John Baldwin <jhb@FreeBSD.org>

- Use the NOCPU constant.
- Move the ithread spin locks before sched lock and clk in preparation for
future commits to the ithread code.


# 27863426 11-Feb-2001 Bosko Milekic <bmilekic@FreeBSD.org>

Change all instances of `CURPROC' and `CURTHD' to `curproc,' in order
to stay consistent.

Requested by: bde


# d5a08a60 11-Feb-2001 Jake Burkholder <jake@FreeBSD.org>

Implement a unified run queue and adjust priority levels accordingly.

- All processes go into the same array of queues, with different
scheduling classes using different portions of the array. This
allows user processes to have their priorities propagated up into
interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than
32. We used to have 4 separate arrays of 32 queues each, so this
may not be optimal. The new run queue code was written with this
in mind; changing the number of run queues only requires changing
constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter. This
is intended to be used to create per-cpu run queues. Implement
wrappers for compatibility with the old interface which pass in
the global run queue structure (see the sketch after this list).
- Group the priority level, user priority, native priority (before
propagation) and the scheduling class into a struct priority.
- Change any hard coded priority levels that I found to use
symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc.
This was used to detect when a process' priority had lowered and
it should yield. We now effectively yield on every interrupt.
- Activate propagate_priority(). It should now have the desired
effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the
idle loop. It interfered with propagate_priority() because
the idle process needed to do a non-blocking acquire of Giant
and then other processes would try to propagate their priority
onto it. The idle process should not do anything except idle.
vm_page_zero_idle() will return in the form of an idle priority
kernel thread which is woken up at appropriate times by the vm
system.
- Update struct kinfo_proc to the new priority interface. Deliberately
change its size by adjusting the spare fields: the layout has changed,
and without a size change userland processes that use it would parse
the data incorrectly. The size constraint should really be changed to
an arbitrary version number. Also add a debug.sizeof
sysctl node for struct kinfo_proc.
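
A minimal sketch of the compatibility wrapper mentioned above, assuming
the new interface is runq_add() and the global structure is named runq
(both names are assumptions):

    static struct runq runq;                /* the single global run queue */

    void
    setrunqueue(struct proc *p)
    {
            /* New code takes the queue explicitly; old callers do not. */
            runq_add(&runq, p);
    }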


# 5746a1d8 10-Feb-2001 Bosko Milekic <bmilekic@FreeBSD.org>

- Place back STR string declarations for lock/unlock strings used for KTR_LOCK
tracing in order to avoid duplication.
- Insert some tracepoints back into the mutex acq/rel code, thus ensuring
that we can trace all lock acq/rel's again.
- All CURPROC != NULL checks are MPASS()es (under MUTEX_DEBUG) because they
signify a serious mutex corruption.
- Change up some KASSERT()s to MPASS()es, and vice-versa, depending on the
type of problem we're debugging (INVARIANTS is used here to check that
the API is being used properly whereas MUTEX_DEBUG is used to ensure that
something general isn't happening that will have bad impact on mutex
locks).

Reminded by: jhb, jake, asmodai


# c75e5182 09-Feb-2001 John Baldwin <jhb@FreeBSD.org>

Unify the two sleep lock order lists to enforce the process lock ->
uidinfo lock locking order.


# e910ba59 09-Feb-2001 John Baldwin <jhb@FreeBSD.org>

- Change the 'witness_list' ddb command to 'show mutexes'. Note that this
will only display sleep mutexes held by the current process.
- Clean up some nits in the witness_display() function and add a ddb
command 'show witness' that dumps the hierarchy and order lists to the
console.
- Use queue(3) macros where appropriate.
- Resort the spin lock order list so that "com" is before "sched_lock".
Also, add appropriate #ifdef's around SMP and i386-specific mutexes.
- Add two new mutexes used to protect the ithread lists and tables to the
order list.

Requested by: bde (1)


# 9ed346ba 08-Feb-2001 Bosko Milekic <bmilekic@FreeBSD.org>

Change and clean the mutex lock interface.

mtx_enter(lock, type) becomes:

mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF and mtx_unlock_spin(lock) for MTX_SPIN.
We change the caller interface for the two different types of locks
because the semantics are entirely different for each case, and this
makes it explicitly clear and, at the same time, it rids us of the
extra `type' argument.

The enter->lock and exit->unlock change has been made with the idea
that we're "locking data" and not "entering locked code" in mind.

Further, remove all additional "flags" previously passed to the
lock acquire/release routines with the exception of two:

MTX_QUIET and MTX_NOSWITCH

The functionality of these flags is preserved and they can be passed
to the lock/unlock routines by calling the corresponding wrappers:

mtx_{lock, unlock}_flags(lock, flag(s)) and
mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
locks, respectively.
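
In practice the conversion looks like this (hypothetical locks; a sketch,
not code from the tree):

    mtx_enter(&foo_mtx, MTX_DEF);           /* old */
    mtx_lock(&foo_mtx);                     /* new */

    mtx_enter(&bar_mtx, MTX_SPIN);          /* old */
    mtx_lock_spin(&bar_mtx);                /* new */

    /* Flags, when needed, go through the explicit wrappers: */
    mtx_unlock_flags(&foo_mtx, MTX_NOSWITCH);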

Re-inline some lock acq/rel code; in the sleep lock case, we only
inline the _obtain_lock()s in order to ensure that the inlined code
fits into a cache line. In the spin lock case, we inline recursion and
actually only perform a function call if we need to spin. This change
has been made with the idea that we generally tend to avoid spin locks
and that also the spin locks that we do have and are heavily used
(i.e. sched_lock) do recurse, and therefore in an effort to reduce
function call overhead for some architectures (such as alpha), we
inline recursion for this case.

Create a new malloc type for the witness code and retire from using
the M_DEV type. The new type is called M_WITNESS and is only declared
if WITNESS is enabled.

Begin cleaning up some machdep/mutex.h code - specifically updated the
"optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN
and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently
need those.

Finally, caught up to the interface changes in all sys code.

Contributors: jake, jhb, jasone (in no particular order)


# d38b8dbf 27-Jan-2001 John Baldwin <jhb@FreeBSD.org>

Add a new ddb command 'witness_list' that lists the mutexes held by
curproc.
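
Hypothetical DDB session (the output format shown is an assumption):

    db> witness_list
    "Giant" (0xc02eb000) locked @ ../../kern/kern_synch.c:123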

Requested by: peter


# 1b367556 23-Jan-2001 Jason Evans <jasone@FreeBSD.org>

Convert all simplelocks to mutexes and remove the simplelock implementations.


# 8484de75 24-Jan-2001 John Baldwin <jhb@FreeBSD.org>

- Don't use a union and fun tricks to shave one extra pointer off of struct
mtx right now as it makes debugging harder. When we are in optimizing
mode, we can revisit this.
- Fix the KTR trace messages to use %p rather than 0x%p to avoid duplicate
0x's in KTR output.
- During witness_fixup, release Giant so that witness doesn't get confused.
Also, grab all_mtx while walking the list of mutexes.
- Remove w_sleep and w_recurse. Instead, perform checks on mutexes using
the mutex's mtx_flags field.
- Allow debug.witness_ddb and debug.witness_skipspin to be set from the
loader.
- Add Giant to the front of existing order_list entries to help ensure
Giant is always first.
- Add an order entry for the various proc locks. Note that this only
helps keep proc in order mostly as the allproc and proctree mutexes are
only obtained during a lockmgr operation on the specified mutex.


# 56771ca7 21-Jan-2001 Jason Evans <jasone@FreeBSD.org>

Print correct file name and line number in mtx_assert().

Noticed by: jake


# 0cde2e34 21-Jan-2001 Jason Evans <jasone@FreeBSD.org>

Move most of sys/mutex.h into kern/kern_mutex.c, thereby making the mutex
inline functions non-inlined. Hide parts of the mutex implementation that
should not be exposed.

Make sure that WITNESS code is not executed during boot until the mutexes
are fully initialized by SI_SUB_MUTEX (the original motivation for this
commit).

Submitted by: peter


# 527c2fd2 21-Jan-2001 Jason Evans <jasone@FreeBSD.org>

Make the order of the static initializer for all_mtx match the order of
fields in struct mtx.

Found by: jake


# d1c1b841 21-Jan-2001 Jason Evans <jasone@FreeBSD.org>

Remove MUTEX_DECLARE() and MTX_COLD. Instead, postpone full mutex
initialization until after malloc() is safe to call, then iterate through
all mutexes and complete their initialization.

This change is necessary in order to avoid some circular bootstrapping
dependencies.


# c1ef8aac 19-Jan-2001 Jake Burkholder <jake@FreeBSD.org>

- Make npx_intr INTR_MPSAFE and move acquiring Giant into the
function itself.
- Remove a hack to allow acquiring Giant from the npx asm trap
vector.


# 08812b39 18-Jan-2001 Bosko Milekic <bmilekic@FreeBSD.org>

Implement MTX_RECURSE flag for mtx_init().
All calls to mtx_init() for mutexes that recurse must now include
the MTX_RECURSE bit in the flag argument variable. This change is in
preparation for an upcoming (further) mutex API cleanup.
The witness code will call panic() if a lock is found to recurse but
the MTX_RECURSE bit was not set during the lock's initialization.

The old MTX_RECURSE "state" bit (in mtx_lock) has been renamed to
MTX_RECURSED, which is more appropriate given its meaning.
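
Initialization of a recursable lock now looks like this (hypothetical
driver softc lock; a sketch using the three-argument mtx_init() of this
era):

    mtx_init(&sc->sc_mtx, "foo driver", MTX_DEF | MTX_RECURSE);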

The following locks have been made "recursive," thus far:
eventhandler, Giant, callout, sched_lock, possibly some others declared
in the architecture-specific code, all of the network card driver locks
in pci/, as well as some other locks in dev/ stuff that I've found to
be recursive.

Reviewed by: jhb


# ef73ae4b 09-Jan-2001 Jake Burkholder <jake@FreeBSD.org>

Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables
other than curproc.
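
Illustrative accesses (witness_spin_check appears elsewhere in this log;
the surrounding usage is an assumption):

    int cpu = PCPU_GET(cpuid);              /* read a per-cpu variable */
    PCPU_SET(witness_spin_check, 0);        /* write one */
    /* PCPU_PTR(witness_spin_check) yields its address when needed. */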


# 562e4ffe 13-Dec-2000 John Baldwin <jhb@FreeBSD.org>

- Add a new flag MTX_QUIET that can be passed to the various mtx_*
functions. If this flag is set, then no KTR log messages are issued.
This is useful for blocking excessive logging, such as with the internal
mutex used by the witness code (see the sketch after this list).
- Use MTX_QUIET on all of the mtx_enter/exit operations on the internal
mutex used by the witness code.
- If we are in a panic, don't do witness checks in witness_enter(),
witness_exit(), and witness_try_enter(), just return.
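
A sketch using the mtx_enter()/mtx_exit() interface of this era (the
witness-internal lock name w_mtx is an assumption):

    mtx_enter(&w_mtx, MTX_SPIN | MTX_QUIET);    /* no KTR log entry */
    /* ... manipulate witness state ... */
    mtx_exit(&w_mtx, MTX_SPIN | MTX_QUIET);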


# 92cf772d 11-Dec-2000 Jake Burkholder <jake@FreeBSD.org>

- Add code to detect if a system call returns with locks other than Giant
held and panic if so (conditional on witness).
- Change witness_list to return the number of locks held so this is easier.
- Add kern/syscalls.c to the kernel build if witness is defined so that the
panic message can contain the name of the offending system call.
- Add assertions that Giant and sched_lock are not held when returning from
a system call, which were missing for alpha and ia64.


# 428b4b55 11-Dec-2000 John Baldwin <jhb@FreeBSD.org>

Oops, the witness mutex is a spin lock, so use MTX_SPIN in the call to
mtx_init(). Since the witness code ignores its internal mutex, this
doesn't result in any functional change.


# 7cc0979f 08-Dec-2000 David Malone <dwmalone@FreeBSD.org>

Convert more malloc+bzero to malloc+M_ZERO.
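
The pattern being converted (hypothetical allocation; M_TEMP is chosen
only for illustration):

    /* Before: */
    p = malloc(sizeof(*p), M_TEMP, M_WAITOK);
    bzero(p, sizeof(*p));

    /* After: */
    p = malloc(sizeof(*p), M_TEMP, M_WAITOK | M_ZERO);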

Submitted by: josh@zipperup.org
Submitted by: Robert Drehmel <robd@gmx.net>


# 6936206e 30-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Split the WITNESS and MUTEX_DEBUG options apart so that WITNESS does not
depend on MUTEX_DEBUG. The MUTEX_DEBUG option turns on extra assertions
and checks to verify that mutexes themselves are implemented properly.
The WITNESS option uses extra checks and diagnostics to verify that other
code is using mutexes properly.


# 1bd0eefb 29-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Fix up priority propagation:
- Use a better test for determining when a process is running.
- Convert some checks to assertions.
- Remove unnecessary tests.
- Save the priority before acquiring a mutex rather than in msleep(9).


# 86327ad8 29-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Set p_mtxname when blocking on a mutex and clear it when waking up.


# f404050e 29-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Use an atomic operation with an appropriate memory barrier when releasing
a contested sleep mutex in the case that at least two processes are blocked
on the contested mutex.
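
The idea, as a hedged sketch (not the actual code): with two or more
waiters the lock word must keep MTX_CONTESTED set, and the store needs
release semantics so that writes to the protected data are visible
before ownership changes:

    /* 'new_owner' is a placeholder; the real hand-off logic differs. */
    atomic_store_rel_ptr(&m->mtx_lock,
        (uintptr_t)new_owner | MTX_CONTESTED);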


# 8f838cb5 29-Nov-2000 John Baldwin <jhb@FreeBSD.org>

The sched_lock mutex goes after the sio mutex in the locking order since
a software interrupt can be scheduled in the sio interrupt handler while
the sio mutex is held.


# bbc7a98a 29-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Save the line number and filename of the last mtx_enter operation for
spin locks. We already do this for sleep locks.


# 0931dcef 26-Nov-2000 Alfred Perlstein <alfred@FreeBSD.org>

Move the #define of _KERN_MUTEX_C_ so that it's before any system headers
are included. System headers can include sys/mutex.h and then certain
macros do not get defined.
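
The fix is purely one of include ordering; sketch:

    /* kern_mutex.c */
    #define _KERN_MUTEX_C_          /* must come before any header... */
    #include <sys/param.h>
    #include <sys/mutex.h>          /* ...since headers may pull this in */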

Reviewed by: jake


# a5d5c61c 26-Nov-2000 Jake Burkholder <jake@FreeBSD.org>

Add uidinfo hash and uidinfo struct to the witness order list.


# fa2fbc3d 18-Nov-2000 Jake Burkholder <jake@FreeBSD.org>

- Protect the callout wheel with a separate spin mutex, callout_lock.
- Use the mutex in hardclock to ensure no races between it and
softclock.
- Make softclock be INTR_MPSAFE and provide a flag,
CALLOUT_MPSAFE, which specifies that a callout handler does not
need Giant. There is still no way to set this flag when
registering a callout.

Reviewed by: -smp@, jlemon


# 7da6f977 17-Nov-2000 Jake Burkholder <jake@FreeBSD.org>

- Split the run queue and sleep queue linkage, so that a process
may block on a mutex while on the sleep queue without corrupting
it.
- Move dropping of Giant to after the acquire of sched_lock.

Tested by: John Hay <jhay@icomtek.csir.co.za>
jhb


# 20cdcc5b 15-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Don't release and acquire Giant in mi_switch(). Instead, release and
acquire Giant as needed in functions that call mi_switch(). The releases
need to be done outside of the sched_lock to avoid potential deadlocks
from trying to acquire Giant while interrupts are disabled.

Submitted by: witness


# 9c36c934 15-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Include the right headers to get the DDB #define and the db_active variable.


# 59f857e4 15-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Declare the 'witness_spin_check' properly as a per-CPU variable in the
non-SMP case.


# ecbd8e37 15-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Don't perform witness checks in witness_enter() during a panic.


# 0fe4e534 10-Nov-2000 John Baldwin <jhb@FreeBSD.org>

Minor whitespace nit in a comment.


# a5a96a19 26-Oct-2000 John Baldwin <jhb@FreeBSD.org>

- Use MUTEX_DECLARE() and MTX_COLD for the WITNESS code's internal mutex so
it can function before malloc(9) is up and running (see the sketch after
this list).
- Add two new options WITNESS_DDB and WITNESS_SKIPSPIN. If WITNESS_SKIPSPIN
is enabled, then spin mutexes are ignored by the WITNESS code. If
WITNESS_DDB is turned on and DDB is compiled into the kernel, then the
kernel will drop into DDB when either a lock hierarchy violation occurs
or mutexes are held when going to sleep.
- Add some new sysctls:
debug.witness_ddb is a read-write sysctl that corresponds to WITNESS_DDB.
The kernel option merely changes the default value to on at boot.
debug.witness_skipspin is a read-only sysctl that one can use to determine
if the kernel was compiled with WITNESS_SKIPSPIN.
- Wipe out the BSD/OS-specific lock order lists. We get to build our own
lists now as we add mutexes to the kernel.
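
A sketch of what the first item describes (the exact MUTEX_DECLARE()
signature and the flags used are assumptions):

    MUTEX_DECLARE(static, w_mtx);   /* statically allocated debug state */

    /* Later, once mtx_init() can be called safely: */
    mtx_init(&w_mtx, "witness lock", MTX_COLD | MTX_DEF);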


# 31271627 24-Oct-2000 John Baldwin <jhb@FreeBSD.org>

Quiet some warnings.


# b67a3e6e 20-Oct-2000 John Baldwin <jhb@FreeBSD.org>

Propagate the 'const'ness of mutex descriptions to the witness code to
quiet warnings.


# 78f0da03 20-Oct-2000 John Baldwin <jhb@FreeBSD.org>

Actually enable the witness code if the WITNESS kernel option is enabled.


# f5271ebc 20-Oct-2000 John Baldwin <jhb@FreeBSD.org>

Doh. Fix a 64-bit-ism by using uintptr_t for a temporary lock variable
instead of int.


# 36412d79 20-Oct-2000 John Baldwin <jhb@FreeBSD.org>

- Make the mutex code almost completely machine independent. This greatly
reduces the maintenance load for the mutex code. The only MD portions
of the mutex code are in machine/mutex.h now, which include the assembly
macros for handling mutexes as well as optionally overriding the mutex
micro-operations. For example, we use optimized micro-ops on the x86
platform #ifndef I386_CPU.
- Change the behavior of the SMP_DEBUG kernel option. In the new code,
mtx_assert() only depends on INVARIANTS, allowing other kernel developers
to have working mutex assertions without having to include all of the
mutex debugging code. The SMP_DEBUG kernel option has been renamed to
MUTEX_DEBUG and now just controls extra mutex debugging code.
- Abolish the ugly mtx_f hack. Instead, we dynamically allocate
separate mtx_debug structures on the fly in mtx_init, except for mutexes
that are initialized very early in the boot process. These mutexes
are declared using a special MUTEX_DECLARE() macro, and use a new
flag MTX_COLD when calling mtx_init. This is still somewhat hackish,
but it is less evil than the mtx_f filler struct, and the mtx struct is
now the same size with and without mutex debugging code.
- Add some micro-micro-operation macros for doing the actual atomic
operations on the mutex mtx_lock field to make it easier for other archs
to override/optimize mutex ops if needed. These new tiny ops also clean
up the code in some places by replacing long atomic operation function
calls that spanned 2-3 lines with a short 1-line macro call.
- Don't call mi_switch() from mtx_enter_hard() when we block while trying
to obtain a sleep mutex. Calling mi_switch() would bogusly release
Giant before switching to the next process. Instead, inline most of the
code from mi_switch() in the mtx_enter_hard() function. Note that when
we finally kill Giant we can back this out and go back to calling
mi_switch().


# 606f8eb2 14-Sep-2000 John Baldwin <jhb@FreeBSD.org>

Remove the mtx_t, witness_t, and witness_blessed_t types. Instead, just
use struct mtx, struct witness, and struct witness_blessed.

Requested by: bde


# 5340642a 09-Sep-2000 Jason Evans <jasone@FreeBSD.org>

Style cleanups. No functional changes.


# 46bf3fe5 09-Sep-2000 Jason Evans <jasone@FreeBSD.org>

Add file and line arguments to WITNESS_ENTER() and WITNESS_EXIT(), since
__FILE__ and __LINE__ don't get expanded usefully in inline functions.

Add const to all witness*() arguments that are filenames.


# 12473b76 08-Sep-2000 Jason Evans <jasone@FreeBSD.org>

Rename mtx_enter(), mtx_try_enter(), and mtx_exit() and wrap them with cpp
macros that expand to pass filename and line number information. This is
necessary since we're using inline functions instead of macros now.

Add const to the filename pointers passed throughout the mtx and witness
code.


# 0384fff8 06-Sep-2000 Jason Evans <jasone@FreeBSD.org>

Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*(). See mutex(9). (Note: The
alpha port is still in transition and currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
preempted (i386 only).

Partially contributed by: BSDi (BSD/OS)
Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh