#
272461 |
|
02-Oct-2014 |
gjb |
Copy stable/10@r272459 to releng/10.1 as part of the 10.1-RELEASE process.
Approved by: re (implicit) Sponsored by: The FreeBSD Foundation |
#
256281 |
|
10-Oct-2013 |
gjb |
Copy head (r256279) to stable/10 as part of the 10.0-RELEASE cycle.
Approved by: re (implicit) Sponsored by: The FreeBSD Foundation
|
#
247787 |
|
04-Mar-2013 |
davide |
MFcalloutng: Introduce sbt variants of msleep(), msleep_spin(), pause(), tsleep() in the KPI, allowing the timeout to be specified in 'sbintime_t' rather than ticks.
Sponsored by: Google Summer of Code 2012, iXsystems inc. Tested by: flo, marius, ian, markj, Fabian Keil
|
#
227788 |
|
21-Nov-2011 |
attilio |
Introduce the same mutex-wise fix in r227758 for sx locks.
The functions that offer file and line specifications are:
- sx_assert_
- sx_downgrade_
- sx_slock_
- sx_slock_sig_
- sx_sunlock_
- sx_try_slock_
- sx_try_xlock_
- sx_try_upgrade_
- sx_unlock_
- sx_xlock_
- sx_xlock_sig_
- sx_xunlock_
Now vm_map locking is fully converted and can avoid knowing specifics about locking procedures. Reviewed by: kib MFC after: 1 month
|
#
227588 |
|
16-Nov-2011 |
pjd |
Constify arguments for locking KPIs where possible.
This enables locking consumers to pass their own structures around as const and be able to assert locks embedded into those structures.
Reviewed by: ed, kib, jhb
|
#
219819 |
|
21-Mar-2011 |
jeff |
- Merge changes to the base system to support OFED. These include a wider arg2 for sysctl, updates to vlan code, IFT_INFINIBAND, and other miscellaneous small features.
|
#
197643 |
|
30-Sep-2009 |
attilio |
When releasing a read/shared lock we need to use a write memory barrier in order to avoid reordering of CPU instructions on architectures that do not have strongly ordered writes.
Diagnosed by: fabio Reviewed by: jhb Tested by: Giovanni Trematerra <giovanni dot trematerra at gmail dot com>
|
#
194578 |
|
21-Jun-2009 |
rdivacky |
In non-debugging mode make this define expand to (void)0 instead of nothing. This helps to catch bugs like the one below with clang.

	if (cond);	<--- note the trailing ;
		something();
Approved by: ed (mentor) Discussed on: current@
|
#
193011 |
|
28-May-2009 |
attilio |
Reverse the logic for the ADAPTIVE_SX option and enable it by default. Introduce the reverse option, NO_ADAPTIVE_SX, for this operation. The SX_ADAPTIVESPIN flag passed to sx_init_flags(9) is suppressed and a new flag offering the reversed logic, SX_NOADAPTIVE, is added.
Additionally, implement adaptive spinning for sx locks held in shared mode. The spinning limit can be tuned through sysctls until the code reaches release, after which they should probably be dropped.
This change has been made necessary by recent benchmarks where it improves concurrency of workloads in the presence of high contention (i.e. ZFS).
KPI breakage is documented by __FreeBSD_version bumping, manpage and UPDATING updates.
Requested by: jeff, kmacy Reviewed by: jeff Tested by: pho
|
#
192853 |
|
26-May-2009 |
sson |
Add the OpenSolaris dtrace lockstat provider. The lockstat provider adds probes for mutexes, reader/writer and shared/exclusive locks to gather contention statistics and other locking information for dtrace scripts, the lockstat(1M) command and other potential consumers.
Reviewed by: attilio jhb jb Approved by: gnn (mentor)
|
#
181682 |
|
13-Aug-2008 |
ed |
Fix compilation of arm's AVILA.
Compilation of the AVILA kernel failed because of two reasons:
- It needed curthread, which is defined through <sys/pcpu.h>.
- It still referred to the softc's sc_mtx field, which was replaced by sc_lock three weeks ago.
To solve the first problem, I decided to include <sys/pcpu.h> in <sys/sx.h>, which also seems to be done by <sys/mutex.h> and <sys/rwlock.h>. Those header files also require curthread.
Approved by: jhb
|
#
174629 |
|
15-Dec-2007 |
jeff |
- Re-implement lock profiling in such a way that it no longer breaks the ABI when enabled. There is no longer an embedded lock_profile_object in each lock. Instead a list of lock_profile_objects is kept per-thread for each lock it may own. The cnt_hold statistic is now always 0 to facilitate this.
- Support shared locking by tracking individual lock instances and statistics in the per-thread per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a per-cpu static lock_prof allocator. This removes the need for an array of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a critical_enter() is required and not a spinlock_enter() to modify the per-cpu tables.
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers since we track owners now.
In collaboration with: Kip Macy Sponsored by: Nokia
|
#
171277 |
|
06-Jul-2007 |
attilio |
Fix some problems with lock_profiling in sx locks:
- Adjust the lock_profiling stub semantics in the hard functions in order to be more accurate and trustworthy.
- Disable shared paths for lock_profiling. Currently, lock_profiling has a subtle race which makes results coming from shared paths not completely trustworthy. A macro stub (LOCK_PROFILING_SHARED) can be used for re-enabling these paths, but it is currently intended for development use only.
- Use homogeneous names for automatic variables in hard functions regarding lock_profiling.
- Style fixes.
- Add a CTASSERT for some flag building.
Discussed with: kmacy, kris Approved by: jeff (mentor) Approved by: re
|
#
170149 |
|
31-May-2007 |
attilio |
Add functions sx_xlock_sig() and sx_slock_sig(). These functions are intended to perform the same actions as sx_xlock() and sx_slock(), with the difference that they perform an interruptible sleep, so that the sleep can be interrupted by external events. In order to support these new features some code restructuring is needed, but the external API won't be affected at all.
Note: use a "void" cast for "int"-returning functions in order to keep tools like Coverity Prevent from whining.
Requested by: rwatson Tested by: rwatson Reviewed by: jhb Approved by: jeff (mentor)
|
#
170115 |
|
29-May-2007 |
attilio |
style(9) fixes for sx locks.
Approved by: jeff (mentor)
|
#
169780 |
|
19-May-2007 |
jhb |
Rename the macros for assertion flags passed to sx_assert() from SX_* to SA_* to match mutexes and rwlocks. The old flags still exist for backwards compatibility.
Requested by: attilio
|
#
169776 |
|
19-May-2007 |
jhb |
Expose sx_xholder() as a public macro. It returns a pointer to the thread that holds the current exclusive lock, or NULL if no thread holds an exclusive lock.
Requested by: pjd
|
#
169769 |
|
19-May-2007 |
jhb |
Add a new SX_RECURSE flag to make support for recursive exclusive locks conditional. By default, sx(9) locks are back to not supporting recursive exclusive locks.
Submitted by: attilio
|
#
169394 |
|
08-May-2007 |
jhb |
Add destroyed cookie values for sx locks and rwlocks as well as extra KASSERTs so that any lock operations on a destroyed lock will panic or hang.
|
#
168330 |
|
03-Apr-2007 |
kmacy |
Fixes to sx for newsx: fix the recursed case and move it out of inline.
Submitted by: Attilio Rao <attilio@freebsd.org>
|
#
168191 |
|
31-Mar-2007 |
jhb |
Optimize sx locks to use simple atomic operations for the common cases of obtaining and releasing shared and exclusive locks. The algorithms for manipulating the lock cookie are very similar to those of rwlocks. This patch also adds support for exclusive locks using the same algorithm as mutexes.
A new sx_init_flags() function has been added so that optional flags can be specified to alter a given lock's behavior. The flags include SX_DUPOK, SX_NOWITNESS, SX_NOPROFILE, and SX_QUIET, which are all identical in nature to the similar flags for mutexes.
Adaptive spinning on select locks may be enabled by enabling the ADAPTIVE_SX kernel option. Only locks initialized with the SX_ADAPTIVESPIN flag via sx_init_flags() will adaptively spin.
The common cases for sx_slock(), sx_sunlock(), sx_xlock(), and sx_xunlock() are now performed inline in non-debug kernels. As a result, <sys/sx.h> now requires <sys/lock.h> to be included prior to <sys/sx.h>.
The new kernel option SX_NOINLINE can be used to disable the aforementioned inlining in non-debug kernels.
The size of struct sx has changed, so the kernel ABI is probably greatly disturbed.
MFC after: 1 month Submitted by: attilio Tested by: kris, pjd
|
#
167787 |
|
21-Mar-2007 |
jhb |
Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes, rwlocks, and sx locks to 'lock_object'.
|
#
167387 |
|
09-Mar-2007 |
jhb |
Allow threads to atomically release rw and sx locks while waiting for an event. Locking primitives that support this (mtx, rw, and sx) now each include their own foo_sleep() routine.
- Rename msleep() to _sleep() and change its 'struct mtx' argument to a 'struct lock_object' pointer. _sleep() uses the recently added lc_unlock() and lc_lock() function pointers for the lock class of the specified lock to release the lock while the thread is suspended.
- Add wrappers around _sleep() for mutexes (mtx_sleep()), rw locks (rw_sleep()), and sx locks (sx_sleep()). msleep() still exists and is now identical to mtx_sleep(), but it is deprecated.
- Rename SLEEPQ_MSLEEP to SLEEPQ_SLEEP.
- Rewrite much of sleep.9 to not be msleep(9) centric.
- Flesh out the 'RETURN VALUES' section in sleep.9 and add an 'ERRORS' section.
- Add __nonnull(1) to _sleep() and msleep_spin() so that the compiler will warn if you try to pass a NULL wait channel. The functions already have a KASSERT to that effect.
|
#
161721 |
|
29-Aug-2006 |
jhb |
The _sx_assert() prototype should exist if either of INVARIANTS or INVARIANT_SUPPORT is defined so you can build a kernel with INVARIANT_SUPPORT, but build a module with just INVARIANTS on.
MFC after: 3 days Reported by: kuriyama
|
#
161337 |
|
15-Aug-2006 |
jhb |
Add a new 'show sleepchain' ddb command similar to 'show lockchain' except that it operates on lockmgr and sx locks. This can be useful for tracking down vnode deadlocks in VFS for example. Note that this command is a bit more fragile than 'show lockchain' as we have to poke around at the wait channel of a thread to see if it points to either a struct lock or a condition variable inside of a struct sx. If td_wchan points to something unmapped, then this command will terminate early due to a fault, but no harm will be done.
|
#
159844 |
|
21-Jun-2006 |
jhb |
Add a sx_xlocked() macro which returns true if the current thread holds an exclusive lock on the specified sx lock.
|
#
157296 |
|
30-Mar-2006 |
jhb |
Style fix.
|
#
149739 |
|
02-Sep-2005 |
jhb |
Add a SYSUNINIT() to SX_SYSINIT() to call sx_destroy() to destroy the sx lock when a module is unloaded similar to the recent change made to MTX_SYSINIT().
Suggested by: pjd MFC after: 3 days
|
#
139825 |
|
07-Jan-2005 |
imp |
/* -> /*- for license, minor formatting changes
|
#
131984 |
|
11-Jul-2004 |
darrenr |
Add sx_unlock() macro as a frontend to both sx_sunlock() and sx_xunlock(), using sx_cnt to determine what state the lock is in and calling the respective function appropriately.
|
#
125444 |
|
04-Feb-2004 |
bde |
Include <sys/queue.h> before <sys/_lock.h> instead of depending on namespace pollution in other headers. <sys/types.h> is now the only prerequisite for <sys/sx.h>.
Fixed some style bugs:
- Removed bogus LOCORE ifdef. Including this C header in assembler sources is just nonsense.
- Removed unused include of <sys/_mutex.h>. It finished rotting when the mutex in struct sx became indirect in rev.1.15.
- Removed most comments on #else and #endif's and cleaned up the others. All were misindented...
|
#
125419 |
|
04-Feb-2004 |
pjd |
Add an SX_UNLOCKED define. It will be used with sx_assert(9) to make sure that the current thread does not hold a given sx(9) lock.
Reviewed by: jhb Approved by: jhb, scottl (mentor)
|
#
93688 |
|
02-Apr-2002 |
arr |
- Make this compile if INVARIANTS support is not enabled.
|
#
93672 |
|
02-Apr-2002 |
arr |
- Add MTX_SYSINIT and SX_SYSINIT as macro glue for allowing sx and mtx locks to be able to set up a SYSINIT call. This helps in places where a lock is needed to protect some data, but the data is not truly associated with a subsystem that can properly initialize its lock. The macros use the mtx_sysinit() and sx_sysinit() functions, respectively, as the handler argument to SYSINIT().
Reviewed by: alfred, jhb, smp@
|
#
86333 |
|
13-Nov-2001 |
dillon |
Create a mutex pool API for short term leaf mutexes. Replace the manual mutex pool in kern_lock.c (lockmgr locks) with the new API. Replace the mutexes embedded in sxlocks with the new API.
|
#
85412 |
|
24-Oct-2001 |
jhb |
Fix this to actually compile in the !INVARIANTS case.
Reported by: Maxime Henrion <mux@qualys.com>
|
#
85388 |
|
23-Oct-2001 |
jhb |
Change the sx(9) assertion API to use a sx_assert() function similar to mtx_assert(9) rather than several SX_ASSERT_* macros.
|
#
83593 |
|
17-Sep-2001 |
jhb |
Use NULL instead of __FILE__ in the !LOCK_DEBUG case in the locking code since the filenames are only used in the LOCK_DEBUG case and are just bloat in the !LOCK_DEBUG case.
|
#
83366 |
|
12-Sep-2001 |
julian |
KSE Milestone 2. Note: ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there are smaller units of scheduling than the process (but only allow one thread per process at this time). This is functionally equivalent to the previous -current except that there is a thread associated with each process.
Sorry John! (your next MFC will be a doozy!)
Reviewed by: peter@freebsd.org, dillon@freebsd.org
X-MFC after: ha ha ha ha
|
#
83103 |
|
05-Sep-2001 |
jhb |
Include <sys/_lock.h> for the definition of struct lock_object. Don't understand why this wasn't added when _mutex.h was added.
Noticed by: jlemon
|
#
81599 |
|
13-Aug-2001 |
jasone |
Add sx_try_upgrade() and sx_downgrade().
Submitted by: Alexander Kabaev <ak03@gte.com>
|
#
78872 |
|
27-Jun-2001 |
jhb |
- Add trylock variants of shared and exclusive locks.
- The sx assertions don't actually need the internal sx mutex lock, so don't bother doing so.
- Add a new assertion SX_ASSERT_LOCKED() that asserts that either a shared or exclusive lock should be held. This assertion should be used instead of SX_ASSERT_SLOCKED() in almost all cases.
- Adjust some KASSERT()'s to include file and line information.
- Use the new witness_assert() function in the WITNESS case for sx slock asserts to verify that the current thread actually owns a slock.
|
#
76166 |
|
01-May-2001 |
markm |
Undo part of the tangle of having sys/lock.h and sys/mutex.h included in other "system" header files.
Also help the deprecation of lockmgr.h by making it a sub-include of sys/lock.h and removing sys/lockmgr.h from kernel .c files.
Sort sys/*.h includes where possible in affected files.
OK'ed by: bde (with reservations)
|
#
74912 |
|
28-Mar-2001 |
jhb |
Rework the witness code to work with sx locks as well as mutexes.
- Introduce lock classes and lock objects. Each lock class specifies a name and set of flags (or properties) shared by all locks of a given type. Currently there are three lock classes: spin mutexes, sleep mutexes, and sx locks. A lock object specifies properties of an additional lock along with a lock name and all of the extra stuff needed to make witness work with a given lock. This abstract lock stuff is defined in sys/lock.h. The lockmgr constants, types, and prototypes have been moved to sys/lockmgr.h. For temporary backwards compatibility, sys/lock.h includes sys/lockmgr.h.
- Replace proc->p_spinlocks with a per-CPU list, PCPU(spinlocks), of spin locks held. By making this per-cpu, we do not have to jump through magic hoops to deal with sched_lock changing ownership during context switches.
- Replace proc->p_heldmtx, formerly a list of held sleep mutexes, with proc->p_sleeplocks, which is a list of held sleep locks including sleep mutexes and sx locks.
- Add helper macros for logging lock events via the KTR_LOCK KTR logging level so that the log messages are consistent.
- Add some new flags that can be passed to mtx_init():
  - MTX_NOWITNESS - specifies that this lock should be ignored by witness. This is used for the mutex that blocks a sx lock for example.
  - MTX_QUIET - this is not new, but you can pass this to mtx_init() now and no events will be logged for this lock, so that one doesn't have to change all the individual mtx_lock/unlock() operations.
- All lock objects maintain an initialized flag. Use this flag to export a mtx_initialized() macro that can be safely called from drivers. Also, we no longer walk the all_mtx list if MUTEX_DEBUG is defined as witness performs the corresponding checks using the initialized flag.
- The lock order reversal messages have been improved to output slightly more accurate file and line numbers.
|
#
73901 |
|
06-Mar-2001 |
jhb |
In order to avoid recursing on the backing mutex for sx locks in the INVARIANTS case, define the actual KASSERT() in _SX_ASSERT_[SX]LOCKED macros that are used in the sx code itself and convert the SX_ASSERT_[SX]LOCKED macros to simple wrappers that grab the mutex for the duration of the check.
|
#
73900 |
|
06-Mar-2001 |
jhb |
Get the arguments to the KASSERT() printf() in SX_ASSERT_XLOCKED() in the proper order.
|
#
73874 |
|
06-Mar-2001 |
dwmalone |
Fix typo: define SX_ASSERT_XLOCKED not SX_ASSERT_XLOCKER in non-INVARIANTS case.
PR: 25567 Submitted by: nnd@mail.nsk.ru
|
#
73863 |
|
06-Mar-2001 |
bmilekic |
- Add sx_descr description member to sx lock structure.
- Add sx_xholder member to sx struct which is used for INVARIANTS-enabled assertions. It indicates the thread that presently owns the xlock.
- Add some assertions to the sx lock code that will detect the fatal API abuse:
  xlock --> xlock
  xlock --> slock
which now works thanks to sx_xholder. Notice that the remaining two problematic cases:
  slock --> xlock
  slock --> slock (a little less problematic, but still recursion)
will need to be handled by witness eventually, as they are more involved.
Reviewed by: jhb, jake, jasone
|
#
73782 |
|
05-Mar-2001 |
jasone |
Implement shared/exclusive locks.
Reviewed by: bmilekic, jake, jhb
|