Commit be1e805
Use a non-locking test in TAS_SPIN() on all IA64 platforms.
Per my testing, this works just as well with gcc as it does with HP's
compiler; and there is no reason to think that the effect doesn't occur
with icc, either.

Also, rewrite the header comment about enforcing sequencing around spinlock
operations, per Robert's gripe that it was misleading.

1 parent c01c25f · commit be1e805

File tree

1 file changed (+22, -6)

src/include/storage/s_lock.h

Lines changed: 22 additions & 6 deletions
@@ -41,7 +41,7 @@
  *
  *	int TAS_SPIN(slock_t *lock)
  *		Like TAS(), but this version is used when waiting for a lock
- *		previously found to be contended.  Typically, this is the
+ *		previously found to be contended.  By default, this is the
  *		same as TAS(), but on some architectures it's better to poll a
  *		contended lock using an unlocked instruction and retry the
  *		atomic test-and-set only when it appears free.
@@ -54,10 +54,22 @@
  *	on Alpha TAS() will "fail" if interrupted.  Therefore a retry loop must
  *	always be used, even if you are certain the lock is free.
  *
- *	ANOTHER CAUTION: be sure that TAS(), TAS_SPIN(), and S_UNLOCK() represent
- *	sequence points, ie, loads and stores of other values must not be moved
- *	across a lock or unlock.  In most cases it suffices to make the operation
- *	be done through a "volatile" pointer.
+ *	Another caution for users of these macros is that it is the caller's
+ *	responsibility to ensure that the compiler doesn't re-order accesses
+ *	to shared memory to precede the actual lock acquisition, or follow the
+ *	lock release.  Typically we handle this by using volatile-qualified
+ *	pointers to refer to both the spinlock itself and the shared data
+ *	structure being accessed within the spinlocked critical section.
+ *	That fixes it because compilers are not allowed to re-order accesses
+ *	to volatile objects relative to other such accesses.
+ *
+ *	On platforms with weak memory ordering, the TAS(), TAS_SPIN(), and
+ *	S_UNLOCK() macros must further include hardware-level memory fence
+ *	instructions to prevent similar re-ordering at the hardware level.
+ *	TAS() and TAS_SPIN() must guarantee that loads and stores issued after
+ *	the macro are not executed until the lock has been obtained.  Conversely,
+ *	S_UNLOCK() must guarantee that loads and stores issued before the macro
+ *	have been executed before the lock is released.
  *
  *	On most supported platforms, TAS() uses a tas() function written
  *	in assembly language to execute a hardware atomic-test-and-set
@@ -229,6 +241,9 @@ typedef unsigned int slock_t;
 
 #define TAS(lock) tas(lock)
 
+/* On IA64, it's a win to use a non-locking test before the xchg proper */
+#define TAS_SPIN(lock)	(*(lock) ? 1 : TAS(lock))
+
 #ifndef __INTEL_COMPILER
 
 static __inline__ int
@@ -735,7 +750,8 @@ typedef unsigned int slock_t;
 
 #include <ia64/sys/inline.h>
 #define TAS(lock) _Asm_xchg(_SZ_W, lock, 1, _LDHINT_NONE)
-#define TAS_SPIN(lock) (*(lock) ? 1 : TAS(lock))
+/* On IA64, it's a win to use a non-locking test before the xchg proper */
+#define TAS_SPIN(lock)	(*(lock) ? 1 : TAS(lock))
 
 #endif	/* HPUX on IA64, non gcc */
 
