Commit 4893ccd

Browse files
committed
Remove swpb-based spinlock implementation for ARMv5 and earlier.
Per recent analysis by Andres Freund, this implementation is in fact unsafe, because ARMv5 has weak memory ordering, which means that the CPU could move loads or stores across the volatile store performed by the default S_UNLOCK. We could try to fix this, but have no ARMv5 hardware to test on, so removing support seems better. We can still support ARMv5 systems on GCC versions new enough to have built-in atomics support for this platform, and can also re-add support for the old way if someone has hardware that can be used to test a fix. However, since the requirement to use a relatively new GCC hasn't been an issue for ARMv6 or ARMv7, which lack the swpb instruction altogether, perhaps it won't be an issue for ARMv5 either.
1 parent 1b86c81, commit 4893ccd
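For context on why the surviving path is safe, here is a minimal sketch (illustrative only, not part of the commit; the my_* names are made up) of a test-and-set spinlock built on the same GCC builtins. Per GCC's documentation, __sync_lock_test_and_set is an acquire barrier and __sync_lock_release is a release barrier, which is precisely the ordering guarantee that the swpb-plus-plain-volatile-store pattern lacked on weakly ordered ARMv5:

typedef int my_slock_t;			/* int width: works on the most chips */

static inline void
my_spin_lock(volatile my_slock_t *lock)
{
	/*
	 * Atomically store 1 and return the previous value.  The builtin is
	 * an acquire barrier, so loads/stores in the critical section cannot
	 * be hoisted above it.
	 */
	while (__sync_lock_test_and_set(lock, 1) != 0)
		;						/* spin until the previous value was 0 */
}

static inline void
my_spin_unlock(volatile my_slock_t *lock)
{
	/*
	 * Store 0 with release semantics, so earlier loads/stores cannot sink
	 * below it.  A plain "*lock = 0" would compile, but gives no such
	 * guarantee on a weakly ordered CPU -- the bug described above.
	 */
	__sync_lock_release(lock);
}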

1 file changed: +6 -48 lines

‎src/include/storage/s_lock.h

Lines changed: 6 additions & 48 deletions
@@ -300,55 +300,13 @@ tas(volatile slock_t *lock)
 #endif   /* __INTEL_COMPILER */
 #endif   /* __ia64__ || __ia64 */
 
-
 /*
- * On ARM, we use __sync_lock_test_and_set(int *, int) if available, and if
- * not fall back on the SWPB instruction.  SWPB does not work on ARMv6 or
- * later, so the compiler builtin is preferred if available.  Note also that
- * the int-width variant of the builtin works on more chips than other widths.
- */
-#if defined(__arm__) || defined(__arm)
-#define HAS_TEST_AND_SET
-
-#define TAS(lock) tas(lock)
-
-#ifdef HAVE_GCC_INT_ATOMICS
-
-typedef int slock_t;
-
-static __inline__ int
-tas(volatile slock_t *lock)
-{
-	return __sync_lock_test_and_set(lock, 1);
-}
-
-#define S_UNLOCK(lock) __sync_lock_release(lock)
-
-#else /* !HAVE_GCC_INT_ATOMICS */
-
-typedef unsigned char slock_t;
-
-static __inline__ int
-tas(volatile slock_t *lock)
-{
-	register slock_t _res = 1;
-
-	__asm__ __volatile__(
-		"	swpb	%0, %0, [%2]	\n"
-:		"+r"(_res), "+m"(*lock)
-:		"r"(lock)
-:		"memory");
-	return (int) _res;
-}
-
-#endif   /* HAVE_GCC_INT_ATOMICS */
-#endif   /* __arm__ */
-
-
-/*
- * On ARM64, we use __sync_lock_test_and_set(int *, int) if available.
+ * On ARM and ARM64, we use __sync_lock_test_and_set(int *, int) if available.
+ *
+ * We use the int-width variant of the builtin because it works on more chips
+ * than other widths.
  */
-#if defined(__aarch64__) || defined(__aarch64)
+#if defined(__arm__) || defined(__arm) || defined(__aarch64__) || defined(__aarch64)
 #ifdef HAVE_GCC_INT_ATOMICS
 #define HAS_TEST_AND_SET
 
@@ -365,7 +323,7 @@ tas(volatile slock_t *lock)
 #define S_UNLOCK(lock) __sync_lock_release(lock)
 
 #endif   /* HAVE_GCC_INT_ATOMICS */
-#endif   /* __aarch64__ */
+#endif   /* __arm__ || __arm || __aarch64__ || __aarch64 */
 
 
 /* S/390 and S/390x Linux (32- and 64-bit zSeries) */
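As a usage note, TAS() returns nonzero when the lock was already held, so callers spin until it returns zero. A minimal sketch under that assumption (bump_counter and shared_counter are made-up names; real PostgreSQL code reaches TAS() through SpinLockAcquire(), which adds backoff and stuck-spinlock detection on top of this bare loop):

static slock_t	my_lock;		/* zero-initialized == unlocked */
static int		shared_counter;

static void
bump_counter(void)
{
	while (TAS(&my_lock))		/* nonzero: someone else holds the lock */
		;						/* spin; production code should back off */

	shared_counter++;			/* critical section */

	S_UNLOCK(&my_lock);			/* release store publishes the update */
}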

0 commit comments