Commit 19cefeb

Allow use of __sync_lock_test_and_set for spinlocks on any machine.
If we have no special-case code in s_lock.h for the current platform, but the compiler has __sync_lock_test_and_set, use that instead of failing. It's unlikely that anybody's __sync_lock_test_and_set would be so awful as to be worse than our semaphore-based fallback, but if it is, they can (continue to) use --disable-spinlocks.

This allows removal of the RISC-V special case installed by commit c32fcac, which generated exactly the same code but only on that platform. Usefully, the RISC-V buildfarm animals should now test at least the int variant of this patch.

I've manually tested both variants on ARM by dint of removing the ARM-specific stanza. We don't want to drop that, because it already has some special knowledge and is likely to grow more over time. Likewise, this is not meant to preclude installing special cases for other arches if that proves worthwhile.

Per discussion of a request to install the same code for loongarch64. Like the previous patch, we might as well back-patch to supported branches.

Discussion: https://postgr.es/m/761ac43d44b84d679ba803c2bd947cc0@HSMAILSVR04.hs.handsome.com.cn
1 parent b3326a7 · commit 19cefeb
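For context, the sketch below is not part of the commit; the demo_* names are invented for illustration. It shows how a test-and-set spinlock can be built from the two GCC builtins the commit relies on, which is essentially the shape of the generic fallback added to s_lock.h:

#include <sched.h>				/* sched_yield(), used here only as a crude backoff */

typedef int demo_slock_t;		/* hypothetical name; the patch's int-width fallback typedefs slock_t as int */

/*
 * __sync_lock_test_and_set(lock, 1) atomically stores 1 and returns the
 * previous value, with acquire-barrier semantics.  A nonzero return means
 * another thread already held the lock, so keep retrying.
 */
static inline void
demo_spin_lock(volatile demo_slock_t *lock)
{
	while (__sync_lock_test_and_set(lock, 1) != 0)
		sched_yield();			/* PostgreSQL instead backs off with its own spin/delay loop */
}

/*
 * __sync_lock_release(lock) stores 0 with release-barrier semantics, so the
 * critical section's writes are visible before the lock appears free.
 */
static inline void
demo_spin_unlock(volatile demo_slock_t *lock)
{
	__sync_lock_release(lock);
}

As the diff below notes, the int-width variant is preferred over the char-width one when the compiler offers both.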

File tree

1 file changed: +45 -23 lines changed


src/include/storage/s_lock.h

Lines changed: 45 additions & 23 deletions
@@ -341,29 +341,6 @@ tas(volatile slock_t *lock)
 #endif	 /* __arm__ || __arm || __aarch64__ || __aarch64 */
 
 
-/*
- * RISC-V likewise uses __sync_lock_test_and_set(int *, int) if available.
- */
-#if defined(__riscv)
-#ifdef HAVE_GCC__SYNC_INT32_TAS
-#define HAS_TEST_AND_SET
-
-#define TAS(lock) tas(lock)
-
-typedef int slock_t;
-
-static __inline__ int
-tas(volatile slock_t *lock)
-{
-	return __sync_lock_test_and_set(lock, 1);
-}
-
-#define S_UNLOCK(lock) __sync_lock_release(lock)
-
-#endif	/* HAVE_GCC__SYNC_INT32_TAS */
-#endif	/* __riscv */
-
-
 /* S/390 and S/390x Linux (32- and 64-bit zSeries) */
 #if defined(__s390__) || defined(__s390x__)
 #define HAS_TEST_AND_SET
@@ -748,6 +725,51 @@ tas(volatile slock_t *lock)
 typedef unsigned char slock_t;
 #endif
 
+
+/*
+ * If we have no platform-specific knowledge, but we found that the compiler
+ * provides __sync_lock_test_and_set(), use that.  Prefer the int-width
+ * version over the char-width version if we have both, on the rather dubious
+ * grounds that that's known to be more likely to work in the ARM ecosystem.
+ * (But we dealt with ARM above.)
+ */
+#if !defined(HAS_TEST_AND_SET)
+
+#if defined(HAVE_GCC__SYNC_INT32_TAS)
+#define HAS_TEST_AND_SET
+
+#define TAS(lock) tas(lock)
+
+typedef int slock_t;
+
+static __inline__ int
+tas(volatile slock_t *lock)
+{
+	return __sync_lock_test_and_set(lock, 1);
+}
+
+#define S_UNLOCK(lock) __sync_lock_release(lock)
+
+#elif defined(HAVE_GCC__SYNC_CHAR_TAS)
+#define HAS_TEST_AND_SET
+
+#define TAS(lock) tas(lock)
+
+typedef char slock_t;
+
+static __inline__ int
+tas(volatile slock_t *lock)
+{
+	return __sync_lock_test_and_set(lock, 1);
+}
+
+#define S_UNLOCK(lock) __sync_lock_release(lock)
+
+#endif	/* HAVE_GCC__SYNC_INT32_TAS */
+
+#endif	/* !defined(HAS_TEST_AND_SET) */
+
+
 /*
  * Default implementation of S_UNLOCK() for gcc/icc.
  *
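As a rough usage sketch, the primitives defined by the new fallback are consumed roughly as below. This is illustrative only: real PostgreSQL code goes through the SpinLockAcquire()/SpinLockRelease() wrappers, and on contention S_LOCK() backs off via a tuned spin-and-delay loop rather than busy-waiting bare as this does.

/* Illustrative only; demo_* names are invented for this sketch. */
static slock_t demo_lock;		/* a real spinlock would live in shared memory */

static void
demo_critical_section(void)
{
	while (TAS(&demo_lock))
		;						/* spin until the previous holder runs S_UNLOCK() */

	/* ... touch the shared state the spinlock protects ... */

	S_UNLOCK(&demo_lock);
}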

