Commit 3f66e1c

Add basic spinlock tests to regression tests.
As s_lock_test, the already existing test for spinlocks, isn't run in an automated fashion (and doesn't test a normal backend environment), adding tests that are run as part of a normal regression run is a good idea. Particularly in light of several recent and upcoming spinlock related fixes.

Currently the new tests are run as part of the pre-existing test_atomic_ops() test. That perhaps can be quibbled about, but for now seems ok.

The only operations that s_lock_test tests but the new tests don't are the detection of a stuck spinlock and S_LOCK_FREE (which is otherwise unused, not implemented on all platforms, and will be removed).

This currently contains a test for more than INT_MAX spinlocks (only run with --disable-spinlocks), to ensure the recent commit fixing a bug with more than INT_MAX spinlock initializations is correct. That test is somewhat slow, so we might want to disable it after a few days.

It might be worth retiring s_lock_test after this. The added coverage of a stuck spinlock probably isn't worth the added complexity?

Author: Andres Freund
Discussion: https://postgr.es/m/20200606023103.avzrctgv7476xj7i@alap3.anarazel.de
1 parent 28fa048 · commit 3f66e1c

File tree

1 file changed: +109 −0 lines changed


src/test/regress/regress.c

Lines changed: 109 additions & 0 deletions
```diff
@@ -31,6 +31,7 @@
 #include "executor/spi.h"
 #include "miscadmin.h"
 #include "port/atomics.h"
+#include "storage/spin.h"
 #include "utils/builtins.h"
 #include "utils/geo_decls.h"
 #include "utils/rel.h"
@@ -1036,6 +1037,108 @@ test_atomic_uint64(void)
 	EXPECT_EQ_U64(pg_atomic_fetch_and_u64(&var, ~0), 0);
 }
 
+/*
+ * Perform, fairly minimal, testing of the spinlock implementation.
+ *
+ * It's likely worth expanding these to actually test concurrency etc, but
+ * having some regularly run tests is better than none.
+ */
+static void
+test_spinlock(void)
+{
+	/*
+	 * Basic tests for spinlocks, as well as the underlying operations.
+	 *
+	 * We embed the spinlock in a struct with other members to test that the
+	 * spinlock operations don't perform too wide writes.
+	 */
+	{
+		struct test_lock_struct
+		{
+			char		data_before[4];
+			slock_t		lock;
+			char		data_after[4];
+		}			struct_w_lock;
+
+		memcpy(struct_w_lock.data_before, "abcd", 4);
+		memcpy(struct_w_lock.data_after, "ef12", 4);
+
+		/* test basic operations via the SpinLock* API */
+		SpinLockInit(&struct_w_lock.lock);
+		SpinLockAcquire(&struct_w_lock.lock);
+		SpinLockRelease(&struct_w_lock.lock);
+
+		/* test basic operations via underlying S_* API */
+		S_INIT_LOCK(&struct_w_lock.lock);
+		S_LOCK(&struct_w_lock.lock);
+		S_UNLOCK(&struct_w_lock.lock);
+
+		/* and that "contended" acquisition works */
+		s_lock(&struct_w_lock.lock, "testfile", 17, "testfunc");
+		S_UNLOCK(&struct_w_lock.lock);
+
+		/*
+		 * Check, using TAS directly, that a single spin cycle doesn't block
+		 * when acquiring an already acquired lock.
+		 */
+#ifdef TAS
+		S_LOCK(&struct_w_lock.lock);
+
+		if (!TAS(&struct_w_lock.lock))
+			elog(ERROR, "acquired already held spinlock");
+
+#ifdef TAS_SPIN
+		if (!TAS_SPIN(&struct_w_lock.lock))
+			elog(ERROR, "acquired already held spinlock");
+#endif							/* defined(TAS_SPIN) */
+
+		S_UNLOCK(&struct_w_lock.lock);
+#endif							/* defined(TAS) */
+
+		/*
+		 * Verify that after all of this the non-lock contents are still
+		 * correct.
+		 */
+		if (memcmp(struct_w_lock.data_before, "abcd", 4) != 0)
+			elog(ERROR, "padding before spinlock modified");
+		if (memcmp(struct_w_lock.data_after, "ef12", 4) != 0)
+			elog(ERROR, "padding after spinlock modified");
+	}
+
+	/*
+	 * Ensure that allocating more than INT32_MAX emulated spinlocks
+	 * works. That's interesting because the spinlock emulation uses a 32bit
+	 * integer to map spinlocks onto semaphores. There've been bugs...
+	 */
+#ifndef HAVE_SPINLOCKS
+	{
+		/*
+		 * Initialize enough spinlocks to advance counter close to
+		 * wraparound. It's too expensive to perform acquire/release for each,
+		 * as those may be syscalls when the spinlock emulation is used (and
+		 * even just atomic TAS would be expensive).
+		 */
+		for (uint32 i = 0; i < INT32_MAX - 100000; i++)
+		{
+			slock_t		lock;
+
+			SpinLockInit(&lock);
+		}
+
+		for (uint32 i = 0; i < 200000; i++)
+		{
+			slock_t		lock;
+
+			SpinLockInit(&lock);
+
+			SpinLockAcquire(&lock);
+			SpinLockRelease(&lock);
+			SpinLockAcquire(&lock);
+			SpinLockRelease(&lock);
+		}
+	}
+#endif
+}
+
 PG_FUNCTION_INFO_V1(test_atomic_ops);
 Datum
@@ -1047,5 +1150,11 @@ test_atomic_ops(PG_FUNCTION_ARGS)
 
 	test_atomic_uint64();
 
+	/*
+	 * Arguably this shouldn't be tested as part of this function, but it's
+	 * closely enough related that that seems ok for now.
+	 */
+	test_spinlock();
+
 	PG_RETURN_BOOL(true);
 }
```

