Commit 008c119

Add basic spinlock tests to regression tests.

As s_lock_test, the already existing test for spinlocks, isn't run in an automated fashion (and doesn't test a normal backend environment), adding tests that are run as part of a normal regression run is a good idea. Particularly in light of several recent and upcoming spinlock related fixes.

Currently the new tests are run as part of the pre-existing test_atomic_ops() test. That perhaps can be quibbled about, but for now seems ok.

The only operations that s_lock_test tests but the new tests don't are the detection of a stuck spinlock and S_LOCK_FREE (which is otherwise unused, not implemented on all platforms, and will be removed).

This currently contains a test for more than INT_MAX spinlocks (only run with --disable-spinlocks), to ensure the recent commit fixing a bug with more than INT_MAX spinlock initializations is correct. That test is somewhat slow, so we might want to disable it after a few days.

It might be worth retiring s_lock_test after this. The added coverage of a stuck spinlock probably isn't worth the added complexity?

Author: Andres Freund
Discussion: https://postgr.es/m/20200606023103.avzrctgv7476xj7i@alap3.anarazel.de

1 parent: 3b8210d · commit: 008c119

File tree

1 file changed: +109 −0 lines changed


src/test/regress/regress.c

Lines changed: 109 additions & 0 deletions
@@ -34,6 +34,7 @@
 #include "optimizer/optimizer.h"
 #include "optimizer/plancat.h"
 #include "port/atomics.h"
+#include "storage/spin.h"
 #include "utils/builtins.h"
 #include "utils/geo_decls.h"
 #include "utils/rel.h"
@@ -795,6 +796,108 @@ test_atomic_uint64(void)
 	EXPECT_EQ_U64(pg_atomic_fetch_and_u64(&var, ~0), 0);
 }
 
+/*
+ * Perform, fairly minimal, testing of the spinlock implementation.
+ *
+ * It's likely worth expanding these to actually test concurrency etc, but
+ * having some regularly run tests is better than none.
+ */
+static void
+test_spinlock(void)
+{
+	/*
+	 * Basic tests for spinlocks, as well as the underlying operations.
+	 *
+	 * We embed the spinlock in a struct with other members to test that the
+	 * spinlock operations don't perform too wide writes.
+	 */
+	{
+		struct test_lock_struct
+		{
+			char		data_before[4];
+			slock_t		lock;
+			char		data_after[4];
+		}			struct_w_lock;
+
+		memcpy(struct_w_lock.data_before, "abcd", 4);
+		memcpy(struct_w_lock.data_after, "ef12", 4);
+
+		/* test basic operations via the SpinLock* API */
+		SpinLockInit(&struct_w_lock.lock);
+		SpinLockAcquire(&struct_w_lock.lock);
+		SpinLockRelease(&struct_w_lock.lock);
+
+		/* test basic operations via underlying S_* API */
+		S_INIT_LOCK(&struct_w_lock.lock);
+		S_LOCK(&struct_w_lock.lock);
+		S_UNLOCK(&struct_w_lock.lock);
+
+		/* and that "contended" acquisition works */
+		s_lock(&struct_w_lock.lock, "testfile", 17, "testfunc");
+		S_UNLOCK(&struct_w_lock.lock);
+
+		/*
+		 * Check, using TAS directly, that a single spin cycle doesn't block
+		 * when acquiring an already acquired lock.
+		 */
+#ifdef TAS
+		S_LOCK(&struct_w_lock.lock);
+
+		if (!TAS(&struct_w_lock.lock))
+			elog(ERROR, "acquired already held spinlock");
+
+#ifdef TAS_SPIN
+		if (!TAS_SPIN(&struct_w_lock.lock))
+			elog(ERROR, "acquired already held spinlock");
+#endif							/* defined(TAS_SPIN) */
+
+		S_UNLOCK(&struct_w_lock.lock);
+#endif							/* defined(TAS) */
+
+		/*
+		 * Verify that after all of this the non-lock contents are still
+		 * correct.
+		 */
+		if (memcmp(struct_w_lock.data_before, "abcd", 4) != 0)
+			elog(ERROR, "padding before spinlock modified");
+		if (memcmp(struct_w_lock.data_after, "ef12", 4) != 0)
+			elog(ERROR, "padding after spinlock modified");
+	}
+
+	/*
+	 * Ensure that allocating more than INT32_MAX emulated spinlocks
+	 * works. That's interesting because the spinlock emulation uses a 32bit
+	 * integer to map spinlocks onto semaphores. There've been bugs...
+	 */
+#ifndef HAVE_SPINLOCKS
+	{
+		/*
+		 * Initialize enough spinlocks to advance counter close to
+		 * wraparound. It's too expensive to perform acquire/release for each,
+		 * as those may be syscalls when the spinlock emulation is used (and
+		 * even just atomic TAS would be expensive).
+		 */
+		for (uint32 i = 0; i < INT32_MAX - 100000; i++)
+		{
+			slock_t		lock;
+
+			SpinLockInit(&lock);
+		}
+
+		for (uint32 i = 0; i < 200000; i++)
+		{
+			slock_t		lock;
+
+			SpinLockInit(&lock);
+
+			SpinLockAcquire(&lock);
+			SpinLockRelease(&lock);
+			SpinLockAcquire(&lock);
+			SpinLockRelease(&lock);
+		}
+	}
+#endif
+}
 
 PG_FUNCTION_INFO_V1(test_atomic_ops);
 Datum
@@ -806,6 +909,12 @@ test_atomic_ops(PG_FUNCTION_ARGS)
 
 	test_atomic_uint64();
 
+	/*
+	 * Arguably this shouldn't be tested as part of this function, but it's
+	 * closely enough related that that seems ok for now.
+	 */
+	test_spinlock();
+
 	PG_RETURN_BOOL(true);
 }

