Commit 1d95617

Fix serialization anomalies due to race conditions on INSERT.
On insert the CheckForSerializableConflictIn() test was performed before the page(s) which were going to be modified had been locked (with an exclusive buffer content lock). If another process acquired a relation SIReadLock on the heap and scanned to a page on which an insert was going to occur before the page was so locked, a rw-conflict would be missed, which could allow a serialization anomaly to be missed. The window between the check and the page lock was small, so the bug was generally not noticed unless there was high concurrency with multiple processes inserting into the same table.

This was reported by Peter Bailis as bug #11732, by Sean Chittenden as bug #13667, and by others.

The race condition was eliminated in heap_insert() by moving the check down below the acquisition of the buffer lock, which had been the very next statement. Because of the loop locking and unlocking multiple buffers in heap_multi_insert(), a check was added after all inserts were completed. The check before the start of the inserts was left because it might avoid a large amount of work to detect a serialization anomaly before performing all of the inserts and the related WAL logging.

While investigating this bug, other SSI bugs which were even harder to hit in practice were noticed and fixed, an unnecessary check (covered by another check, so redundant) was removed from heap_update(), and comments were improved.

Back-patch to all supported branches.

Kevin Grittner and Thomas Munro
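In outline, the heap_insert() part of the fix is just a reordering: the target buffer is now pinned and exclusively locked before the SSI check runs. The condensed sketch below is drawn only from the hunks in this commit; it is not compilable on its own, and WAL logging, visibility-map handling, and error paths are omitted:

	/* Before (racy): the conflict check ran while no buffer content lock was held. */
	heaptup = heap_prepare_insert(relation, tup, xid, cid, options);
	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);  /* window opens here */
	buffer = RelationGetBufferForTuple(relation, heaptup->t_len,    /* ...and closes here */
	                                   InvalidBuffer, options, bistate,
	                                   &vmbuffer, NULL);

	/* After (fixed): the exclusive buffer content lock is held continuously from
	 * before the check until the inserted tuple is visible to scans. */
	heaptup = heap_prepare_insert(relation, tup, xid, cid, options);
	buffer = RelationGetBufferForTuple(relation, heaptup->t_len,
	                                   InvalidBuffer, options, bistate,
	                                   &vmbuffer, NULL);
	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
	START_CRIT_SECTION();
	/* ... place the tuple, mark the buffer dirty, write WAL ... */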
1 parent 5554b30 · commit 1d95617

File tree

src/backend/access/heap/heapam.c
src/backend/storage/lmgr/predicate.c

2 files changed (+69, -31 lines)

src/backend/access/heap/heapam.c

Lines changed: 65 additions & 26 deletions
@@ -2067,26 +2067,31 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	 */
 	heaptup = heap_prepare_insert(relation, tup, xid, cid, options);
 
+	/*
+	 * Find buffer to insert this tuple into.  If the page is all visible,
+	 * this will also pin the requisite visibility map page.
+	 */
+	buffer = RelationGetBufferForTuple(relation, heaptup->t_len,
+									   InvalidBuffer, options, bistate,
+									   &vmbuffer, NULL);
+
 	/*
 	 * We're about to do the actual insert -- but check for conflict first, to
 	 * avoid possibly having to roll back work we've just done.
 	 *
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the page between this check and the insert
+	 * being visible to the scan (i.e., an exclusive buffer content lock is
+	 * continuously held from this point until the tuple insert is visible).
+	 *
 	 * For a heap insert, we only need to check for table-level SSI locks. Our
 	 * new tuple can't possibly conflict with existing tuple locks, and heap
 	 * page locks are only consolidated versions of tuple locks; they do not
-	 * lock "gaps" as index page locks do.  So we don't need to identify a
-	 * buffer before making the call.
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call, which makes for a faster check.
 	 */
 	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
 
-	/*
-	 * Find buffer to insert this tuple into.  If the page is all visible,
-	 * this will also pin the requisite visibility map page.
-	 */
-	buffer = RelationGetBufferForTuple(relation, heaptup->t_len,
-									   InvalidBuffer, options, bistate,
-									   &vmbuffer, NULL);
-
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -2340,13 +2345,26 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 
 	/*
 	 * We're about to do the actual inserts -- but check for conflict first,
-	 * to avoid possibly having to roll back work we've just done.
+	 * to minimize the possibility of having to roll back work we've just
+	 * done.
 	 *
-	 * For a heap insert, we only need to check for table-level SSI locks. Our
-	 * new tuple can't possibly conflict with existing tuple locks, and heap
+	 * A check here does not definitively prevent a serialization anomaly;
+	 * that check MUST be done at least past the point of acquiring an
+	 * exclusive buffer content lock on every buffer that will be affected,
+	 * and MAY be done after all inserts are reflected in the buffers and
+	 * those locks are released; otherwise there is a race condition.  Since
+	 * multiple buffers can be locked and unlocked in the loop below, and it
+	 * would not be feasible to identify and lock all of those buffers before
+	 * the loop, we must do a final check at the end.
+	 *
+	 * The check here could be omitted with no loss of correctness; it is
+	 * present strictly as an optimization.
+	 *
+	 * For heap inserts, we only need to check for table-level SSI locks. Our
+	 * new tuples can't possibly conflict with existing tuple locks, and heap
 	 * page locks are only consolidated versions of tuple locks; they do not
-	 * lock "gaps" as index page locks do.  So we don't need to identify a
-	 * buffer before making the call.
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call, which makes for a faster check.
 	 */
 	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
 
@@ -2538,6 +2556,22 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		ndone += nthispage;
 	}
 
+	/*
+	 * We're done with the actual inserts.  Check for conflicts again, to
+	 * ensure that all rw-conflicts in to these inserts are detected.  Without
+	 * this final check, a sequential scan of the heap may have locked the
+	 * table after the "before" check, missing one opportunity to detect the
+	 * conflict, and then scanned the table before the new tuples were there,
+	 * missing the other chance to detect the conflict.
+	 *
+	 * For heap inserts, we only need to check for table-level SSI locks. Our
+	 * new tuples can't possibly conflict with existing tuple locks, and heap
+	 * page locks are only consolidated versions of tuple locks; they do not
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call.
+	 */
+	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
+
 	/*
 	 * If tuples are cachable, mark them for invalidation from the caches in
 	 * case we abort.  Note it is OK to do this after releasing the buffer,
@@ -2828,6 +2862,11 @@ heap_delete(Relation relation, ItemPointer tid,
 	/*
 	 * We're about to do the actual delete -- check for conflict first, to
 	 * avoid possibly having to roll back work we've just done.
+	 *
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the page between this check and the delete
+	 * being visible to the scan (i.e., an exclusive buffer content lock is
+	 * continuously held from this point until the tuple delete is visible).
 	 */
 	CheckForSerializableConflictIn(relation, &tp, buffer);
 
@@ -3449,12 +3488,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		goto l2;
 	}
 
-	/*
-	 * We're about to do the actual update -- check for conflict first, to
-	 * avoid possibly having to roll back work we've just done.
-	 */
-	CheckForSerializableConflictIn(relation, &oldtup, buffer);
-
 	/* Fill in transaction status data */
 
 	/*
@@ -3643,14 +3676,20 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	}
 
 	/*
-	 * We're about to create the new tuple -- check for conflict first, to
+	 * We're about to do the actual update -- check for conflict first, to
 	 * avoid possibly having to roll back work we've just done.
 	 *
-	 * NOTE: For a tuple insert, we only need to check for table locks, since
-	 * predicate locking at the index level will cover ranges for anything
-	 * except a table scan.  Therefore, only provide the relation.
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the pages between this check and the update
+	 * being visible to the scan (i.e., exclusive buffer content lock(s) are
+	 * continuously held from this point until the tuple update is visible).
+	 *
+	 * For the new tuple the only check needed is at the relation level, but
+	 * since both tuples are in the same relation and the check for oldtup
+	 * will include checking the relation level, there is no benefit to a
+	 * separate check for the new tuple.
 	 */
-	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
+	CheckForSerializableConflictIn(relation, &oldtup, buffer);
 
 	/*
 	 * At this point newbuf and buffer are both pinned and locked, and newbuf
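Taken together, the heap_multi_insert() hunks above leave the conflict checks bracketing the insertion loop: an optional optimistic check before any buffers are touched, and the check that actually guarantees detection after all affected buffers have been filled and released. A condensed sketch of that shape, with the loop body paraphrased and its structure inferred from the surrounding diff context (ndone, nthispage, ntuples):

	/* Optimistic check: may save work, but cannot by itself prevent the
	 * anomaly, since no buffer content lock is held yet. */
	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);

	while (ndone < ntuples)
	{
		/* lock a target buffer, place as many tuples as fit, log, unlock */
		...
		ndone += nthispage;
	}

	/* Definitive check: runs after every affected buffer has been modified,
	 * so a scan that took its SIReadLock after the first check but read the
	 * pages before the new tuples appeared is still detected. */
	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);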

src/backend/storage/lmgr/predicate.c

Lines changed: 4 additions & 5 deletions
@@ -3212,22 +3212,21 @@ ReleasePredicateLocks(bool isCommit)
 		return;
 	}
 
+	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
+
 	Assert(!isCommit || SxactIsPrepared(MySerializableXact));
 	Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
 	Assert(!SxactIsCommitted(MySerializableXact));
 	Assert(!SxactIsRolledBack(MySerializableXact));
 
 	/* may not be serializable during COMMIT/ROLLBACK PREPARED */
-	if (MySerializableXact->pid != 0)
-		Assert(IsolationIsSerializable());
+	Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
 
 	/* We'd better not already be on the cleanup list. */
 	Assert(!SxactIsOnFinishedList(MySerializableXact));
 
 	topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
 
-	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
-
 	/*
 	 * We don't hold XidGenLock lock here, assuming that TransactionId is
 	 * atomic!
@@ -4364,7 +4363,7 @@ CheckTableForSerializableConflictIn(Relation relation)
 	LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
 	for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
 		LWLockAcquire(PredicateLockHashPartitionLockByIndex(i), LW_SHARED);
-	LWLockAcquire(SerializableXactHashLock, LW_SHARED);
+	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
 
 	/* Scan through target list */
 	hash_seq_init(&seqstat, PredicateLockTargetHash);
