Commit 2efbfb9
Fix concurrent locking of tuple update chain
If several sessions are concurrently locking a tuple update chain with nonconflicting lock modes using an old snapshot, and they all succeed, it may happen that some of them fail because of restarting the loop (due to a concurrent Xmax change) and getting an error in the subsequent pass while trying to obtain a tuple lock that they already have in some tuple version.

This can only happen with very high concurrency (where a row is being both updated and FK-checked by multiple transactions concurrently), but it's been observed in the field and can have unpleasant consequences, such as an FK check failing to see a tuple that definitely exists:

    ERROR: insert or update on table "child_table" violates foreign key constraint "fk_constraint_name"
    DETAIL: Key (keyid)=(123456) is not present in table "parent_table".

(where the key is observably present in the table).

Discussion: https://postgr.es/m/20170714210011.r25mrff4nxjhmf3g@alvherre.pgsql
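To make the failure mode concrete, here is a minimal standalone sketch; it is not PostgreSQL code, and the types and names below (TupleVersion, lock_update_chain, check_lock) are invented for illustration. It models a session walking a tuple's update chain twice: the second walk stands in for the restart heap_lock_tuple performs after a concurrent Xmax change, and it revisits versions the session already locked in the first pass. Treating "already locked by ourselves" as a failure makes the restart error out spuriously; skipping to the next version, as this commit does, lets it complete.

    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { MAY_BE_UPDATED, SELF_UPDATED } LockResult;

    typedef struct TupleVersion
    {
        int                  locker_xid; /* xid holding a lock; 0 = unlocked */
        struct TupleVersion *next;       /* next version in the update chain */
    } TupleVersion;

    /* Stand-in for test_lockmode_for_conflict: report a lock we already hold. */
    static LockResult
    check_lock(int locker_xid, int my_xid)
    {
        return (locker_xid == my_xid) ? SELF_UPDATED : MAY_BE_UPDATED;
    }

    /* Walk the update chain, locking every version for my_xid. */
    static bool
    lock_update_chain(TupleVersion *tup, int my_xid, bool skip_self_locked)
    {
        for (; tup != NULL; tup = tup->next)
        {
            if (tup->locker_xid != 0 &&
                check_lock(tup->locker_xid, my_xid) == SELF_UPDATED)
            {
                if (skip_self_locked)
                    continue;       /* the fix: move on to the next version */
                return false;       /* old behavior: report a bogus failure */
            }
            tup->locker_xid = my_xid;   /* acquire the lock on this version */
        }
        return true;
    }

    int
    main(void)
    {
        TupleVersion v2 = { 0, NULL };
        TupleVersion v1 = { 0, &v2 };

        /* First pass locks both versions of the chain. */
        lock_update_chain(&v1, 10, true);

        /*
         * A concurrent Xmax change forces a restart; the second pass now
         * meets our own locks in every version it revisits.
         */
        printf("restart, old behavior: %s\n",
               lock_update_chain(&v1, 10, false) ? "ok" : "spurious failure");
        printf("restart, with fix:     %s\n",
               lock_update_chain(&v1, 10, true) ? "ok" : "spurious failure");
        return 0;
    }

The continue in the sketch plays the role of the diff's goto next below: the lock we already hold is left alone and the walk simply advances to the next tuple version in the chain.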
1 parent 4ecee11 · commit 2efbfb9

File tree

1 file changed (+37 −4 lines)


src/backend/access/heap/heapam.c

Lines changed: 37 additions & 4 deletions

@@ -4906,8 +4906,10 @@ compute_new_xmax_infomask(TransactionId xmax, uint16 old_infomask,
  * with the given xid, does the current transaction need to wait, fail, or can
  * it continue if it wanted to acquire a lock of the given mode?  "needwait"
  * is set to true if waiting is necessary; if it can continue, then
- * HeapTupleMayBeUpdated is returned.  In case of a conflict, a different
- * HeapTupleSatisfiesUpdate return code is returned.
+ * HeapTupleMayBeUpdated is returned.  If the lock is already held by the
+ * current transaction, return HeapTupleSelfUpdated.  In case of a conflict
+ * with another transaction, a different HeapTupleSatisfiesUpdate return code
+ * is returned.
  *
  * The held status is said to be hypothetical because it might correspond to a
  * lock held by a single Xid, i.e. not a real MultiXactId; we express it this
@@ -4930,8 +4932,9 @@ test_lockmode_for_conflict(MultiXactStatus status, TransactionId xid,
 	if (TransactionIdIsCurrentTransactionId(xid))
 	{
 		/*
-		 * Updated by our own transaction? Just return failure.  This shouldn't
-		 * normally happen.
+		 * The tuple has already been locked by our own transaction.  This is
+		 * very rare but can happen if multiple transactions are trying to
+		 * lock an ancient version of the same tuple.
 		 */
 		return HeapTupleSelfUpdated;
 	}
@@ -5100,6 +5103,22 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 												 members[i].xid,
 												 mode, &needwait);
 
+					/*
+					 * If the tuple was already locked by ourselves in a
+					 * previous iteration of this (say heap_lock_tuple was
+					 * forced to restart the locking loop because of a change
+					 * in xmax), then we hold the lock already on this tuple
+					 * version and we don't need to do anything; and this is
+					 * not an error condition either.  We just need to skip
+					 * this tuple and continue locking the next version in the
+					 * update chain.
+					 */
+					if (res == HeapTupleSelfUpdated)
+					{
+						pfree(members);
+						goto next;
+					}
+
 					if (needwait)
 					{
 						LockBuffer(buf, BUFFER_LOCK_UNLOCK);
@@ -5160,6 +5179,19 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 
 				res = test_lockmode_for_conflict(status, rawxmax, mode,
 												 &needwait);
+
+				/*
+				 * If the tuple was already locked by ourselves in a previous
+				 * iteration of this (say heap_lock_tuple was forced to
+				 * restart the locking loop because of a change in xmax), then
+				 * we hold the lock already on this tuple version and we don't
+				 * need to do anything; and this is not an error condition
+				 * either.  We just need to skip this tuple and continue
+				 * locking the next version in the update chain.
+				 */
+				if (res == HeapTupleSelfUpdated)
+					goto next;
+
 				if (needwait)
 				{
 					LockBuffer(buf, BUFFER_LOCK_UNLOCK);
@@ -5221,6 +5253,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 
 		END_CRIT_SECTION();
 
+next:
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
 			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
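One detail of the fix worth noting, as read from the diff above: the two branches clean up differently before skipping. The MultiXact branch works on a palloc'd members array, so it must pfree(members) before jumping, while the plain-Xmax branch can goto next directly. The next: label sits immediately before the end-of-update-chain test, so a skipped version falls straight through to the check that advances the walk to the following tuple version.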

