Commit 14a91a8
Avoid spurious deadlocks when upgrading a tuple lock
When two (or more) transactions are waiting for transaction T1 to release a
tuple-level lock, and transaction T1 upgrades its lock to a higher level, a
spurious deadlock can be reported among the waiting transactions when T1
finishes.  The simplest example case seems to be:

T1: select id from job where name = 'a' for key share;
Y: select id from job where name = 'a' for update; -- starts waiting for T1
Z: select id from job where name = 'a' for key share;
T1: update job set name = 'b' where id = 1;
Z: update job set name = 'c' where id = 1; -- starts waiting for T1
T1: rollback;

At this point, transaction Y is rolled back on account of a deadlock: Y
holds the heavyweight tuple lock and is waiting for the Xmax to be released,
while Z holds part of the multixact and tries to acquire the heavyweight
lock (per protocol) and goes to sleep; once T1 releases its part of the
multixact, Z is awakened only to be put back to sleep on the heavyweight
lock that Y is holding while sleeping.  Kaboom.

This can be avoided by having Z skip the heavyweight lock acquisition.  As
far as I can see, the biggest downside is that if there are multiple Z
transactions, the order in which they resume after T1 finishes is not
guaranteed.

Backpatch to 9.6.  The patch applies cleanly on 9.5, but the new tests don't
work there (because isolationtester is not smart enough), so I'm not going
to risk it.

Author: Oleksii Kliukin
Discussion: https://postgr.es/m/B9C9D7CD-EB94-4635-91B6-E558ACEC0EC3@hintbits.com
1 parent 945ae92 · commit 14a91a8

File tree

5 files changed: +281 -21 lines changed

‎src/backend/access/heap/README.tuplock

Lines changed: 10 additions & 0 deletions

@@ -36,6 +36,16 @@ do LockTuple as well, if there is any conflict, to ensure that they don't
 starve out waiting exclusive-lockers.  However, if there is not any active
 conflict for a tuple, we don't incur any extra overhead.
 
+We make an exception to the above rule for those lockers that already hold
+some lock on a tuple and attempt to acquire a stronger one on it.  In that
+case, we skip the LockTuple() call even when there are conflicts, provided
+that the target tuple is being locked, updated or deleted by multiple sessions
+concurrently.  Failing to skip the lock would risk a deadlock, e.g., between a
+session that was first to record its weaker lock in the tuple header and would
+be waiting on the LockTuple() call to upgrade to the stronger lock level, and
+another session that has already done LockTuple() and is waiting for the first
+session transaction to release its tuple header-level lock.
+
 We provide four levels of tuple locking strength: SELECT FOR UPDATE obtains an
 exclusive lock which prevents any kind of modification of the tuple.  This is
 the lock level that is implicitly taken by DELETE operations, and also by

‎src/backend/access/heap/heapam.c

Lines changed: 63 additions & 21 deletions

@@ -115,7 +115,7 @@ static void GetMultiXactIdHintBits(MultiXactId multi, uint16 *new_infomask,
 static TransactionId MultiXactIdGetUpdateXid(TransactionId xmax,
 						 uint16 t_infomask);
 static bool DoesMultiXactIdConflict(MultiXactId multi, uint16 infomask,
-						LockTupleMode lockmode);
+						LockTupleMode lockmode, bool *current_is_member);
 static void MultiXactIdWait(MultiXactId multi, MultiXactStatus status, uint16 infomask,
 				Relation rel, ItemPointer ctid, XLTW_Oper oper,
 				int *remaining);
@@ -3112,15 +3112,20 @@ heap_delete(Relation relation, ItemPointer tid,
 		 */
 		if (infomask & HEAP_XMAX_IS_MULTI)
 		{
-			/* wait for multixact */
+			bool		current_is_member = false;
+
 			if (DoesMultiXactIdConflict((MultiXactId) xwait, infomask,
-										LockTupleExclusive))
+										LockTupleExclusive, &current_is_member))
 			{
 				LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
 
-				/* acquire tuple lock, if necessary */
-				heap_acquire_tuplock(relation, &(tp.t_self), LockTupleExclusive,
-									 LockWaitBlock, &have_tuple_lock);
+				/*
+				 * Acquire the lock, if necessary (but skip it when we're
+				 * requesting a lock and already have one; avoids deadlock).
+				 */
+				if (!current_is_member)
+					heap_acquire_tuplock(relation, &(tp.t_self), LockTupleExclusive,
+										 LockWaitBlock, &have_tuple_lock);
 
 				/* wait for multixact */
 				MultiXactIdWait((MultiXactId) xwait, MultiXactStatusUpdate, infomask,
@@ -3710,15 +3715,20 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		{
 			TransactionId update_xact;
 			int			remain;
+			bool		current_is_member = false;
 
 			if (DoesMultiXactIdConflict((MultiXactId) xwait, infomask,
-										*lockmode))
+										*lockmode, &current_is_member))
 			{
 				LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
 
-				/* acquire tuple lock, if necessary */
-				heap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
-									 LockWaitBlock, &have_tuple_lock);
+				/*
+				 * Acquire the lock, if necessary (but skip it when we're
+				 * requesting a lock and already have one; avoids deadlock).
+				 */
+				if (!current_is_member)
+					heap_acquire_tuplock(relation, &(oldtup.t_self), *lockmode,
+										 LockWaitBlock, &have_tuple_lock);
 
 				/* wait for multixact */
 				MultiXactIdWait((MultiXactId) xwait, mxact_status, infomask,
@@ -4600,6 +4610,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	uint16		infomask;
 	uint16		infomask2;
 	bool		require_sleep;
+	bool		skip_tuple_lock;
 	ItemPointerData t_ctid;
 
 	/* must copy state data before unlocking buffer */
@@ -4625,6 +4636,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	if (first_time)
 	{
 		first_time = false;
+		skip_tuple_lock = false;
 
 		if (infomask & HEAP_XMAX_IS_MULTI)
 		{
@@ -4653,6 +4665,21 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 				result = HeapTupleMayBeUpdated;
 				goto out_unlocked;
 			}
+			else
+			{
+				/*
+				 * Disable acquisition of the heavyweight tuple lock.
+				 * Otherwise, when promoting a weaker lock, we might
+				 * deadlock with another locker that has acquired the
+				 * heavyweight tuple lock and is waiting for our
+				 * transaction to finish.
+				 *
+				 * Note that in this case we still need to wait for
+				 * the multixact if required, to avoid acquiring
+				 * conflicting locks.
+				 */
+				skip_tuple_lock = true;
+			}
 		}
 
 		if (members)
@@ -4807,7 +4834,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 			if (infomask & HEAP_XMAX_IS_MULTI)
 			{
 				if (!DoesMultiXactIdConflict((MultiXactId) xwait, infomask,
-											 mode))
+											 mode, NULL))
 				{
 					/*
 					 * No conflict, but if the xmax changed under us in the
@@ -4884,13 +4911,15 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 		/*
 		 * Acquire tuple lock to establish our priority for the tuple, or
 		 * die trying.  LockTuple will release us when we are next-in-line
-		 * for the tuple.  We must do this even if we are share-locking.
+		 * for the tuple.  We must do this even if we are share-locking,
+		 * but not if we already have a weaker lock on the tuple.
 		 *
 		 * If we are forced to "start over" below, we keep the tuple lock;
 		 * this arranges that we stay at the head of the line while
 		 * rechecking tuple state.
 		 */
-		if (!heap_acquire_tuplock(relation, tid, mode, wait_policy,
+		if (!skip_tuple_lock &&
+			!heap_acquire_tuplock(relation, tid, mode, wait_policy,
 								  &have_tuple_lock))
 		{
 			/*
@@ -7062,10 +7091,13 @@ HeapTupleGetUpdateXid(HeapTupleHeader tuple)
  * tuple lock of the given strength?
  *
  * The passed infomask pairs up with the given multixact in the tuple header.
+ *
+ * If current_is_member is not NULL, it is set to 'true' if the current
+ * transaction is a member of the given multixact.
  */
 static bool
 DoesMultiXactIdConflict(MultiXactId multi, uint16 infomask,
-						LockTupleMode lockmode)
+						LockTupleMode lockmode, bool *current_is_member)
 {
 	int			nmembers;
 	MultiXactMember *members;
@@ -7086,15 +7118,24 @@ DoesMultiXactIdConflict(MultiXactId multi, uint16 infomask,
 			TransactionId memxid;
 			LOCKMODE	memlockmode;
 
-			memlockmode = LOCKMODE_from_mxstatus(members[i].status);
+			if (result && (current_is_member == NULL || *current_is_member))
+				break;
 
-			/* ignore members that don't conflict with the lock we want */
-			if (!DoLockModesConflict(memlockmode, wanted))
-				continue;
+			memlockmode = LOCKMODE_from_mxstatus(members[i].status);
 
-			/* ignore members from current xact */
+			/* ignore members from current xact (but track their presence) */
 			memxid = members[i].xid;
 			if (TransactionIdIsCurrentTransactionId(memxid))
+			{
+				if (current_is_member != NULL)
+					*current_is_member = true;
+				continue;
+			}
+			else if (result)
+				continue;
+
+			/* ignore members that don't conflict with the lock we want */
+			if (!DoLockModesConflict(memlockmode, wanted))
 				continue;
 
 			if (ISUPDATE_from_mxstatus(members[i].status))
@@ -7113,10 +7154,11 @@ DoesMultiXactIdConflict(MultiXactId multi, uint16 infomask,
 			/*
 			 * Whatever remains are either live lockers that conflict with our
 			 * wanted lock, and updaters that are not aborted.  Those conflict
-			 * with what we want, so return true.
+			 * with what we want.  Set up to return true, but keep going to
+			 * look for the current transaction among the multixact members,
+			 * if needed.
 			 */
 			result = true;
-			break;
 		}
 		pfree(members);
 	}
Lines changed: 150 additions & 0 deletions

@@ -0,0 +1,150 @@
+Parsed test spec with 3 sessions
+
+starting permutation: s1_share s2_for_update s3_share s3_for_update s1_rollback s3_rollback s2_rollback
+step s1_share: select id from tlu_job where id = 1 for share;
+id
+
+1
+step s2_for_update: select id from tlu_job where id = 1 for update; <waiting ...>
+step s3_share: select id from tlu_job where id = 1 for share;
+id
+
+1
+step s3_for_update: select id from tlu_job where id = 1 for update; <waiting ...>
+step s1_rollback: rollback;
+step s3_for_update: <... completed>
+id
+
+1
+step s3_rollback: rollback;
+step s2_for_update: <... completed>
+id
+
+1
+step s2_rollback: rollback;
+
+starting permutation: s1_keyshare s2_for_update s3_keyshare s1_update s3_update s1_rollback s3_rollback s2_rollback
+step s1_keyshare: select id from tlu_job where id = 1 for key share;
+id
+
+1
+step s2_for_update: select id from tlu_job where id = 1 for update; <waiting ...>
+step s3_keyshare: select id from tlu_job where id = 1 for key share;
+id
+
+1
+step s1_update: update tlu_job set name = 'b' where id = 1;
+step s3_update: update tlu_job set name = 'c' where id = 1; <waiting ...>
+step s1_rollback: rollback;
+step s3_update: <... completed>
+step s3_rollback: rollback;
+step s2_for_update: <... completed>
+id
+
+1
+step s2_rollback: rollback;
+
+starting permutation: s1_keyshare s2_for_update s3_keyshare s1_update s3_update s1_commit s3_rollback s2_rollback
+step s1_keyshare: select id from tlu_job where id = 1 for key share;
+id
+
+1
+step s2_for_update: select id from tlu_job where id = 1 for update; <waiting ...>
+step s3_keyshare: select id from tlu_job where id = 1 for key share;
+id
+
+1
+step s1_update: update tlu_job set name = 'b' where id = 1;
+step s3_update: update tlu_job set name = 'c' where id = 1; <waiting ...>
+step s1_commit: commit;
+step s3_update: <... completed>
+step s3_rollback: rollback;
+step s2_for_update: <... completed>
+id
+
+1
+step s2_rollback: rollback;
+
+starting permutation: s1_keyshare s2_for_update s3_keyshare s3_delete s1_rollback s3_rollback s2_rollback
+step s1_keyshare: select id from tlu_job where id = 1 for key share;
+id
+
+1
+step s2_for_update: select id from tlu_job where id = 1 for update; <waiting ...>
+step s3_keyshare: select id from tlu_job where id = 1 for key share;
+id
+
+1
+step s3_delete: delete from tlu_job where id = 1; <waiting ...>
+step s1_rollback: rollback;
+step s3_delete: <... completed>
+step s3_rollback: rollback;
+step s2_for_update: <... completed>
+id
+
+1
+step s2_rollback: rollback;
+
+starting permutation: s1_keyshare s2_for_update s3_keyshare s3_delete s1_rollback s3_commit s2_rollback
+step s1_keyshare: select id from tlu_job where id = 1 for key share;
+id
+
+1
+step s2_for_update: select id from tlu_job where id = 1 for update; <waiting ...>
+step s3_keyshare: select id from tlu_job where id = 1 for key share;
+id
+
+1
+step s3_delete: delete from tlu_job where id = 1; <waiting ...>
+step s1_rollback: rollback;
+step s3_delete: <... completed>
+step s3_commit: commit;
+step s2_for_update: <... completed>
+id
+
+step s2_rollback: rollback;
+
+starting permutation: s1_share s2_for_update s3_for_update s1_rollback s2_rollback s3_rollback
+step s1_share: select id from tlu_job where id = 1 for share;
+id
+
+1
+step s2_for_update: select id from tlu_job where id = 1 for update; <waiting ...>
+step s3_for_update: select id from tlu_job where id = 1 for update; <waiting ...>
+step s1_rollback: rollback;
+step s2_for_update: <... completed>
+id
+
+1
+step s2_rollback: rollback;
+step s3_for_update: <... completed>
+id
+
+1
+step s3_rollback: rollback;
+
+starting permutation: s1_share s2_update s3_update s1_rollback s2_rollback s3_rollback
+step s1_share: select id from tlu_job where id = 1 for share;
+id
+
+1
+step s2_update: update tlu_job set name = 'b' where id = 1; <waiting ...>
+step s3_update: update tlu_job set name = 'c' where id = 1; <waiting ...>
+step s1_rollback: rollback;
+step s2_update: <... completed>
+step s2_rollback: rollback;
+step s3_update: <... completed>
+step s3_rollback: rollback;
+
+starting permutation: s1_share s2_delete s3_delete s1_rollback s2_rollback s3_rollback
+step s1_share: select id from tlu_job where id = 1 for share;
+id
+
+1
+step s2_delete: delete from tlu_job where id = 1; <waiting ...>
+step s3_delete: delete from tlu_job where id = 1; <waiting ...>
+step s1_rollback: rollback;
+step s2_delete: <... completed>
+step s2_rollback: rollback;
+step s3_delete: <... completed>
+step s3_rollback: rollback;

‎src/test/isolation/isolation_schedule

Lines changed: 1 addition & 0 deletions

@@ -46,6 +46,7 @@ test: update-locked-tuple
 test: propagate-lock-delete
 test: tuplelock-conflict
 test: tuplelock-update
+test: tuplelock-upgrade-no-deadlock
 test: freeze-the-dead
 test: nowait
 test: nowait-2
