Commit 5f12bc9

Avoid improbable PANIC during heap_update.
heap_update needs to clear any existing "all visible" flag on the old tuple's page (and on the new page too, if different).  Per coding rules, to do this it must acquire pin on the appropriate visibility-map page while not holding exclusive buffer lock; which creates a race condition since someone else could set the flag whenever we're not holding the buffer lock.  The code is supposed to handle that by re-checking the flag after acquiring buffer lock and retrying if it became set.  However, one code path through heap_update itself, as well as one in its subroutine RelationGetBufferForTuple, failed to do this.  The end result, in the unlikely event that a concurrent VACUUM did set the flag while we're transiently not holding lock, is a non-recurring "PANIC: wrong buffer passed to visibilitymap_clear" failure.

This has been seen a few times in the buildfarm since recent VACUUM changes that added code paths that could set the all-visible flag while holding only exclusive buffer lock.  Previously, the flag was (usually?) set only after doing LockBufferForCleanup, which would insist on buffer pin count zero, thus preventing the flag from becoming set partway through heap_update.  However, it's clear that it's heap_update not VACUUM that's at fault here.

What's less clear is whether there is any hazard from these bugs in released branches.  heap_update is certainly violating API expectations, but if there is no code path that can set all-visible without a cleanup lock then it's only a latent bug.  That's not 100% certain though, besides which we should worry about extensions or future back-patch fixes that could introduce such code paths.

I chose to back-patch to v12.  Fixing RelationGetBufferForTuple before that would require also back-patching portions of older fixes (notably 0d1fe9f), which is more code churn than seems prudent to fix a hypothetical issue.

Discussion: https://postgr.es/m/2247102.1618008027@sss.pgh.pa.us
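To make the coding rule concrete, the pattern the message describes boils down to a pin / lock / re-check / retry loop of roughly the following shape (an illustrative sketch only, not the committed patch; the variable names simply mirror those used inside heap_update):

    /*
     * Illustrative sketch of the required pattern.  The visibility-map pin
     * must be taken while the heap buffer is unlocked (visibilitymap_pin can
     * do I/O), and the all-visible flag must be re-checked once the exclusive
     * lock is held, looping if a concurrent process set it in the meantime.
     */
    for (;;)
    {
        if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
            visibilitymap_pin(relation, block, &vmbuffer);

        LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);

        if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
        {
            /* Flag became set while we were unlocked: unlock and retry. */
            LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
            continue;
        }
        break;          /* lock held, and any needed VM pin is held too */
    }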
1 parent 4749c7f, commit 5f12bc9

2 files changed, +45 -23 lines changed


src/backend/access/heap/heapam.c

Lines changed: 27 additions & 17 deletions
@@ -3476,7 +3476,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
          * overhead would be unchanged, that doesn't seem necessarily
          * worthwhile.
          */
-        if (PageIsAllVisible(BufferGetPage(buffer)) &&
+        if (PageIsAllVisible(page) &&
             visibilitymap_clear(relation, block, vmbuffer,
                                 VISIBILITYMAP_ALL_FROZEN))
             cleared_all_frozen = true;
@@ -3538,36 +3538,46 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
          * first".  To implement this, we must do RelationGetBufferForTuple
          * while not holding the lock on the old page, and we must rely on it
          * to get the locks on both pages in the correct order.
+         *
+         * Another consideration is that we need visibility map page pin(s) if
+         * we will have to clear the all-visible flag on either page.  If we
+         * call RelationGetBufferForTuple, we rely on it to acquire any such
+         * pins; but if we don't, we have to handle that here.  Hence we need
+         * a loop.
          */
-        if (newtupsize > pagefree)
-        {
-            /* Assume there's no chance to put heaptup on same page. */
-            newbuf = RelationGetBufferForTuple(relation, heaptup->t_len,
-                                               buffer, 0, NULL,
-                                               &vmbuffer_new, &vmbuffer);
-        }
-        else
+        for (;;)
         {
+            if (newtupsize > pagefree)
+            {
+                /* It doesn't fit, must use RelationGetBufferForTuple. */
+                newbuf = RelationGetBufferForTuple(relation, heaptup->t_len,
+                                                   buffer, 0, NULL,
+                                                   &vmbuffer_new, &vmbuffer);
+                /* We're all done. */
+                break;
+            }
+            /* Acquire VM page pin if needed and we don't have it. */
+            if (vmbuffer == InvalidBuffer && PageIsAllVisible(page))
+                visibilitymap_pin(relation, block, &vmbuffer);
             /* Re-acquire the lock on the old tuple's page. */
             LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
             /* Re-check using the up-to-date free space */
             pagefree = PageGetHeapFreeSpace(page);
-            if (newtupsize > pagefree)
+            if (newtupsize > pagefree ||
+                (vmbuffer == InvalidBuffer && PageIsAllVisible(page)))
             {
                 /*
-                 * Rats, it doesn't fit anymore.  We must now unlock and
-                 * relock to avoid deadlock.  Fortunately, this path should
-                 * seldom be taken.
+                 * Rats, it doesn't fit anymore, or somebody just now set the
+                 * all-visible flag.  We must now unlock and loop to avoid
+                 * deadlock.  Fortunately, this path should seldom be taken.
                  */
                 LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-                newbuf = RelationGetBufferForTuple(relation, heaptup->t_len,
-                                                   buffer, 0, NULL,
-                                                   &vmbuffer_new, &vmbuffer);
             }
             else
             {
-                /* OK, it fits here, so we're done. */
+                /* We're all done. */
                 newbuf = buffer;
+                break;
             }
         }
     }

src/backend/access/heap/hio.c

Lines changed: 18 additions & 6 deletions
@@ -284,9 +284,13 @@ RelationAddExtraBlocks(Relation relation, BulkInsertState bistate)
  *  happen if space is freed in that page after heap_update finds there's not
  *  enough there).  In that case, the page will be pinned and locked only once.
  *
- *  For the vmbuffer and vmbuffer_other arguments, we avoid deadlock by
- *  locking them only after locking the corresponding heap page, and taking
- *  no further lwlocks while they are locked.
+ *  We also handle the possibility that the all-visible flag will need to be
+ *  cleared on one or both pages.  If so, pin on the associated visibility map
+ *  page must be acquired before acquiring buffer lock(s), to avoid possibly
+ *  doing I/O while holding buffer locks.  The pins are passed back to the
+ *  caller using the input-output arguments vmbuffer and vmbuffer_other.
+ *  Note that in some cases the caller might have already acquired such pins,
+ *  which is indicated by these arguments not being InvalidBuffer on entry.
  *
  *  We normally use FSM to help us find free space.  However,
  *  if HEAP_INSERT_SKIP_FSM is specified, we just append a new empty page to
@@ -631,6 +635,8 @@ RelationGetBufferForTuple(Relation relation, Size len,
         if (otherBuffer != InvalidBuffer)
         {
             Assert(otherBuffer != buffer);
+            targetBlock = BufferGetBlockNumber(buffer);
+            Assert(targetBlock > otherBlock);
 
             if (unlikely(!ConditionalLockBuffer(otherBuffer)))
             {
@@ -639,10 +645,16 @@ RelationGetBufferForTuple(Relation relation, Size len,
                 LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 
                 /*
-                 * Because the buffer was unlocked for a while, it's possible,
-                 * although unlikely, that the page was filled. If so, just retry
-                 * from start.
+                 * Because the buffers were unlocked for a while, it's possible,
+                 * although unlikely, that an all-visible flag became set or that
+                 * somebody used up the available space in the new page.  We can
+                 * use GetVisibilityMapPins to deal with the first case.  In the
+                 * second case, just retry from start.
                  */
+                GetVisibilityMapPins(relation, otherBuffer, buffer,
+                                     otherBlock, targetBlock, vmbuffer_other,
+                                     vmbuffer);
+
                 if (len > PageGetHeapFreeSpace(page))
                 {
                     LockBuffer(otherBuffer, BUFFER_LOCK_UNLOCK);

