Commit 5d742b9

Fix multiple bugs in index page locking during hot-standby WAL replay.
In ordinary operation, VACUUM must be careful to take a cleanup lock on each leaf page of a btree index; this ensures that no indexscans could still be "in flight" to heap tuples due to be deleted. (Because of possible index-tuple motion due to concurrent page splits, it's not enough to lock only the pages we're deleting index tuples from.) In Hot Standby, the WAL replay process must likewise lock every leaf page. There were several bugs in the code for that:

* The replay scan might come across unused, all-zero pages in the index. While btree_xlog_vacuum itself did the right thing (ie, nothing) with such pages, xlogutils.c supposed that such pages must be corrupt and would throw an error. This accounts for various reports of replication failures with "PANIC: WAL contains references to invalid pages". To fix, add a ReadBufferMode value that instructs XLogReadBufferExtended not to complain when we're doing this.

* btree_xlog_vacuum performed the extra locking if standbyState == STANDBY_SNAPSHOT_READY, but that's not the correct test: we won't open up for hot standby queries until the database has reached consistency, and we don't want to do the extra locking till then either, for fear of reading corrupted pages (which bufmgr.c would complain about). Fix by exporting a new function from xlog.c that will report whether we're actually in hot standby replay mode.

* To ensure full coverage of the index in the replay scan, btvacuumscan would emit a dummy WAL record for the last page of the index, if no vacuuming work had been done on that page. However, if the last page of the index is all-zero, that would result in corruption of said page, since the functions called on it weren't prepared to handle that case. There's no need to lock any such pages, so change the logic to target the last normal leaf page instead.

The first two of these bugs were diagnosed by Andres Freund, the other one by me. Fixes based on ideas from Heikki Linnakangas and myself.

This has been wrong since Hot Standby was introduced, so back-patch to 9.0.
1 parent f261ad2 commit 5d742b9
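In case it helps to see the control flow in one place, here is a condensed sketch of the replay-side loop as it stands after this commit. It simply restates the btree_xlog_vacuum() hunk shown below; the UnlockReleaseBuffer() call after the cleanup lock is assumed from the surrounding, unchanged code rather than shown in the diff hunks.

    /* Sketch only: mirrors the btree_xlog_vacuum() logic added in this commit */
    if (HotStandbyActiveInReplay())
    {
        BlockNumber blkno;

        /*
         * Take (and immediately release) a cleanup lock on every block the
         * original VACUUM scanned between its previous vacuumed page and the
         * page this WAL record is about.
         */
        for (blkno = xlrec->lastBlockVacuumed + 1; blkno < xlrec->block; blkno++)
        {
            /* RBM_NORMAL_NO_LOG: missing or all-zero pages are not errors here */
            Buffer buffer = XLogReadBufferExtended(xlrec->node, MAIN_FORKNUM,
                                                   blkno, RBM_NORMAL_NO_LOG);

            if (BufferIsValid(buffer))
            {
                LockBufferForCleanup(buffer);
                UnlockReleaseBuffer(buffer);    /* assumed from surrounding code */
            }
        }
    }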

File tree

7 files changed: +100, -44 lines changed


src/backend/access/nbtree/nbtree.c

Lines changed: 43 additions & 29 deletions
@@ -57,8 +57,8 @@ typedef struct
 	IndexBulkDeleteCallback callback;
 	void	   *callback_state;
 	BTCycleId	cycleid;
-	BlockNumber lastBlockVacuumed;	/* last blkno reached by Vacuum scan */
-	BlockNumber lastUsedPage;		/* blkno of last non-recyclable page */
+	BlockNumber lastBlockVacuumed;	/* highest blkno actually vacuumed */
+	BlockNumber lastBlockLocked;	/* highest blkno we've cleanup-locked */
 	BlockNumber totFreePages;		/* true total # of free pages */
 	MemoryContext pagedelcontext;
 } BTVacState;
@@ -631,7 +631,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 	vstate.callback_state = callback_state;
 	vstate.cycleid = cycleid;
 	vstate.lastBlockVacuumed = BTREE_METAPAGE;	/* Initialise at first block */
-	vstate.lastUsedPage = BTREE_METAPAGE;
+	vstate.lastBlockLocked = BTREE_METAPAGE;
 	vstate.totFreePages = 0;

 	/* Create a temporary memory context to run _bt_pagedel in */
@@ -687,27 +687,30 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 	}

 	/*
-	 * In HotStandby we need to scan right up to the end of the index for
-	 * correct locking, so we may need to write a WAL record for the final
-	 * block in the index if it was not vacuumed. It's possible that VACUUMing
-	 * has actually removed zeroed pages at the end of the index so we need to
-	 * take care to issue the record for last actual block and not for the
-	 * last block that was scanned. Ignore empty indexes.
+	 * If the WAL is replayed in hot standby, the replay process needs to get
+	 * cleanup locks on all index leaf pages, just as we've been doing here.
+	 * However, we won't issue any WAL records about pages that have no items
+	 * to be deleted.  For pages between pages we've vacuumed, the replay code
+	 * will take locks under the direction of the lastBlockVacuumed fields in
+	 * the XLOG_BTREE_VACUUM WAL records.  To cover pages after the last one
+	 * we vacuum, we need to issue a dummy XLOG_BTREE_VACUUM WAL record
+	 * against the last leaf page in the index, if that one wasn't vacuumed.
 	 */
 	if (XLogStandbyInfoActive() &&
-		num_pages > 1 && vstate.lastBlockVacuumed < (num_pages - 1))
+		vstate.lastBlockVacuumed < vstate.lastBlockLocked)
 	{
 		Buffer		buf;

 		/*
-		 * We can't use _bt_getbuf() here because it always applies
-		 * _bt_checkpage(), which will barf on an all-zero page. We want to
-		 * recycle all-zero pages, not fail.  Also, we want to use a
-		 * nondefault buffer access strategy.
+		 * The page should be valid, but we can't use _bt_getbuf() because we
+		 * want to use a nondefault buffer access strategy.  Since we aren't
+		 * going to delete any items, getting cleanup lock again is probably
+		 * overkill, but for consistency do that anyway.
 		 */
-		buf = ReadBufferExtended(rel, MAIN_FORKNUM, num_pages - 1, RBM_NORMAL,
-								 info->strategy);
+		buf = ReadBufferExtended(rel, MAIN_FORKNUM, vstate.lastBlockLocked,
+								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
+		_bt_checkpage(rel, buf);
 		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
 		_bt_relbuf(rel, buf);
 	}
@@ -782,10 +785,6 @@ btvacuumpage(BTVacState *vstate, BlockNumber blkno, BlockNumber orig_blkno)
 		}
 	}

-	/* If the page is in use, update lastUsedPage */
-	if (!_bt_page_recyclable(page) && vstate->lastUsedPage < blkno)
-		vstate->lastUsedPage = blkno;
-
 	/* Page is valid, see what to do with it */
 	if (_bt_page_recyclable(page))
 	{
@@ -821,6 +820,13 @@ btvacuumpage(BTVacState *vstate, BlockNumber blkno, BlockNumber orig_blkno)
 		LockBuffer(buf, BUFFER_LOCK_UNLOCK);
 		LockBufferForCleanup(buf);

+		/*
+		 * Remember highest leaf page number we've taken cleanup lock on; see
+		 * notes in btvacuumscan
+		 */
+		if (blkno > vstate->lastBlockLocked)
+			vstate->lastBlockLocked = blkno;
+
 		/*
 		 * Check whether we need to recurse back to earlier pages.  What we
 		 * are concerned about is a page split that happened since we started
@@ -887,18 +893,26 @@ btvacuumpage(BTVacState *vstate, BlockNumber blkno, BlockNumber orig_blkno)
 		 */
 		if (ndeletable > 0)
 		{
-			BlockNumber lastBlockVacuumed = BufferGetBlockNumber(buf);
-
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable, vstate->lastBlockVacuumed);
+			/*
+			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes an
+			 * instruction to the replay code to get cleanup lock on all pages
+			 * between the previous lastBlockVacuumed and this page.  This
+			 * ensures that WAL replay locks all leaf pages at some point.
+			 *
+			 * Since we can visit leaf pages out-of-order when recursing,
+			 * replay might end up locking such pages an extra time, but it
+			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
+			 * that.
+			 */
+			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
+								vstate->lastBlockVacuumed);

 			/*
-			 * Keep track of the block number of the lastBlockVacuumed, so we
-			 * can scan those blocks as well during WAL replay. This then
-			 * provides concurrency protection and allows btrees to be used
-			 * while in recovery.
+			 * Remember highest leaf page number we've issued a
+			 * XLOG_BTREE_VACUUM WAL record for.
 			 */
-			if (lastBlockVacuumed > vstate->lastBlockVacuumed)
-				vstate->lastBlockVacuumed = lastBlockVacuumed;
+			if (blkno > vstate->lastBlockVacuumed)
+				vstate->lastBlockVacuumed = blkno;

 			stats->tuples_removed += ndeletable;
 			/* must recompute maxoff */

src/backend/access/nbtree/nbtxlog.c

Lines changed: 30 additions & 11 deletions
@@ -489,28 +489,47 @@ btree_xlog_vacuum(XLogRecPtr lsn, XLogRecord *record)
 	BTPageOpaque opaque;

 	/*
-	 * If queries might be active then we need to ensure every block is
+	 * If queries might be active then we need to ensure every leaf page is
 	 * unpinned between the lastBlockVacuumed and the current block, if there
-	 * are any. This ensures that every block in the index is touched during
-	 * VACUUM as required to ensure scans work correctly.
+	 * are any.  This prevents replay of the VACUUM from reaching the stage of
+	 * removing heap tuples while there could still be indexscans "in flight"
+	 * to those particular tuples (see nbtree/README).
+	 *
+	 * It might be worth checking if there are actually any backends running;
+	 * if not, we could just skip this.
+	 *
+	 * Since VACUUM can visit leaf pages out-of-order, it might issue records
+	 * with lastBlockVacuumed >= block; that's not an error, it just means
+	 * nothing to do now.
+	 *
+	 * Note: since we touch all pages in the range, we will lock non-leaf
+	 * pages, and also any empty (all-zero) pages that may be in the index. It
+	 * doesn't seem worth the complexity to avoid that.  But it's important
+	 * that HotStandbyActiveInReplay() will not return true if the database
+	 * isn't yet consistent; so we need not fear reading still-corrupt blocks
+	 * here during crash recovery.
 	 */
-	if (standbyState == STANDBY_SNAPSHOT_READY &&
-		(xlrec->lastBlockVacuumed + 1) != xlrec->block)
+	if (HotStandbyActiveInReplay())
 	{
-		BlockNumber blkno = xlrec->lastBlockVacuumed + 1;
+		BlockNumber blkno;

-		for (; blkno < xlrec->block; blkno++)
+		for (blkno = xlrec->lastBlockVacuumed + 1; blkno < xlrec->block; blkno++)
 		{
 			/*
+			 * We use RBM_NORMAL_NO_LOG mode because it's not an error
+			 * condition to see all-zero pages.  The original btvacuumpage
+			 * scan would have skipped over all-zero pages, noting them in FSM
+			 * but not bothering to initialize them just yet; so we mustn't
+			 * throw an error here.  (We could skip acquiring the cleanup lock
+			 * if PageIsNew, but it's probably not worth the cycles to test.)
+			 *
 			 * XXX we don't actually need to read the block, we just need to
 			 * confirm it is unpinned. If we had a special call into the
 			 * buffer manager we could optimise this so that if the block is
 			 * not in shared_buffers we confirm it as unpinned.
-			 *
-			 * Another simple optimization would be to check if there's any
-			 * backends running; if not, we could just skip this.
 			 */
-			buffer = XLogReadBufferExtended(xlrec->node, MAIN_FORKNUM, blkno, RBM_NORMAL);
+			buffer = XLogReadBufferExtended(xlrec->node, MAIN_FORKNUM, blkno,
+											RBM_NORMAL_NO_LOG);
 			if (BufferIsValid(buffer))
 			{
 				LockBufferForCleanup(buffer);

src/backend/access/transam/xlog.c

Lines changed: 13 additions & 2 deletions
@@ -159,6 +159,9 @@ static XLogRecPtr LastRec;
  */
 static bool LocalRecoveryInProgress = true;

+/* True if we're allowing hot standby backends, but only in startup process */
+static bool backendsAllowed = false;
+
 /*
  * Local state for XLogInsertAllowed():
  *		1: unconditionally allowed to insert XLOG
@@ -6654,8 +6657,6 @@ StartupXLOG(void)
 static void
 CheckRecoveryConsistency(void)
 {
-	static bool backendsAllowed = false;
-
 	/*
 	 * During crash recovery, we don't reach a consistent state until we've
 	 * replayed all the WAL.
@@ -6734,6 +6735,16 @@ RecoveryInProgress(void)
 	}
 }

+/*
+ * Like HotStandbyActive(), but to be used only in WAL replay code,
+ * where we don't need to ask any other process what the state is.
+ */
+bool
+HotStandbyActiveInReplay(void)
+{
+	return backendsAllowed;
+}
+
 /*
  * Is this process allowed to insert new WAL records?
  *

src/backend/access/transam/xlogutils.c

Lines changed: 6 additions & 0 deletions
@@ -264,6 +264,10 @@ XLogReadBuffer(RelFileNode rnode, BlockNumber blkno, bool init)
  *
  * In RBM_ZERO and RBM_ZERO_ON_ERROR modes, if the page doesn't exist, the
  * relation is extended with all-zeroes pages up to the given block number.
+ *
+ * In RBM_NORMAL_NO_LOG mode, we return InvalidBuffer if the page doesn't
+ * exist, and we don't check for all-zeroes.  Thus, no log entry is made
+ * to imply that the page should be dropped or truncated later.
  */
 Buffer
 XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
@@ -304,6 +308,8 @@ XLogReadBufferExtended(RelFileNode rnode, ForkNumber forknum,
 			log_invalid_page(rnode, forknum, blkno, false);
 			return InvalidBuffer;
 		}
+		if (mode == RBM_NORMAL_NO_LOG)
+			return InvalidBuffer;
 		/* OK to extend the file */
 		/* we do this in recovery only - no rel-extension lock needed */
 		Assert(InRecovery);

src/backend/storage/buffer/bufmgr.c

Lines changed: 4 additions & 1 deletion
@@ -197,7 +197,8 @@ ReadBuffer(Relation reln, BlockNumber blockNum)
  * Assume when this function is called, that reln has been opened already.
  *
  * In RBM_NORMAL mode, the page is read from disk, and the page header is
- * validated. An error is thrown if the page header is not valid.
+ * validated.  An error is thrown if the page header is not valid.  (But
+ * note that an all-zero page is considered "valid"; see PageIsVerified().)
  *
  * RBM_ZERO_ON_ERROR is like the normal mode, but if the page header is not
  * valid, the page is zeroed instead of throwing an error. This is intended
@@ -211,6 +212,8 @@ ReadBuffer(Relation reln, BlockNumber blockNum)
  * current physical EOF; that is likely to cause problems in md.c when
  * the page is modified and written out. P_NEW is OK, though.
  *
+ * RBM_NORMAL_NO_LOG mode is treated the same as RBM_NORMAL here.
+ *
  * If strategy is not NULL, a nondefault buffer access strategy is used.
  * See buffer/README for details.
  */

src/include/access/xlog.h

Lines changed: 1 addition & 0 deletions
@@ -286,6 +286,7 @@ extern void xlog_desc(StringInfo buf, uint8 xl_info, char *rec);
 extern void issue_xlog_fsync(int fd, uint32 log, uint32 seg);

 extern bool RecoveryInProgress(void);
+extern bool HotStandbyActiveInReplay(void);
 extern bool XLogInsertAllowed(void);
 extern void GetXLogReceiptTime(TimestampTz *rtime, bool *fromStream);

src/include/storage/bufmgr.h

Lines changed: 3 additions & 1 deletion
@@ -38,7 +38,9 @@ typedef enum
 	RBM_NORMAL,				/* Normal read */
 	RBM_ZERO,				/* Don't read from disk, caller will
 							 * initialize */
-	RBM_ZERO_ON_ERROR		/* Read, but return an all-zeros page on error */
+	RBM_ZERO_ON_ERROR,		/* Read, but return an all-zeros page on error */
+	RBM_NORMAL_NO_LOG		/* Don't log page as invalid during WAL
+							 * replay; otherwise same as RBM_NORMAL */
 } ReadBufferMode;

 /* in globals.c ... this duplicates miscadmin.h */

