Commit e842908

Avoid useless truncation attempts during VACUUM.
VACUUM can skip heap pages altogether when there's a run of consecutive pages that are all-visible according to the visibility map. This causes it to not update its nonempty_pages count, just as if those pages were empty, which means that at the end we will think they are candidates for deletion. Thus, we may take the table's AccessExclusive lock only to find that no pages are really truncatable. This usually causes no real problems on a master server, thanks to the lock being acquired only conditionally; but on hot-standby servers, the same lock must be acquired unconditionally, which can result in unnecessary query cancellations.

To improve matters, force examination of the table's last page whenever we reach there with a nonempty_pages count that would allow a truncation attempt. If it's not empty, we'll advance nonempty_pages and thereby prevent the truncation attempt.

If we are unable to acquire cleanup lock on that page, there's no need to force it, unless we're doing an anti-wraparound vacuum. We can just check for tuples with a shared buffer lock and then give up. (When we are doing an anti-wraparound vacuum, and decide it's okay to skip the page because it contains no freezable tuples, this patch still improves matters because nonempty_pages is properly updated, which it was not before.)

Since only the last page is special-cased in this way, we might attempt a truncation that will release many fewer pages than the normal heuristic would suggest; at worst, only one page would be truncated. But that seems all right, because the situation won't repeat during the next vacuum. The real problem with the old logic is that the useless truncation attempt happens every time we vacuum, so long as the state of the last few dozen pages doesn't change.

This is a longstanding deficiency, but since the consequences aren't very severe in most scenarios, I'm not going to risk a back-patch.

Jeff Janes and Tom Lane
1 parent e5e5267 · commit e842908
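As a worked illustration of the heuristic described in the commit message, here is a minimal standalone sketch in plain C (not PostgreSQL code; the constant values and the helper name are assumptions for illustration, though constants of the same names are defined in src/backend/commands/vacuumlazy.c). With rel_pages = 100000 and nonempty_pages stuck at 80000 because the all-visible tail was skipped, possibly_freeable comes out to 20000, which clears both thresholds, so the old code would attempt truncation even though the tail pages hold tuples.

    #include <stdbool.h>
    #include <stdio.h>

    /* Assumed values for illustration; the real definitions live in vacuumlazy.c. */
    #define REL_TRUNCATE_MINIMUM	1000
    #define REL_TRUNCATE_FRACTION	16

    typedef unsigned int BlockNumber;

    /* Same shape as the test the patch moves into should_attempt_truncation(). */
    static bool
    truncation_looks_worthwhile(BlockNumber rel_pages, BlockNumber nonempty_pages)
    {
        BlockNumber possibly_freeable = rel_pages - nonempty_pages;

        return possibly_freeable > 0 &&
            (possibly_freeable >= REL_TRUNCATE_MINIMUM ||
             possibly_freeable >= rel_pages / REL_TRUNCATE_FRACTION);
    }

    int
    main(void)
    {
        /*
         * A 100000-page table whose last 20000 pages were skipped as
         * all-visible: nonempty_pages was never advanced past 80000, so the
         * heuristic fires even though those tail pages still contain tuples.
         */
        printf("attempt truncation? %d\n",
               truncation_looks_worthwhile(100000, 80000));	/* prints 1 */
        return 0;
    }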

File tree

1 file changed (+72 −25 lines)

src/backend/commands/vacuumlazy.c

Lines changed: 72 additions & 25 deletions
@@ -138,7 +138,7 @@ static BufferAccessStrategy vac_strategy;
 static void lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 			   Relation *Irel, int nindexes, bool scan_all);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
-static bool lazy_check_needs_freeze(Buffer buf);
+static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
@@ -147,6 +147,7 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int	lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
@@ -175,7 +176,6 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	LVRelStats *vacrelstats;
 	Relation   *Irel;
 	int			nindexes;
-	BlockNumber possibly_freeable;
 	PGRUsage	ru0;
 	TimestampTz starttime = 0;
 	long		secs;
@@ -263,14 +263,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 
 	/*
 	 * Optionally truncate the relation.
-	 *
-	 * Don't even think about it unless we have a shot at releasing a goodly
-	 * number of pages.  Otherwise, the time taken isn't worth it.
 	 */
-	possibly_freeable = vacrelstats->rel_pages - vacrelstats->nonempty_pages;
-	if (possibly_freeable > 0 &&
-		(possibly_freeable >= REL_TRUNCATE_MINIMUM ||
-		 possibly_freeable >= vacrelstats->rel_pages / REL_TRUNCATE_FRACTION))
+	if (should_attempt_truncation(vacrelstats))
 		lazy_truncate_heap(onerel, vacrelstats);
 
 	/* Vacuum the Free Space Map */
@@ -510,6 +504,15 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 	 * scan_all is not set, so no great harm done; the next vacuum will find
 	 * them.  If we make the reverse mistake and vacuum a page unnecessarily,
 	 * it'll just be a no-op.
+	 *
+	 * We will scan the table's last page, at least to the extent of
+	 * determining whether it has tuples or not, even if it should be skipped
+	 * according to the above rules; except when we've already determined that
+	 * it's not worth trying to truncate the table.  This avoids having
+	 * lazy_truncate_heap() take access-exclusive lock on the table to attempt
+	 * a truncation that just fails immediately because there are tuples in
+	 * the last page.  This is worth avoiding mainly because such a lock must
+	 * be replayed on any hot standby, where it can be disruptive.
 	 */
 	for (next_not_all_visible_block = 0;
 		 next_not_all_visible_block < nblocks;
@@ -540,6 +543,10 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 		bool		has_dead_tuples;
 		TransactionId visibility_cutoff_xid = InvalidTransactionId;
 
+		/* see note above about forcing scanning of last page */
+#define FORCE_CHECK_PAGE() \
+		(blkno == nblocks - 1 && should_attempt_truncation(vacrelstats))
+
 		if (blkno == next_not_all_visible_block)
 		{
 			/* Time to advance next_not_all_visible_block */
@@ -567,7 +574,7 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 		else
 		{
 			/* Current block is all-visible */
-			if (skipping_all_visible_blocks && !scan_all)
+			if (skipping_all_visible_blocks && !scan_all && !FORCE_CHECK_PAGE())
 				continue;
 			all_visible_according_to_vm = true;
 		}
@@ -631,33 +638,41 @@ lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
 		{
 			/*
 			 * If we're not scanning the whole relation to guard against XID
-			 * wraparound, it's OK to skip vacuuming a page.  The next vacuum
-			 * will clean it up.
+			 * wraparound, and we don't want to forcibly check the page, then
+			 * it's OK to skip vacuuming pages we get a lock conflict on. They
+			 * will be dealt with in some future vacuum.
 			 */
-			if (!scan_all)
+			if (!scan_all && !FORCE_CHECK_PAGE())
 			{
 				ReleaseBuffer(buf);
 				vacrelstats->pinskipped_pages++;
 				continue;
 			}
 
 			/*
-			 * If this is a wraparound checking vacuum, then we read the page
-			 * with share lock to see if any xids need to be frozen. If the
-			 * page doesn't need attention we just skip and continue. If it
-			 * does, we wait for cleanup lock.
+			 * Read the page with share lock to see if any xids on it need to
+			 * be frozen.  If not we just skip the page, after updating our
+			 * scan statistics.  If there are some, we wait for cleanup lock.
 			 *
 			 * We could defer the lock request further by remembering the page
 			 * and coming back to it later, or we could even register
 			 * ourselves for multiple buffers and then service whichever one
 			 * is received first.  For now, this seems good enough.
+			 *
+			 * If we get here with scan_all false, then we're just forcibly
+			 * checking the page, and so we don't want to insist on getting
+			 * the lock; we only need to know if the page contains tuples, so
+			 * that we can update nonempty_pages correctly.  It's convenient
+			 * to use lazy_check_needs_freeze() for both situations, though.
 			 */
 			LockBuffer(buf, BUFFER_LOCK_SHARE);
-			if (!lazy_check_needs_freeze(buf))
+			if (!lazy_check_needs_freeze(buf, &hastup) || !scan_all)
 			{
 				UnlockReleaseBuffer(buf);
 				vacrelstats->scanned_pages++;
 				vacrelstats->pinskipped_pages++;
+				if (hastup)
+					vacrelstats->nonempty_pages = blkno + 1;
 				continue;
 			}
 			LockBuffer(buf, BUFFER_LOCK_UNLOCK);
@@ -1304,22 +1319,25 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
  *						 need to be cleaned to avoid wraparound
  *
  * Returns true if the page needs to be vacuumed using cleanup lock.
+ * Also returns a flag indicating whether page contains any tuples at all.
  */
 static bool
-lazy_check_needs_freeze(Buffer buf)
+lazy_check_needs_freeze(Buffer buf, bool *hastup)
 {
-	Page		page;
+	Page		page = BufferGetPage(buf);
 	OffsetNumber offnum,
 				maxoff;
 	HeapTupleHeader tupleheader;
 
-	page = BufferGetPage(buf);
+	*hastup = false;
 
-	if (PageIsNew(page) || PageIsEmpty(page))
-	{
-		/* PageIsNew probably shouldn't happen... */
+	/* If we hit an uninitialized page, we want to force vacuuming it. */
+	if (PageIsNew(page))
+		return true;
+
+	/* Quick out for ordinary empty page. */
+	if (PageIsEmpty(page))
 		return false;
-	}
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber;
@@ -1330,6 +1348,11 @@ lazy_check_needs_freeze(Buffer buf)
 
 		itemid = PageGetItemId(page, offnum);
 
+		/* this should match hastup test in count_nondeletable_pages() */
+		if (ItemIdIsUsed(itemid))
+			*hastup = true;
+
+		/* dead and redirect items never need freezing */
 		if (!ItemIdIsNormal(itemid))
 			continue;
 
@@ -1432,6 +1455,30 @@ lazy_cleanup_index(Relation indrel,
 	pfree(stats);
 }
 
+/*
+ * should_attempt_truncation - should we attempt to truncate the heap?
+ *
+ * Don't even think about it unless we have a shot at releasing a goodly
+ * number of pages.  Otherwise, the time taken isn't worth it.
+ *
+ * This is split out so that we can test whether truncation is going to be
+ * called for before we actually do it.  If you change the logic here, be
+ * careful to depend only on fields that lazy_scan_heap updates on-the-fly.
+ */
+static bool
+should_attempt_truncation(LVRelStats *vacrelstats)
+{
+	BlockNumber possibly_freeable;
+
+	possibly_freeable = vacrelstats->rel_pages - vacrelstats->nonempty_pages;
+	if (possibly_freeable > 0 &&
+		(possibly_freeable >= REL_TRUNCATE_MINIMUM ||
+		 possibly_freeable >= vacrelstats->rel_pages / REL_TRUNCATE_FRACTION))
+		return true;
+	else
+		return false;
+}
+
 /*
  * lazy_truncate_heap - try to truncate off any empty pages at the end
  */

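To make the fix concrete, here is a condensed hypothetical sketch of the last-page special case added above (plain C, standalone; FORCE_CHECK_PAGE() and should_attempt_truncation() are the real names from the diff, while everything with a _sketch suffix and the constant values are illustrative assumptions). Once the forced inspection of the last page finds tuples, nonempty_pages is advanced past it, the heuristic stops firing, and lazy_truncate_heap() with its AccessExclusiveLock is never entered.

    #include <stdbool.h>
    #include <stdio.h>

    #define REL_TRUNCATE_MINIMUM	1000	/* assumed value */
    #define REL_TRUNCATE_FRACTION	16		/* assumed value */

    typedef unsigned int BlockNumber;

    typedef struct
    {
        BlockNumber rel_pages;		/* total pages in the relation */
        BlockNumber nonempty_pages;	/* first page past the last known-nonempty one */
    } StatsSketch;

    /* Mirrors the shape of should_attempt_truncation() in the patch. */
    static bool
    should_attempt_truncation_sketch(const StatsSketch *st)
    {
        BlockNumber possibly_freeable = st->rel_pages - st->nonempty_pages;

        return possibly_freeable > 0 &&
            (possibly_freeable >= REL_TRUNCATE_MINIMUM ||
             possibly_freeable >= st->rel_pages / REL_TRUNCATE_FRACTION);
    }

    int
    main(void)
    {
        /* Last 20000 pages were skipped as all-visible, but they hold tuples. */
        StatsSketch st = {100000, 80000};
        BlockNumber blkno = st.rel_pages - 1;	/* the scan reaches the last page */

        /* The FORCE_CHECK_PAGE() condition from the patch. */
        if (blkno == st.rel_pages - 1 && should_attempt_truncation_sketch(&st))
        {
            bool		hastup = true;	/* the forced inspection found tuples */

            if (hastup)
                st.nonempty_pages = blkno + 1;	/* disarms the truncation attempt */
        }

        /* Prints 0: lazy_truncate_heap() is no longer called pointlessly. */
        printf("attempt truncation? %d\n", should_attempt_truncation_sketch(&st));
        return 0;
    }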
0 commit comments

