Commit c2c4bc6

When updating reltuples after ANALYZE, just extrapolate from our sample.

The existing logic for updating pg_class.reltuples trusted the sampling
results only for the pages ANALYZE actually visited, preferring to believe
the previous tuple density estimate for all the unvisited pages.  While
there's some rationale for doing that for VACUUM (first that VACUUM is
likely to visit a very nonrandom subset of pages, and second that we know
for sure that the unvisited pages did not change), there's no such
rationale for ANALYZE: by assumption, it's looked at an unbiased random
sample of the table's pages.  Furthermore, in a very large table ANALYZE
will have examined only a tiny fraction of the table's pages, meaning it
cannot slew the overall density estimate very far at all.  In a table that
is physically growing, this causes reltuples to increase nearly
proportionally to the change in relpages, regardless of what is actually
happening in the table.  This has been observed to cause reltuples to
become so much larger than reality that it effectively shuts off
autovacuum, whose threshold for doing anything is a fraction of reltuples.
(Getting to the point where that would happen seems to require some
additional, not well understood, conditions.  But it's undeniable that if
reltuples is seriously off in a large table, ANALYZE alone will not fix it
in any reasonable number of iterations, especially not if the table is
continuing to grow.)

Hence, restrict the use of vac_estimate_reltuples() to VACUUM alone,
and in ANALYZE, just extrapolate from the sample pages on the assumption
that they provide an accurate model of the whole table.  If, by very bad
luck, they don't, at least another ANALYZE will fix it; in the old logic
a single bad estimate could cause problems indefinitely.

In HEAD, let's remove vac_estimate_reltuples' is_analyze argument
altogether; it was never used for anything and now it's totally pointless.
But keep it in the back branches, in case any third-party code is calling
this function.

Per bug #15005.  Back-patch to all supported branches.

David Gould, reviewed by Alexander Kuzmenkov, cosmetic changes by me

Discussion: https://postgr.es/m/20180117164916.3fdcf2e9@engels

1 parent 4b0e717  commit c2c4bc6

File tree

2 files changed: +24 -39 lines changed

src/backend/commands/analyze.c
Lines changed: 11 additions & 8 deletions

@@ -1194,19 +1194,22 @@ acquire_sample_rows(Relation onerel, int elevel,
 	qsort((void *) rows, numrows, sizeof(HeapTuple), compare_rows);
 
 	/*
-	 * Estimate total numbers of rows in relation.  For live rows, use
-	 * vac_estimate_reltuples; for dead rows, we have no source of old
-	 * information, so we have to assume the density is the same in unseen
-	 * pages as in the pages we scanned.
+	 * Estimate total numbers of live and dead rows in relation, extrapolating
+	 * on the assumption that the average tuple density in pages we didn't
+	 * scan is the same as in the pages we did scan.  Since what we scanned is
+	 * a random sample of the pages in the relation, this should be a good
+	 * assumption.
 	 */
-	*totalrows = vac_estimate_reltuples(onerel, true,
-										totalblocks,
-										bs.m,
-										liverows);
 	if (bs.m > 0)
+	{
+		*totalrows = floor((liverows / bs.m) * totalblocks + 0.5);
 		*totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+	}
 	else
+	{
+		*totalrows = 0.0;
 		*totaldeadrows = 0.0;
+	}
 
 	/*
 	 * Emit some interesting relation info

src/backend/commands/vacuum.c
Lines changed: 13 additions & 31 deletions

@@ -643,13 +643,13 @@ vacuum_set_xid_limits(Relation rel,
  * vac_estimate_reltuples() -- estimate the new value for pg_class.reltuples
  *
  *		If we scanned the whole relation then we should just use the count of
- *		live tuples seen; but if we did not, we should not trust the count
- *		unreservedly, especially not in VACUUM, which may have scanned a quite
- *		nonrandom subset of the table.  When we have only partial information,
- *		we take the old value of pg_class.reltuples as a measurement of the
+ *		live tuples seen; but if we did not, we should not blindly extrapolate
+ *		from that number, since VACUUM may have scanned a quite nonrandom
+ *		subset of the table.  When we have only partial information, we take
+ *		the old value of pg_class.reltuples as a measurement of the
  *		tuple density in the unscanned pages.
  *
- *		This routine is shared by VACUUM and ANALYZE.
+ *		The is_analyze argument is historical.
  */
 double
 vac_estimate_reltuples(Relation relation, bool is_analyze,
@@ -660,9 +660,8 @@ vac_estimate_reltuples(Relation relation, bool is_analyze,
 	BlockNumber	old_rel_pages = relation->rd_rel->relpages;
 	double		old_rel_tuples = relation->rd_rel->reltuples;
 	double		old_density;
-	double		new_density;
-	double		multiplier;
-	double		updated_density;
+	double		unscanned_pages;
+	double		total_tuples;
 
 	/* If we did scan the whole table, just use the count as-is */
 	if (scanned_pages >= total_pages)
@@ -686,31 +685,14 @@ vac_estimate_reltuples(Relation relation, bool is_analyze,
 
 	/*
 	 * Okay, we've covered the corner cases.  The normal calculation is to
-	 * convert the old measurement to a density (tuples per page), then update
-	 * the density using an exponential-moving-average approach, and finally
-	 * compute reltuples as updated_density * total_pages.
-	 *
-	 * For ANALYZE, the moving average multiplier is just the fraction of the
-	 * table's pages we scanned.  This is equivalent to assuming that the
-	 * tuple density in the unscanned pages didn't change.  Of course, it
-	 * probably did, if the new density measurement is different.  But over
-	 * repeated cycles, the value of reltuples will converge towards the
-	 * correct value, if repeated measurements show the same new density.
-	 *
-	 * For VACUUM, the situation is a bit different: we have looked at a
-	 * nonrandom sample of pages, but we know for certain that the pages we
-	 * didn't look at are precisely the ones that haven't changed lately.
-	 * Thus, there is a reasonable argument for doing exactly the same thing
-	 * as for the ANALYZE case, that is use the old density measurement as the
-	 * value for the unscanned pages.
-	 *
-	 * This logic could probably use further refinement.
+	 * convert the old measurement to a density (tuples per page), then
+	 * estimate the number of tuples in the unscanned pages using that figure,
+	 * and finally add on the number of tuples in the scanned pages.
 	 */
 	old_density = old_rel_tuples / old_rel_pages;
-	new_density = scanned_tuples / scanned_pages;
-	multiplier = (double) scanned_pages / (double) total_pages;
-	updated_density = old_density + (new_density - old_density) * multiplier;
-	return floor(updated_density * total_pages + 0.5);
+	unscanned_pages = (double) total_pages - (double) scanned_pages;
+	total_tuples = old_density * unscanned_pages + scanned_tuples;
+	return floor(total_tuples + 0.5);
 }