Commit 1e0fb6a

Use SnapshotDirty rather than an active snapshot to probe index endpoints.
If there are lots of uncommitted tuples at the end of the index range, get_actual_variable_range() ends up fetching each one and doing an MVCC visibility check on it, until it finally hits a visible tuple. This is bad enough in isolation, considering that we don't need an exact answer, only an approximate one. But because the tuples are not yet committed, each visibility check does a TransactionIdIsInProgress() test, which involves scanning the ProcArray. When multiple sessions do this concurrently, the ensuing contention results in horrid performance loss. 20X overall throughput loss on not-too-complicated queries is easy to demonstrate in the back branches (though someone's made it noticeably less bad in HEAD).

We can dodge the problem fairly effectively by using SnapshotDirty rather than a normal MVCC snapshot. This will cause the index probe to take uncommitted tuples as good, so that we incur only one tuple fetch and test even if there are many such tuples. The extent to which this degrades the estimate is debatable: it's possible the result is actually a more accurate prediction than before, if the endmost tuple has become committed by the time we actually execute the query being planned. In any case, it's not very likely that it makes the estimate a lot worse.

SnapshotDirty will still reject tuples that are known committed dead, so we won't give bogus answers if an invalid outlier has been deleted but not yet vacuumed from the index. (Because btrees know how to mark such tuples dead in the index, we shouldn't have a big performance problem in the case that there are many of them at the end of the range.) This consideration motivates not using SnapshotAny, which was also considered as a fix.

Note: the back branches were using SnapshotNow instead of an MVCC snapshot, but the problem and solution are the same.

Per performance complaints from Bartlomiej Romanski, Josh Berkus, and others. Back-patch to 9.0, where the issue was introduced (by commit 40608e7).
1 parent 19d66ab, commit 1e0fb6a

File tree

1 file changed: +19 -2 lines changed

src/backend/utils/adt/selfuncs.c

Lines changed: 19 additions & 2 deletions
@@ -4652,6 +4652,7 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
 	HeapTuple	tup;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	SnapshotData SnapshotDirty;
 
 	estate = CreateExecutorState();
 	econtext = GetPerTupleExprContext(estate);
@@ -4674,6 +4675,7 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
 	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
 	econtext->ecxt_scantuple = slot;
 	get_typlenbyval(vardata->atttype, &typLen, &typByVal);
+	InitDirtySnapshot(SnapshotDirty);
 
 	/* set up an IS NOT NULL scan key so that we ignore nulls */
 	ScanKeyEntryInitialize(&scankeys[0],
@@ -4689,7 +4691,22 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
 	/* If min is requested ... */
 	if (min)
 	{
-		index_scan = index_beginscan(heapRel, indexRel, SnapshotNow,
+		/*
+		 * In principle, we should scan the index with our current
+		 * active snapshot, which is the best approximation we've got
+		 * to what the query will see when executed.  But that won't
+		 * be exact if a new snap is taken before running the query,
+		 * and it can be very expensive if a lot of uncommitted rows
+		 * exist at the end of the index (because we'll laboriously
+		 * fetch each one and reject it).  What seems like a good
+		 * compromise is to use SnapshotDirty.  That will accept
+		 * uncommitted rows, and thus avoid fetching multiple heap
+		 * tuples in this scenario.  On the other hand, it will reject
+		 * known-dead rows, and thus not give a bogus answer when the
+		 * extreme value has been deleted; that case motivates not
+		 * using SnapshotAny here.
+		 */
+		index_scan = index_beginscan(heapRel, indexRel, &SnapshotDirty,
 									 1, scankeys);
 
 	/* Fetch first tuple in sortop's direction */
@@ -4720,7 +4737,7 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
 	/* If max is requested, and we didn't find the index is empty */
 	if (max && have_data)
 	{
-		index_scan = index_beginscan(heapRel, indexRel, SnapshotNow,
+		index_scan = index_beginscan(heapRel, indexRel, &SnapshotDirty,
 									 1, scankeys);
 
 	/* Fetch first tuple in reverse direction */

0 commit comments