Commit 8d578b9

Fix race in parallel hash join batch cleanup, take II.

With unlucky timing and parallel_leader_participation=off (not the default), PHJ could attempt to access per-batch shared state just as it was being freed. There was code intended to prevent that by checking for a cleared pointer, but it was racy. Fix, by introducing an extra barrier phase. The new phase PHJ_BUILD_RUNNING means that it's safe to access the per-batch state to find a batch to help with, and PHJ_BUILD_DONE means that it is too late. The last to detach will free the array of per-batch state as before, but now it will also atomically advance the phase, so that late attachers can avoid the hazard. This mirrors the way per-batch hash tables are freed (see phases PHJ_BATCH_PROBING and PHJ_BATCH_DONE).

An earlier attempt to fix this (commit 3b8981b, later reverted) missed one special case. When the inner side is empty (the "empty inner optimization"), the build barrier would only make it to PHJ_BUILD_HASHING_INNER phase before workers attempted to detach from the hashtable. In that case, fast-forward the build barrier to PHJ_BUILD_RUNNING before proceeding, so that our later assertions hold and we can still negotiate who is cleaning up.

Revealed by build farm failures, where BarrierAttach() failed a sanity check assertion, because the memory had been clobbered by dsa_free(). In non-assert builds, the result could be a segmentation fault.

Back-patch to all supported releases.

Author: Thomas Munro <thomas.munro@gmail.com>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: David Geier <geidav.pg@gmail.com>
Tested-by: David Geier <geidav.pg@gmail.com>
Discussion: https://postgr.es/m/20200929061142.GA29096%40paquier.xyz

1 parent ef719e7 · commit 8d578b9

File tree

3 files changed: +74 −33 lines changed

src/backend/executor/nodeHash.c

Lines changed: 34 additions & 16 deletions

```diff
@@ -333,14 +333,21 @@ MultiExecParallelHash(HashState *node)
 	hashtable->nbuckets = pstate->nbuckets;
 	hashtable->log2_nbuckets = my_log2(hashtable->nbuckets);
 	hashtable->totalTuples = pstate->total_tuples;
-	ExecParallelHashEnsureBatchAccessors(hashtable);
+
+	/*
+	 * Unless we're completely done and the batch state has been freed, make
+	 * sure we have accessors.
+	 */
+	if (BarrierPhase(build_barrier) < PHJ_BUILD_DONE)
+		ExecParallelHashEnsureBatchAccessors(hashtable);
 
 	/*
 	 * The next synchronization point is in ExecHashJoin's HJ_BUILD_HASHTABLE
-	 * case, which will bring the build phase to PHJ_BUILD_DONE (if it isn't
-	 * there already).
+	 * case, which will bring the build phase to PHJ_BUILD_RUNNING (if it
+	 * isn't there already).
 	 */
 	Assert(BarrierPhase(build_barrier) == PHJ_BUILD_HASHING_OUTER ||
+		   BarrierPhase(build_barrier) == PHJ_BUILD_RUNNING ||
 		   BarrierPhase(build_barrier) == PHJ_BUILD_DONE);
 }
 
@@ -620,7 +627,7 @@ ExecHashTableCreate(HashState *state, List *hashOperators, List *hashCollations,
 	/*
 	 * The next Parallel Hash synchronization point is in
 	 * MultiExecParallelHash(), which will progress it all the way to
-	 * PHJ_BUILD_DONE.  The caller must not return control from this
+	 * PHJ_BUILD_RUNNING.  The caller must not return control from this
 	 * executor node between now and then.
 	 */
 }
@@ -3054,14 +3061,11 @@ ExecParallelHashEnsureBatchAccessors(HashJoinTable hashtable)
 	}
 
 	/*
-	 * It's possible for a backend to start up very late so that the whole
-	 * join is finished and the shm state for tracking batches has already
-	 * been freed by ExecHashTableDetach().  In that case we'll just leave
-	 * hashtable->batches as NULL so that ExecParallelHashJoinNewBatch() gives
-	 * up early.
+	 * We should never see a state where the batch-tracking array is freed,
+	 * because we should have given up sooner if we join when the build
+	 * barrier has reached the PHJ_BUILD_DONE phase.
 	 */
-	if (!DsaPointerIsValid(pstate->batches))
-		return;
+	Assert(DsaPointerIsValid(pstate->batches));
 
 	/* Use hash join memory context. */
 	oldcxt = MemoryContextSwitchTo(hashtable->hashCxt);
@@ -3181,9 +3185,18 @@ ExecHashTableDetachBatch(HashJoinTable hashtable)
 void
 ExecHashTableDetach(HashJoinTable hashtable)
 {
-	if (hashtable->parallel_state)
+	ParallelHashJoinState *pstate = hashtable->parallel_state;
+
+	/*
+	 * If we're involved in a parallel query, we must either have gotten all
+	 * the way to PHJ_BUILD_RUNNING, or joined too late and be in
+	 * PHJ_BUILD_DONE.
+	 */
+	Assert(!pstate ||
+		   BarrierPhase(&pstate->build_barrier) >= PHJ_BUILD_RUNNING);
+
+	if (pstate && BarrierPhase(&pstate->build_barrier) == PHJ_BUILD_RUNNING)
 	{
-		ParallelHashJoinState *pstate = hashtable->parallel_state;
 		int			i;
 
 		/* Make sure any temporary files are closed. */
@@ -3199,17 +3212,22 @@ ExecHashTableDetach(HashJoinTable hashtable)
 		}
 
 		/* If we're last to detach, clean up shared memory. */
-		if (BarrierDetach(&pstate->build_barrier))
+		if (BarrierArriveAndDetach(&pstate->build_barrier))
 		{
+			/*
+			 * Late joining processes will see this state and give up
+			 * immediately.
+			 */
+			Assert(BarrierPhase(&pstate->build_barrier) == PHJ_BUILD_DONE);
+
 			if (DsaPointerIsValid(pstate->batches))
 			{
 				dsa_free(hashtable->area, pstate->batches);
 				pstate->batches = InvalidDsaPointer;
 			}
 		}
-
-		hashtable->parallel_state = NULL;
 	}
+	hashtable->parallel_state = NULL;
 }
 
 /*
```

src/backend/executor/nodeHashjoin.c

Lines changed: 38 additions & 16 deletions

```diff
@@ -45,7 +45,8 @@
  *  PHJ_BUILD_ALLOCATING    -- one sets up the batches and table 0
  *  PHJ_BUILD_HASHING_INNER -- all hash the inner rel
  *  PHJ_BUILD_HASHING_OUTER -- (multi-batch only) all hash the outer
- *  PHJ_BUILD_DONE          -- building done, probing can begin
+ *  PHJ_BUILD_RUNNING       -- building done, probing can begin
+ *  PHJ_BUILD_DONE          -- all work complete, one frees batches
  *
  * While in the phase PHJ_BUILD_HASHING_INNER a separate pair of barriers may
  * be used repeatedly as required to coordinate expansions in the number of
@@ -73,7 +74,7 @@
  * batches whenever it encounters them while scanning and probing, which it
  * can do because it processes batches in serial order.
  *
- * Once PHJ_BUILD_DONE is reached, backends then split up and process
+ * Once PHJ_BUILD_RUNNING is reached, backends then split up and process
  * different batches, or gang up and work together on probing batches if there
  * aren't enough to go around.  For each batch there is a separate barrier
  * with the following phases:
@@ -95,11 +96,16 @@
  *
  * To avoid deadlocks, we never wait for any barrier unless it is known that
  * all other backends attached to it are actively executing the node or have
- * already arrived.  Practically, that means that we never return a tuple
- * while attached to a barrier, unless the barrier has reached its final
- * state.  In the slightly special case of the per-batch barrier, we return
- * tuples while in PHJ_BATCH_PROBING phase, but that's OK because we use
- * BarrierArriveAndDetach() to advance it to PHJ_BATCH_DONE without waiting.
+ * finished.  Practically, that means that we never emit a tuple while attached
+ * to a barrier, unless the barrier has reached a phase that means that no
+ * process will wait on it again.  We emit tuples while attached to the build
+ * barrier in phase PHJ_BUILD_RUNNING, and to a per-batch barrier in phase
+ * PHJ_BATCH_PROBING.  These are advanced to PHJ_BUILD_DONE and PHJ_BATCH_DONE
+ * respectively without waiting, using BarrierArriveAndDetach().  The last to
+ * detach receives a different return value so that it knows that it's safe to
+ * clean up.  Any straggler process that attaches after that phase is reached
+ * will see that it's too late to participate or access the relevant shared
+ * memory objects.
  *
  *-------------------------------------------------------------------------
  */
@@ -296,7 +302,21 @@ ExecHashJoinImpl(PlanState *pstate, bool parallel)
 					 * outer relation.
 					 */
 					if (hashtable->totalTuples == 0 && !HJ_FILL_OUTER(node))
+					{
+						if (parallel)
+						{
+							/*
+							 * Advance the build barrier to PHJ_BUILD_RUNNING
+							 * before proceeding so we can negotiate resource
+							 * cleanup.
+							 */
+							Barrier    *build_barrier = &parallel_state->build_barrier;
+
+							while (BarrierPhase(build_barrier) < PHJ_BUILD_RUNNING)
+								BarrierArriveAndWait(build_barrier, 0);
+						}
 						return NULL;
+					}
 
 					/*
 					 * need to remember whether nbatch has increased since we
@@ -317,6 +337,7 @@ ExecHashJoinImpl(PlanState *pstate, bool parallel)
 
 					build_barrier = &parallel_state->build_barrier;
 					Assert(BarrierPhase(build_barrier) == PHJ_BUILD_HASHING_OUTER ||
+						   BarrierPhase(build_barrier) == PHJ_BUILD_RUNNING ||
 						   BarrierPhase(build_barrier) == PHJ_BUILD_DONE);
 					if (BarrierPhase(build_barrier) == PHJ_BUILD_HASHING_OUTER)
 					{
@@ -329,9 +350,18 @@ ExecHashJoinImpl(PlanState *pstate, bool parallel)
 						BarrierArriveAndWait(build_barrier,
 											 WAIT_EVENT_HASH_BUILD_HASH_OUTER);
 					}
-					Assert(BarrierPhase(build_barrier) == PHJ_BUILD_DONE);
+					else if (BarrierPhase(build_barrier) == PHJ_BUILD_DONE)
+					{
+						/*
+						 * If we attached so late that the job is finished and
+						 * the batch state has been freed, we can return
+						 * immediately.
+						 */
+						return NULL;
+					}
 
 					/* Each backend should now select a batch to work on. */
+					Assert(BarrierPhase(build_barrier) == PHJ_BUILD_RUNNING);
 					hashtable->curbatch = -1;
 					node->hj_JoinState = HJ_NEED_NEW_BATCH;
 
@@ -1090,14 +1120,6 @@ ExecParallelHashJoinNewBatch(HashJoinState *hjstate)
 	int			start_batchno;
 	int			batchno;
 
-	/*
-	 * If we started up so late that the batch tracking array has been freed
-	 * already by ExecHashTableDetach(), then we are finished.  See also
-	 * ExecParallelHashEnsureBatchAccessors().
-	 */
-	if (hashtable->batches == NULL)
-		return false;
-
 	/*
 	 * If we were already attached to a batch, remember not to bother checking
 	 * it again, and detach from it (possibly freeing the hash table if we are
```

src/include/executor/hashjoin.h

Lines changed: 2 additions & 1 deletion

```diff
@@ -258,7 +258,8 @@ typedef struct ParallelHashJoinState
 #define PHJ_BUILD_ALLOCATING			1
 #define PHJ_BUILD_HASHING_INNER			2
 #define PHJ_BUILD_HASHING_OUTER			3
-#define PHJ_BUILD_DONE					4
+#define PHJ_BUILD_RUNNING				4
+#define PHJ_BUILD_DONE					5
 
 /* The phases for probing each batch, used by for batch_barrier. */
 #define PHJ_BATCH_ELECTING				0
```
