Commit df4685f

Minor optimizations based on ParallelContext having nworkers_launched.
Originally, we didn't have nworkers_launched, so code that used parallel contexts had to be prepared for the possibility that not all of the workers requested actually got launched. But now we can count on knowing the number of workers that were successfully launched, which can shave off a few cycles and simplify some code slightly.

Amit Kapila, reviewed by Haribabu Kommi, per a suggestion from Peter Geoghegan.
1 parent 546cd0d · commit df4685f
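The point of the cleanup is easiest to see as a before/after loop pattern: code that walks a ParallelContext's workers no longer has to iterate over all nworkers requested slots and skip the ones that never started, because the first nworkers_launched slots are exactly the workers that did start. The following is a minimal, self-contained C sketch of that pattern, not PostgreSQL source; the struct and function names are simplified stand-ins, and it assumes (as the commit relies on) that launched workers occupy the leading entries of the worker array.

#include <stddef.h>

/* Simplified stand-ins, not PostgreSQL structures. */
typedef struct WorkerSlotSketch
{
    void   *bgwhandle;          /* NULL if this slot's worker never started */
} WorkerSlotSketch;

typedef struct ParallelContextSketch
{
    int     nworkers;           /* workers requested */
    int     nworkers_launched;  /* workers successfully launched */
    WorkerSlotSketch *worker;   /* array of nworkers slots */
} ParallelContextSketch;

/* Before: visit every requested slot and skip the ones that never launched. */
void
visit_workers_before(ParallelContextSketch *pcxt)
{
    int     i;

    for (i = 0; i < pcxt->nworkers; ++i)
    {
        if (pcxt->worker[i].bgwhandle == NULL)
            continue;           /* this worker was never launched */
        /* ... operate on worker i ... */
    }
}

/* After: the first nworkers_launched slots are exactly the live workers. */
void
visit_workers_after(ParallelContextSketch *pcxt)
{
    int     i;

    for (i = 0; i < pcxt->nworkers_launched; ++i)
    {
        /* ... operate on worker i ... */
    }
}

The nodeGather.c hunk below is the clearest instance: once the loop bound is nworkers_launched, both the bgwhandle NULL check and the got_any_worker flag become unnecessary.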

File tree

3 files changed: 12 additions & 16 deletions


‎src/backend/access/transam/parallel.c

Lines changed: 4 additions & 4 deletions

@@ -520,7 +520,7 @@ WaitForParallelWorkersToFinish(ParallelContext *pcxt)
          */
         CHECK_FOR_INTERRUPTS();
 
-        for (i = 0; i < pcxt->nworkers; ++i)
+        for (i = 0; i < pcxt->nworkers_launched; ++i)
         {
             if (pcxt->worker[i].error_mqh != NULL)
             {
@@ -560,7 +560,7 @@ WaitForParallelWorkersToExit(ParallelContext *pcxt)
     int     i;
 
     /* Wait until the workers actually die. */
-    for (i = 0; i < pcxt->nworkers; ++i)
+    for (i = 0; i < pcxt->nworkers_launched; ++i)
     {
         BgwHandleStatus status;
 
@@ -610,7 +610,7 @@ DestroyParallelContext(ParallelContext *pcxt)
     /* Kill each worker in turn, and forget their error queues. */
     if (pcxt->worker != NULL)
     {
-        for (i = 0; i < pcxt->nworkers; ++i)
+        for (i = 0; i < pcxt->nworkers_launched; ++i)
         {
             if (pcxt->worker[i].error_mqh != NULL)
             {
@@ -708,7 +708,7 @@ HandleParallelMessages(void)
         if (pcxt->worker == NULL)
             continue;
 
-        for (i = 0; i < pcxt->nworkers; ++i)
+        for (i = 0; i < pcxt->nworkers_launched; ++i)
         {
             /*
              * Read as many messages as we can from each worker, but stop when

‎src/backend/executor/execParallel.c

Lines changed: 1 addition & 1 deletion

@@ -522,7 +522,7 @@ ExecParallelFinish(ParallelExecutorInfo *pei)
     WaitForParallelWorkersToFinish(pei->pcxt);
 
     /* Next, accumulate buffer usage. */
-    for (i = 0; i < pei->pcxt->nworkers; ++i)
+    for (i = 0; i < pei->pcxt->nworkers_launched; ++i)
         InstrAccumParallelQuery(&pei->buffer_usage[i]);
 
     /* Finally, accumulate instrumentation, if any. */

‎src/backend/executor/nodeGather.c

Lines changed: 7 additions & 11 deletions

@@ -153,7 +153,6 @@ ExecGather(GatherState *node)
     if (gather->num_workers > 0 && IsInParallelMode())
     {
         ParallelContext *pcxt;
-        bool        got_any_worker = false;
 
         /* Initialize the workers required to execute Gather node. */
         if (!node->pei)
@@ -169,29 +168,26 @@ ExecGather(GatherState *node)
         LaunchParallelWorkers(pcxt);
 
         /* Set up tuple queue readers to read the results. */
-        if (pcxt->nworkers > 0)
+        if (pcxt->nworkers_launched > 0)
         {
             node->nreaders = 0;
             node->reader =
-                palloc(pcxt->nworkers * sizeof(TupleQueueReader *));
+                palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
 
-            for (i = 0; i < pcxt->nworkers; ++i)
+            for (i = 0; i < pcxt->nworkers_launched; ++i)
             {
-                if (pcxt->worker[i].bgwhandle == NULL)
-                    continue;
-
                 shm_mq_set_handle(node->pei->tqueue[i],
                                   pcxt->worker[i].bgwhandle);
                 node->reader[node->nreaders++] =
                     CreateTupleQueueReader(node->pei->tqueue[i],
                                            fslot->tts_tupleDescriptor);
-                got_any_worker = true;
             }
         }
-
-        /* No workers? Then never mind. */
-        if (!got_any_worker)
+        else
+        {
+            /* No workers? Then never mind. */
             ExecShutdownGatherWorkers(node);
+        }
     }
 
     /* Run plan locally if no workers or not single-copy. */
