Commit f7ae68a

Amit Kapila committed
Don't shut down Gather[Merge] early under Limit.
Revert part of commit 19df170. Early shutdown was added by that commit so
that we could collect statistics from workers, but unfortunately, it
interacted badly with rescans. The problem is that we ended up destroying
the parallel context, which is required for rescans. This caused rescans of
a Limit node over a Gather node to produce unpredictable results, as they
tried to access the destroyed parallel context. By reverting the early
shutdown code, we might lose statistics in some cases of Limit over
Gather [Merge], but fixing that will require further study.

Reported-by: Jerry Sievers
Diagnosed-by: Thomas Munro
Author: Amit Kapila, testcase by Vignesh C
Backpatch-through: 9.6
Discussion: https://postgr.es/m/87ims2amh6.fsf@jsievers.enova.com
1 parent bd743b6 · commit f7ae68a
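The failure needs a plan that rescans a Limit sitting above a Gather [Merge]; a nested loop whose inner side is a parallel LIMIT subquery produces exactly that shape. A minimal sketch of the pattern, with hypothetical table and column names (outer_tab, inner_tab, and k are illustrative; the regression test added below uses tenk1/tenk2):

-- Hypothetical tables: any parallel-capable table works.  The Nested Loop
-- rescans its inner side (Limit -> Gather Merge) once per outer row, so the
-- parallel context must survive past the first end-of-window.
explain (costs off)
select count(*)
from outer_tab o
  left join (select i.k from inner_tab i order by 1 limit 1000) ss
    on o.k < ss.k + 1
where o.k < 2;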

File tree

3 files changed: +67 −10 lines


src/backend/executor/nodeLimit.c

Lines changed: 6 additions & 8 deletions
@@ -129,19 +129,17 @@ ExecLimit(PlanState *pstate)
 				 * we are at the end of the window, return NULL without
 				 * advancing the subplan or the position variable; but change
 				 * the state machine state to record having done so.
+				 *
+				 * Once at the end, ideally, we can shut down parallel
+				 * resources but that would destroy the parallel context which
+				 * would be required for rescans.  To do that, we need to find
+				 * a way to pass down more information about whether rescans
+				 * are possible.
 				 */
 				if (!node->noCount &&
 					node->position - node->offset >= node->count)
 				{
 					node->lstate = LIMIT_WINDOWEND;
-
-					/*
-					 * If we know we won't need to back up, we can release
-					 * resources at this point.
-					 */
-					if (!(node->ps.state->es_top_eflags & EXEC_FLAG_BACKWARD))
-						(void) ExecShutdownNode(outerPlan);
-
 					return NULL;
 				}
 
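The reverted branch shut the outer plan down as soon as the window ended, which destroyed the parallel context a later rescan would need. As an analogy only, not PostgreSQL code, here is a self-contained C sketch of the hazard: a cursor that frees its shared state at end-of-scan cannot be rescanned.

#include <stdio.h>

/* "shared" stands in for the parallel context. */
typedef struct Cursor
{
	int	   *shared;
	int		pos;
	int		limit;
} Cursor;

static int
next_row(Cursor *c)
{
	if (c->pos >= c->limit)
	{
		/* The buggy variant would free c->shared here, breaking rescan. */
		return -1;				/* end of window */
	}
	return c->shared[c->pos++];
}

static void
rescan(Cursor *c)
{
	c->pos = 0;					/* valid only while c->shared still exists */
}

int
main(void)
{
	int		data[] = {10, 20, 30, 40, 50};
	Cursor	c = {data, 0, 3};
	int		v;

	while ((v = next_row(&c)) != -1)
		printf("%d ", v);
	rescan(&c);					/* safe: shared state was kept alive */
	while ((v = next_row(&c)) != -1)
		printf("%d ", v);
	printf("\n");
	return 0;
}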

src/test/regress/expected/select_parallel.out

Lines changed: 40 additions & 1 deletion
@@ -253,9 +253,48 @@ select * from
  9000 | 3
 (3 rows)
 
-reset enable_material;
+-- test rescans for a Limit node with a parallel node beneath it.
 reset enable_seqscan;
+set enable_indexonlyscan to off;
+set enable_indexscan to off;
+alter table tenk1 set (parallel_workers = 0);
+alter table tenk2 set (parallel_workers = 1);
+explain (costs off)
+select count(*) from tenk1
+	left join (select tenk2.unique1 from tenk2 order by 1 limit 1000) ss
+	on tenk1.unique1 < ss.unique1 + 1
+	where tenk1.unique1 < 2;
+                         QUERY PLAN
+------------------------------------------------------------
+ Aggregate
+   ->  Nested Loop Left Join
+         Join Filter: (tenk1.unique1 < (tenk2.unique1 + 1))
+         ->  Seq Scan on tenk1
+               Filter: (unique1 < 2)
+         ->  Limit
+               ->  Gather Merge
+                     Workers Planned: 1
+                     ->  Sort
+                           Sort Key: tenk2.unique1
+                           ->  Parallel Seq Scan on tenk2
+(11 rows)
+
+select count(*) from tenk1
+	left join (select tenk2.unique1 from tenk2 order by 1 limit 1000) ss
+	on tenk1.unique1 < ss.unique1 + 1
+	where tenk1.unique1 < 2;
+ count 
+-------
+  1999
+(1 row)
+
+--reset the value of workers for each table as it was before this test.
+alter table tenk1 set (parallel_workers = 4);
+alter table tenk2 reset (parallel_workers);
+reset enable_material;
 reset enable_bitmapscan;
+reset enable_indexonlyscan;
+reset enable_indexscan;
 -- test parallel bitmap heap scan.
 set enable_seqscan to off;
 set enable_indexscan to off;
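The expected count can be checked by hand: the outer scan returns tenk1.unique1 in {0, 1}, the inner Limit yields tenk2.unique1 in 0..999, and the condition tenk1.unique1 < ss.unique1 + 1 matches 1000 inner rows for unique1 = 0 and 999 for unique1 = 1, so 1000 + 999 = 1999. A hedged cross-check, assuming the standard regression database, is to disable parallelism and confirm a serial plan returns the same count:

set max_parallel_workers_per_gather = 0;  -- force a serial plan
select count(*) from tenk1
  left join (select tenk2.unique1 from tenk2 order by 1 limit 1000) ss
  on tenk1.unique1 < ss.unique1 + 1
  where tenk1.unique1 < 2;                -- should also return 1999
reset max_parallel_workers_per_gather;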

src/test/regress/sql/select_parallel.sql

Lines changed: 21 additions & 1 deletion
@@ -90,9 +90,29 @@ select * from
   (select count(*) from tenk1 where thousand > 99) ss
   right join (values (1),(2),(3)) v(x) on true;
 
-reset enable_material;
+-- test rescans for a Limit node with a parallel node beneath it.
 reset enable_seqscan;
+set enable_indexonlyscan to off;
+set enable_indexscan to off;
+alter table tenk1 set (parallel_workers = 0);
+alter table tenk2 set (parallel_workers = 1);
+explain (costs off)
+select count(*) from tenk1
+	left join (select tenk2.unique1 from tenk2 order by 1 limit 1000) ss
+	on tenk1.unique1 < ss.unique1 + 1
+	where tenk1.unique1 < 2;
+select count(*) from tenk1
+	left join (select tenk2.unique1 from tenk2 order by 1 limit 1000) ss
+	on tenk1.unique1 < ss.unique1 + 1
+	where tenk1.unique1 < 2;
+--reset the value of workers for each table as it was before this test.
+alter table tenk1 set (parallel_workers = 4);
+alter table tenk2 reset (parallel_workers);
+
+reset enable_material;
 reset enable_bitmapscan;
+reset enable_indexonlyscan;
+reset enable_indexscan;
 
 -- test parallel bitmap heap scan.
 set enable_seqscan to off;
