Commit 1599e7b

doc: Move parallel_leader_participation to its correct category
parallel_leader_participation got introduced in e5253fd, where it was listed under RESOURCES_ASYNCHRONOUS in guc.c, but the documentation did not reflect that and listed it with the other planner-related options. This commit fixes this inconsistency as the parameter is intended to be an asynchronous one.

While on it, reorganize a bit the section dedicated to asynchronous parameters, backend_flush_after being moved first to do better in terms of alphabetical order of the options listed.

Reported-by: Yanliang Lei
Author: Bharath Rupireddy
Discussion: https://postgr.es/m/16972-42d4b0c15aa1d5f5@postgresql.org
1 parent 7c298c6 · commit 1599e7b

1 file changed: +44 -44 lines changed


doc/src/sgml/config.sgml

Lines changed: 44 additions & 44 deletions
@@ -2383,6 +2383,36 @@ include_dir 'conf.d'
     <title>Asynchronous Behavior</title>

     <variablelist>
+     <varlistentry id="guc-backend-flush-after" xreflabel="backend_flush_after">
+      <term><varname>backend_flush_after</varname> (<type>integer</type>)
+      <indexterm>
+       <primary><varname>backend_flush_after</varname> configuration parameter</primary>
+      </indexterm>
+      </term>
+      <listitem>
+       <para>
+        Whenever more than this amount of data has
+        been written by a single backend, attempt to force the OS to issue
+        these writes to the underlying storage. Doing so will limit the
+        amount of dirty data in the kernel's page cache, reducing the
+        likelihood of stalls when an <function>fsync</function> is issued at the end of a
+        checkpoint, or when the OS writes data back in larger batches in the
+        background. Often that will result in greatly reduced transaction
+        latency, but there also are some cases, especially with workloads
+        that are bigger than <xref linkend="guc-shared-buffers"/>, but smaller
+        than the OS's page cache, where performance might degrade. This
+        setting may have no effect on some platforms.
+        If this value is specified without units, it is taken as blocks,
+        that is <symbol>BLCKSZ</symbol> bytes, typically 8kB.
+        The valid range is
+        between <literal>0</literal>, which disables forced writeback,
+        and <literal>2MB</literal>. The default is <literal>0</literal>, i.e., no
+        forced writeback. (If <symbol>BLCKSZ</symbol> is not 8kB,
+        the maximum value scales proportionally to it.)
+       </para>
+      </listitem>
+     </varlistentry>
+
     <varlistentry id="guc-effective-io-concurrency" xreflabel="effective_io_concurrency">
      <term><varname>effective_io_concurrency</varname> (<type>integer</type>)
       <indexterm>
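For context on the parameter added above: backend_flush_after is a regular run-time setting (listed under RESOURCES_ASYNCHRONOUS in guc.c, as the commit message notes), so it can be set in postgresql.conf or per session. A minimal sketch, assuming an 8kB BLCKSZ and that the parameter is user-settable; the 512kB figure is purely illustrative and not part of this commit:

    -- postgresql.conf (or ALTER SYSTEM): ask the OS to write back after 512kB
    -- backend_flush_after = 512kB

    -- Per session: without units the value is taken in BLCKSZ-sized blocks,
    -- so 64 blocks of 8kB is the same 512kB; 0 (the default) disables it.
    SET backend_flush_after = 64;
    SHOW backend_flush_after;    -- shown as 512kB when BLCKSZ is 8kB
    RESET backend_flush_after;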
@@ -2579,32 +2609,25 @@ include_dir 'conf.d'
       </listitem>
      </varlistentry>

-     <varlistentry id="guc-backend-flush-after" xreflabel="backend_flush_after">
-      <term><varname>backend_flush_after</varname> (<type>integer</type>)
+     <varlistentry id="guc-parallel-leader-participation" xreflabel="parallel_leader_participation">
+      <term>
+       <varname>parallel_leader_participation</varname> (<type>boolean</type>)
       <indexterm>
-       <primary><varname>backend_flush_after</varname> configuration parameter</primary>
+        <primary><varname>parallel_leader_participation</varname> configuration parameter</primary>
       </indexterm>
      </term>
      <listitem>
       <para>
-        Whenever more than this amount of data has
-        been written by a single backend, attempt to force the OS to issue
-        these writes to the underlying storage. Doing so will limit the
-        amount of dirty data in the kernel's page cache, reducing the
-        likelihood of stalls when an <function>fsync</function> is issued at the end of a
-        checkpoint, or when the OS writes data back in larger batches in the
-        background. Often that will result in greatly reduced transaction
-        latency, but there also are some cases, especially with workloads
-        that are bigger than <xref linkend="guc-shared-buffers"/>, but smaller
-        than the OS's page cache, where performance might degrade. This
-        setting may have no effect on some platforms.
-        If this value is specified without units, it is taken as blocks,
-        that is <symbol>BLCKSZ</symbol> bytes, typically 8kB.
-        The valid range is
-        between <literal>0</literal>, which disables forced writeback,
-        and <literal>2MB</literal>. The default is <literal>0</literal>, i.e., no
-        forced writeback. (If <symbol>BLCKSZ</symbol> is not 8kB,
-        the maximum value scales proportionally to it.)
+        Allows the leader process to execute the query plan under
+        <literal>Gather</literal> and <literal>Gather Merge</literal> nodes
+        instead of waiting for worker processes. The default is
+        <literal>on</literal>. Setting this value to <literal>off</literal>
+        reduces the likelihood that workers will become blocked because the
+        leader is not reading tuples fast enough, but requires the leader
+        process to wait for worker processes to start up before the first
+        tuples can be produced. The degree to which the leader can help or
+        hinder performance depends on the plan type, number of workers and
+        query duration.
       </para>
      </listitem>
     </varlistentry>
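Similarly, a minimal sketch of the relocated parallel_leader_participation parameter in use; the table name and worker count below are hypothetical and only illustrate the behavior described in the entry above:

    -- Keep the leader process out of plan execution for this session, then
    -- look at a parallel plan: only the workers will feed the Gather node.
    SET parallel_leader_participation = off;
    SET max_parallel_workers_per_gather = 4;       -- hypothetical worker count
    EXPLAIN (COSTS OFF)
        SELECT count(*) FROM pgbench_accounts;     -- hypothetical large table
    RESET parallel_leader_participation;
    RESET max_parallel_workers_per_gather;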
@@ -5889,29 +5912,6 @@ SELECT * FROM parent WHERE key = 2400;
       </listitem>
      </varlistentry>

-     <varlistentry id="guc-parallel-leader-participation" xreflabel="parallel_leader_participation">
-      <term>
-       <varname>parallel_leader_participation</varname> (<type>boolean</type>)
-       <indexterm>
-        <primary><varname>parallel_leader_participation</varname> configuration parameter</primary>
-       </indexterm>
-      </term>
-      <listitem>
-       <para>
-        Allows the leader process to execute the query plan under
-        <literal>Gather</literal> and <literal>Gather Merge</literal> nodes
-        instead of waiting for worker processes. The default is
-        <literal>on</literal>. Setting this value to <literal>off</literal>
-        reduces the likelihood that workers will become blocked because the
-        leader is not reading tuples fast enough, but requires the leader
-        process to wait for worker processes to start up before the first
-        tuples can be produced. The degree to which the leader can help or
-        hinder performance depends on the plan type, number of workers and
-        query duration.
-       </para>
-      </listitem>
-     </varlistentry>
-
     <varlistentry id="guc-plan-cache_mode" xreflabel="plan_cache_mode">
      <term><varname>plan_cache_mode</varname> (<type>enum</type>)
      <indexterm>

0 commit comments
