author    Robert Haas  2016-12-05 16:03:17 +0000
committer Robert Haas  2016-12-05 16:03:17 +0000
commit    0e50af245397c9bf3e7b02c0958be599de838fac (patch)
tree      7db0d4f9d6904e9fa0fa68491f52b95126f05636
parent    2b959d4957ff47c77b2518dcddbf3aa126a1593c (diff)
Assorted documentation improvements for max_parallel_workers.
Commit b460f5d6693103076dc554aa7cbb96e1e53074f9 overlooked a few bits
of documentation that seem like they should mention the new setting.
 doc/src/sgml/config.sgml   | 10
 doc/src/sgml/parallel.sgml | 17
 2 files changed, 23 insertions, 4 deletions
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index b917f9578a..0fc4e57d90 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1990,6 +1990,12 @@ include_dir 'conf.d'
         same or higher value than on the master server.  Otherwise, queries
         will not be allowed in the standby server.
        </para>
+
+       <para>
+        When changing this value, consider also adjusting
+        <xref linkend="guc-max-parallel-workers"> and
+        <xref linkend="guc-max-parallel-workers-per-gather">.
+       </para>
       </listitem>
      </varlistentry>

@@ -2047,6 +2053,10 @@ include_dir 'conf.d'
        parallel queries.  The default value is 8.  When increasing or
        decreasing this value, consider also adjusting
        <xref linkend="guc-max-parallel-workers-per-gather">.
+       Also, note that a setting for this value which is higher than
+       <xref linkend="guc-max-worker-processes"> will have no effect,
+       since parallel workers are taken from the pool of worker processes
+       established by that setting.
       </para>
      </listitem>
     </varlistentry>

diff --git a/doc/src/sgml/parallel.sgml b/doc/src/sgml/parallel.sgml
index 38a040ef75..f39c21a455 100644
--- a/doc/src/sgml/parallel.sgml
+++ b/doc/src/sgml/parallel.sgml
@@ -61,14 +61,15 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
     session will request a number of <link linkend="bgworker">background
     worker processes</link> equal to the number of workers chosen by the
     planner.  The total number of background
-    workers that can exist at any one time is limited by
-    <xref linkend="guc-max-worker-processes">, so it is possible for a
+    workers that can exist at any one time is limited by both
+    <xref linkend="guc-max-worker-processes"> and
+    <xref linkend="guc-max-parallel-workers">, so it is possible for a
     parallel query to run with fewer workers than planned, or even with
     no workers at all.  The optimal plan may depend on the number of workers
     that are available, so this can result in poor query performance.
     If this occurrence is frequent, considering increasing
-    <varname>max_worker_processes</> so that more workers can be run
-    simultaneously or alternatively reducing
+    <varname>max_worker_processes</> and <varname>max_parallel_workers</>
+    so that more workers can be run simultaneously or alternatively reducing
     <xref linkend="guc-max-parallel-workers-per-gather"> so that the
     planner requests fewer workers.
    </para>
@@ -205,6 +206,14 @@ EXPLAIN SELECT * FROM pgbench_accounts WHERE filler LIKE '%x%';
    <listitem>
     <para>
+      No background workers can be obtained because of the limitation that
+      the total number of background workers launched for purposes of
+      parallel query cannot exceed <xref linkend="guc-max-parallel-workers">.
+      </para>
+     </listitem>
+
+     <listitem>
+      <para>
       The client sends an Execute message with a non-zero fetch count.
       See the discussion of the <link linkend="protocol-flow-ext-query">extended
       query protocol</link>.
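The relationship the patch documents — parallel workers being drawn from the pool established by max_worker_processes, with max_parallel_workers_per_gather as a per-node cap — can be sketched as a configuration fragment. The values below are illustrative examples chosen for this sketch, not settings taken from or recommended by the commit:

```
# postgresql.conf — illustrative sketch only; values are examples,
# not recommendations from this commit.

max_worker_processes = 16            # total pool of background workers
max_parallel_workers = 8             # parallel workers come out of the pool
                                     # above, so a value here higher than
                                     # max_worker_processes has no effect
max_parallel_workers_per_gather = 4  # upper bound on workers the planner
                                     # will request for a single Gather node
```

With settings like these, a query planned with four workers can still launch fewer (or none) if concurrent sessions have already consumed the max_parallel_workers budget, which is exactly the under-provisioning scenario the parallel.sgml hunk describes.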