authorPavan Deolasee2015-07-31 05:44:34 +0000
committerPavan Deolasee2015-09-14 05:38:48 +0000
commitb73b54f56b847ea0f0304bea2ba83ace0c0999ec (patch)
tree20b4ca95055ba80be88df7add0b2c0623f809a39
parent60f4a4f0f9b795a94fac6c8e566d99ddcd185c54 (diff)
Correct grammar mistakes and improve consistency in the usage of component
names. Patch by Mark Wong
-rw-r--r--doc/src/sgml/installation.sgml458
-rw-r--r--doc/src/sgml/intro.sgml237
-rw-r--r--doc/src/sgml/lobj.sgml4
-rw-r--r--doc/src/sgml/maintenance.sgml28
-rw-r--r--doc/src/sgml/manage-ag.sgml2
-rw-r--r--doc/src/sgml/mvcc.sgml68
-rw-r--r--doc/src/sgml/oid2name.sgml6
-rw-r--r--doc/src/sgml/pageinspect.sgml6
-rw-r--r--doc/src/sgml/perform.sgml19
-rw-r--r--doc/src/sgml/pgfreespacemap.sgml7
-rw-r--r--doc/src/sgml/pgstattuple.sgml7
-rw-r--r--doc/src/sgml/pgxc_ctl-ref.sgml532
-rw-r--r--doc/src/sgml/pgxcclean.sgml12
-rw-r--r--doc/src/sgml/pgxcddl.sgml46
-rw-r--r--doc/src/sgml/pgxcmonitor.sgml26
-rw-r--r--doc/src/sgml/plperl.sgml5
-rw-r--r--doc/src/sgml/ref/alter_node.sgml4
-rw-r--r--doc/src/sgml/ref/alter_table.sgml58
-rw-r--r--doc/src/sgml/ref/checkpoint.sgml4
-rw-r--r--doc/src/sgml/ref/clean_connection.sgml11
-rw-r--r--doc/src/sgml/ref/commit_prepared.sgml4
-rw-r--r--doc/src/sgml/ref/create_function.sgml17
-rw-r--r--doc/src/sgml/ref/create_node.sgml13
-rw-r--r--doc/src/sgml/ref/create_nodegroup.sgml6
-rw-r--r--doc/src/sgml/ref/create_table_as.sgml6
-rw-r--r--doc/src/sgml/ref/execute_direct.sgml8
-rw-r--r--doc/src/sgml/ref/gtm.sgml34
-rw-r--r--doc/src/sgml/runtime.sgml187
-rw-r--r--doc/src/sgml/start.sgml8
29 files changed, 870 insertions, 953 deletions
diff --git a/doc/src/sgml/installation.sgml b/doc/src/sgml/installation.sgml
index 73baef7356..b8bdd0ec20 100644
--- a/doc/src/sgml/installation.sgml
+++ b/doc/src/sgml/installation.sgml
@@ -1532,24 +1532,25 @@ PostgreSQL, contrib and HTML documentation successfully made. Ready to install.
<title>Installing the Files</title>
<para>
- Before learning how to install <productname>Postgres-XL</>, you should determine
- what you are going to install on each server. The following lists
- <productname>Postgres-XL</> components you've built and you're going to install.
+ Before learning how to install <productname>Postgres-XL</>, you should
+ determine what you are going to install on each server. The following
+ lists the <productname>Postgres-XL</> components that you've built and
+ are going to install.
<variablelist>
<varlistentry>
<term><envar>GTM</envar></term>
<listitem>
<para>
- GTM stands for Global Transaction Manager. It provides global transaction ID
- and snapshot to each transaction in <productname>Postgres-XL</> database cluster.
- It also provide several global value such as sequence and global timestamp.
+ GTM stands for Global Transaction Manager. It provides global
+ transaction IDs and snapshots to each transaction in the
+ <productname>Postgres-XL</> database cluster. It also provides several
+ global values such as sequences and global timestamps.
</para>
<para>
- GTM itself can be configured as a backup of other GTM as
- GTM-Standby so that GTM can continue to run even if main GTM
- fails. You may want to install GTM-Standby to separate
- server.
+ GTM itself can be configured as a backup of another GTM as a
+ GTM-Standby so that the GTM can continue to run even if the main GTM
+ fails. You may want to install a GTM-Standby to a separate server.
</para>
</listitem>
</varlistentry>
@@ -1558,12 +1559,12 @@ PostgreSQL, contrib and HTML documentation successfully made. Ready to install.
<term><envar>GTM-Proxy</envar></term>
<listitem>
<para>
- Because GTM has to take care of each transaction, it has to
- read and write enormous amount of messages which may
- restrict <productname>Postgres-XL</> scalability. GTM-Proxy is
- a proxy of GTM feature which groups requests and response to
+ Because the GTM has to take care of each transaction, it has to
+ read and write an enormous number of messages, which may
+ restrict <productname>Postgres-XL</>'s scalability. GTM-Proxy is
+ a proxy of the GTM feature that groups requests and responses to
reduce network read/write by GTM. Distributing one snapshot to
- multiple transactions also contributes to reduce GTM network
+ multiple transactions also contributes to reducing the GTM network
workload.
</para>
</listitem>
@@ -1573,16 +1574,17 @@ PostgreSQL, contrib and HTML documentation successfully made. Ready to install.
<term><envar>Coordinator</envar></term>
<listitem>
<para>
- The Coordinator is an entry point to <productname>Postgres-XL</> from applications.
- You can run more than one Coordinator simultaneously in the cluster. Each Coordinator behaves
- just as a <productname>PostgreSQL</> database server, while all the Coordinators
- handles transactions in harmonized way so that any transaction coming into one
- Coordinator is protected against any other transactions coming into others.
- Updates by a transaction is visible immediately to others running in other
- Coordinators.
- To simplify the load balance of Coordinators and Datanodes, as mentioned
- below, it is highly advised to install same number of Coordinator and Datanode
- in a server.
+ The Coordinator is an entry point to <productname>Postgres-XL</> from
+ applications. You can run more than one Coordinator simultaneously in
+ the cluster. Each Coordinator behaves just as a
+ <productname>PostgreSQL</> database server, while all the Coordinators
+ handle transactions in a harmonized way so that any transaction coming
+ into one Coordinator is protected against any other transactions coming
+ into others. Updates by a transaction are visible immediately to others
+ running in other Coordinators. To simplify the load balancing of
+ Coordinators and Datanodes, as mentioned below, it is highly
+ recommended to install the same number of Coordinators and Datanodes
+ on a server.
</para>
</listitem>
</varlistentry>
@@ -1593,14 +1595,15 @@ PostgreSQL, contrib and HTML documentation successfully made. Ready to install.
Datanode
</para>
<para>
- The Coordinator and Datanode shares the same binary but their behavior is a little
- different. The Coordinator decomposes incoming statements into those handled by
- Datanodes. If necessary, the Coordinator materializes response from Datanodes
- to calculate final response to applications.
+ A Coordinator and a Datanode share the same binaries but their behavior
+ is a little different. The Coordinator decomposes incoming statements
+ into those handled by Datanodes. If necessary, the Coordinator
+ materializes responses from Datanodes to calculate the final response to
+ applications.
</para>
<para>
- The Datanode is very close to PostgreSQL itself because it just handles incoming
- statements locally.
+ The Datanode is very close to PostgreSQL itself because it just handles
+ incoming statements locally.
</para>
</listitem>
</varlistentry>
@@ -1868,12 +1871,11 @@ export MANPATH
</para>
<para>
- When, as typical case, you're configuring both Coordinator and
- Datanode in a same server, please be careful not to assign same
- resource, such as listening point (IP address and port number) to
- different component. If you apply single set of environment
- described here to different components, they will conflict
- and <productname>Postgres-XL</> will not run correctly.
+ When, as is typical, you're configuring both a Coordinator and a Datanode
+ on the same server, please be careful not to assign the same resource,
+ such as a listening point (IP address and port number), to different
+ components. Otherwise they will conflict and <productname>Postgres-XL</>
+ will not run correctly.
</para>
</sect2>
</sect1>
@@ -1906,59 +1908,50 @@ export MANPATH
<step>
<para>
- If you follow the previous steps, you will have files ready to
- distribute to servers where you want to run one or
- more <productname>Postgres-XL</> components.
+ If you follow the previous steps, you will have files ready to distribute
+ to servers where you want to run one or more <productname>Postgres-XL</>
+ components.
</para>
<para>
- After you've installed your build locally, build target
- will include the following directories.
+ After you've installed your build locally, the build target will include
+ the following directories.
<screen>
bin/ include/ lib/ share/
</screen>
- <filename>bin</> directory contains executable binaries and
- scripts. <filename>include</> contains header files needed to
- build <productname>Postgres-XL</> applications. <filename>lib</>
- contains shared libraries needed to run binaries, as well as
- static libraries which should be included into your application
- binaries. Finally, <productname>share</> contains miscellaneous
- files <productname>Postgres-XL</> should read at runtime, as well
- as sample files.
-
+ The <filename>bin</> directory contains executable binaries and scripts.
+ The <filename>include</> directory contains header files needed to build
+ <productname>Postgres-XL</> applications. The <filename>lib</> directory
+ contains shared libraries needed to run binaries, as well as static
+ libraries that should be included into your application binaries.
+ Finally, the <productname>share</> directory contains miscellaneous files
+ that <productname>Postgres-XL</> should read at runtime, as well as sample
+ files.
</para>
-
<para>
-
- If your servers has sufficient file space, you can copy all the
- files to the target server. Total size is less than 30mega
- bytes. If you want to install minimum files to each servers,
- please follow the following paragraphs.
-
+ If your servers have sufficient file space, you can copy all the files to
+ the target server. The total size is less than 30 megabytes. If you want
+ to install minimal files on each server, please follow the paragraphs
+ below.
</para>
-
<para>
- For the server to run GTM or GTM-Standby, you need to copy the
- following files to your path: <filename>bin/gtm</> and <filename>bin/gtm_ctl</>.
+ For the server to run a GTM or a GTM-Standby, you need to copy the
+ following files to your path: <filename>bin/gtm</> and
+ <filename>bin/gtm_ctl</>.
</para>
-
<para>
- For the server to run GTM-Proxy (the server you run Coordinator and/or Datanode),
- you need to copy the following files to your path: <filename>bin/gtm_proxy</filename>
- and <filename>bin/gtm_ctl</>.
+ For a server to run a GTM-Proxy (the server you run a Coordinator and/or
+ a Datanode), you need to copy the following files to your path:
+ <filename>bin/gtm_proxy</filename> and <filename>bin/gtm_ctl</>.
</para>
-
<para>
- For server to run Coordinator or Datanode, or both, you should
- copy the following files to your
- path: <filename>bin/initdb</>.
- You should also copy everything in <filename>path</> directory to
- your library search path.
+ For a server to run a Coordinator or a Datanode, or both, you should copy
+ the following files to your path: <filename>bin/initdb</>. You should
+ also copy everything in the <filename>path</> directory to your library
+ search path.
</para>
-
</step>
-
<step>
<para>
Create a database installation with the <command>initdb</>
@@ -1983,9 +1976,8 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
</para>
<para>
- If you're configuring both Datanodes and Coordinators on the same
- server, you should specify different <option>-D</> values for
- each of them.
+ If you're configuring both Datanodes and Coordinators on the same server,
+ you should specify different <option>-D</> values for each of them.
</para>
</step>
@@ -2000,35 +1992,33 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<step>
<para>
- You should configure GTM and GTM-Proxy, as well as
- GTM-Standby if you need high-availability capability for GTM
- before you really run <productname>Postgres-XL</> database
- cluster. You can do the following before you
- run <command>initdb</>.
+ You should configure a GTM and a GTM-Proxy, as well as a GTM-Standby if
+ you need high-availability capability for the GTM, before you actually
+ run a <productname>Postgres-XL</> database cluster. You can do the
+ following before you run <command>initdb</>.
</para>
<para>
- Each of GTM, GTM-Proxy and GTM need their own working directories.
- Create them as <productname>Postgres-XL</> owner user. Please
- assign a port number to each of them, although you don't have to do
- any configuration work now.
+ Each GTM, GTM-Proxy and GTM-Standby needs its own working directory.
+ Create them as the <productname>Postgres-XL</> owner user. Please
+ assign a port number to each of them, although you don't have to do any
+ configuration work now.
</para>
</step>
<step>
<para>
- Now you should configure each Coordinator and Datanode. Because
- they have to communicate each other and number of servers,
- Datanodes and Coordinators depend upon configurations, we do not
- provide default configuration files for them.
+ Now you should configure each Coordinator and Datanode. Because they
+ have to communicate with each other, and the number of servers,
+ Datanodes and Coordinators depends upon the configuration, we do not
+ provide default configuration files for them.
</para>
<para>
- You can configure Datanodes and Coordinators by
- editing the <filename>postgresql.conf</> file located under the
- directory you specified with <option>-D</> option
- of <command>initdb</>. The following paragraphs describe what
- parameter to edit at for Coordinators. You can specify
- any other <filename>postgresql.conf</> parameters as
- standalone <productname>PostgreSQL</>.
+ You can configure Datanodes and Coordinators by editing the
+ <filename>postgresql.conf</> file located under the directory you
+ specified with the <option>-D</> option of <command>initdb</>. The
+ following paragraphs describe which parameters to edit for Coordinators.
+ You can specify any other <filename>postgresql.conf</> parameters as in
+ standalone <productname>PostgreSQL</>.
</para>
<variablelist>
@@ -2036,13 +2026,13 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>max_prepared_transactions</envar></term>
<listitem>
<para>
- <option>max_prepared_transactions</> specifies maximum number
- of two-phase commit transactions. Even if you don't use
- explicit two phase commit operation, the Coordinator may issue
- two-phase commit operation implicitly if a transaction is
- involved with multiple Datanodes and/or Coordinators. You should
- specify <option>max_prepared_transactions</> value at
- least the number of <option>max_connection</>.
+ <option>max_prepared_transactions</> specifies the maximum number of
+ two-phase commit transactions. Even if you don't use explicit
+ two-phase commit operations, the Coordinator may issue a two-phase
+ commit operation implicitly if a transaction involves multiple
+ Datanodes and/or Coordinators. You should set
+ <option>max_prepared_transactions</> to at least the value of
+ <option>max_connections</>.
</para>
</listitem>
</varlistentry>
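As a rough illustration of this sizing rule, a Coordinator's <filename>postgresql.conf</> might contain the following (the numbers are hypothetical, not from the source):

```
# hypothetical Coordinator postgresql.conf fragment
max_connections = 100
# at least the value of max_connections, per the rule above
max_prepared_transactions = 100
```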
@@ -2051,12 +2041,11 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>min_pool_size</envar></term>
<listitem>
<para>
- Coordinator is associated with a connection pooler which takes
- care of connection with other Coordinators and Datanodes. This
- parameter specifies minimum number of connection to pool.
- If you're not configuring <productname>XL</> cluster in
- unbalanced way, you should specify the same value to all the
- Coordinators.
+ A Coordinator is associated with a connection pooler which takes care of
+ connections with other Coordinators and Datanodes. This parameter
+ specifies the minimum number of connections to pool. If you're not
+ configuring the <productname>Postgres-XL</> cluster in an unbalanced
+ way, you should specify the same value for all the Coordinators.
</para>
</listitem>
</varlistentry>
@@ -2065,14 +2054,13 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>max_pool_size</envar></term>
<listitem>
<para>
- This parameter specifies maximum number of the pooled
- connection. This value should be at least more than the number
- of all the Coordinators and Datanodes. If you specify less
- value, you will see very frequent close and ope connection
- which leads to serious performance problem.
- If you're not configuring <productname>XL</> cluster in
- unbalanced way, you should specify the same value to all the
- Coordinators.
+ This parameter specifies the maximum number of pooled connections. This
+ value should be greater than the total number of Coordinators and
+ Datanodes. If you specify a smaller value, you will see very frequent
+ connection opens and closes, which leads to serious performance
+ problems. If you're not configuring a <productname>Postgres-XL</>
+ cluster in an unbalanced way, you should specify the same value for all
+ the Coordinators.
</para>
</listitem>
</varlistentry>
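The sizing rule above can be sketched with hypothetical numbers (the cluster shape is assumed, not from the source):

```shell
# hypothetical cluster: 4 Coordinators and 8 Datanodes
NUM_COORDINATORS=4
NUM_DATANODES=8
# max_pool_size should exceed the total number of nodes a backend may reach
MIN_MAX_POOL_SIZE=$((NUM_COORDINATORS + NUM_DATANODES + 1))
echo "$MIN_MAX_POOL_SIZE"   # 13
```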
@@ -2081,13 +2069,13 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>pool_conn_keepalive</envar></term>
<listitem>
<para>
- This parameter specifies how long to keep the connection alive.
- If older than this amount, the pooler discards the connection.
- This parameter is useful in multi-tenant environments where
- many connections to many different databases may be used,
- so that idle connections may cleaned up. It is also useful
- for automatically closing connections occasionally in case
- there is some unknown memory leak so that this memory can be freed.
+ This parameter specifies how long to keep a connection alive. If a
+ connection is older than this amount, the pooler discards it. This
+ parameter is useful in multi-tenant environments where many connections
+ to many different databases may be used, so that idle connections may be
+ cleaned up. It is also useful for automatically closing connections
+ occasionally in case there is some unknown memory leak, so that this
+ memory can be freed.
</para>
</listitem>
</varlistentry>
@@ -2096,12 +2084,11 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>pool_maintenance_timeout</envar></term>
<listitem>
<para>
- This parameter specifies how long to wait until pooler
- maintenance is performed. During such maintenance,
- old idle connections are discarded.
- This parameter is useful in multi-tenant environments where
- many connections to many different databases may be used,
- so that idle connections may cleaned up.
+ This parameter specifies how long to wait until pooler maintenance is
+ performed. During such maintenance, old idle connections are discarded.
+ This parameter is useful in multi-tenant environments where many
+ connections to many different databases may be used, so that idle
+ connections may be cleaned up.
</para>
</listitem>
</varlistentry>
@@ -2110,9 +2097,8 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>remote_query_cost</envar></term>
<listitem>
<para>
- This parameter specifies the cost overhead of setting up
- a remote query to obtain remote data. It is used by
- the planner in costing queries.
+ This parameter specifies the cost overhead of setting up a remote query
+ to obtain remote data. It is used by the planner in costing queries.
</para>
</listitem>
</varlistentry>
@@ -2121,11 +2107,10 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>network_byte_cost</envar></term>
<listitem>
<para>
- This parameter is used in query cost planning to
- estimate the cost involved in row shipping and obtaining
- remote data based on the expected data size.
- Row shipping is expensive and adds latency, so this
- setting helps to favor plans that minimizes row shipping.
+ This parameter is used in query cost planning to estimate the cost
+ involved in row shipping and obtaining remote data based on the expected
+ data size. Row shipping is expensive and adds latency, so this setting
+ helps to favor plans that minimize row shipping.
</para>
</listitem>
</varlistentry>
@@ -2134,16 +2119,15 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>sequence_range</envar></term>
<listitem>
<para>
- This parameter is used to get several sequence values
- at once from GTM. This greatly speeds up COPY and INSERT SELECT
- operations where the target table uses sequences.
- <productname>Postgres-XL</productname> will not use this entire
- amount at once, but will increase the request size over
- time if many requests are done in a short time frame in
- the same session.
- After a short time without any sequence requests, decreases back down to 1.
- Note that any settings here are overriden if the CACHE clause was
- used in <xref linkend='sql-createsequence'> or <xref linkend='sql-altersequence'>.
+ This parameter is used to get several sequence values at once from the
+ GTM. This greatly speeds up COPY and INSERT SELECT operations where
+ the target table uses sequences. <productname>Postgres-XL</productname>
+ will not use this entire amount at once, but will increase the request
+ size over time if many requests are done in a short time frame in the
+ same session. After a short time without any sequence requests, the
+ request size decreases back down to 1. Note that any settings here
+ are overridden if the CACHE clause was used in <xref
+ linkend='sql-createsequence'> or <xref linkend='sql-altersequence'>.
</para>
</listitem>
</varlistentry>
@@ -2152,8 +2136,8 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>max_coordinators</envar></term>
<listitem>
<para>
- This parameter specifies maximum number of the Coordinators that can
- be added to the cluster. Cluster would have to be restarted to increase
+ This parameter specifies the maximum number of Coordinators that can be
+ added to the cluster. The cluster will have to be restarted to increase
the value.
</para>
</listitem>
@@ -2163,8 +2147,8 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>max_datanodes</envar></term>
<listitem>
<para>
- This parameter specifies maximum number of the Datanodes that can
- be added to the cluster. Cluster would have to be restarted to increase
+ This parameter specifies the maximum number of Datanodes that can be
+ added to the cluster. The cluster will have to be restarted to increase
the value.
</para>
</listitem>
@@ -2194,7 +2178,7 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>pooler_port</envar></term>
<listitem>
<para>
- Connection pooler needs separate port.
+ The connection pooler needs a separate port.
</para>
</listitem>
</varlistentry>
@@ -2203,9 +2187,9 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>gtm_port</envar></term>
<listitem>
<para>
- Specify the port number of gtm you're connecting to. This is
- local to the server and you should specify the port assigned to
- the GTM-Proxy local to the Coordinator.
+ Specify the port number of the GTM you're connecting to. This is local
+ to the server and you should specify the port assigned to the GTM-Proxy
+ local to the Coordinator.
</para>
</listitem>
</varlistentry>
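Putting the port-related parameters together, a Coordinator configuration might look like this (the port numbers are hypothetical, chosen to match the gtm_proxy and psql examples elsewhere in this section):

```
# hypothetical Coordinator postgresql.conf fragment
port = 20004             # the Coordinator's own listening port
pooler_port = 20010      # separate port for the connection pooler
gtm_port = 20002         # port of the GTM-Proxy local to this server
```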
@@ -2214,10 +2198,10 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<step>
<para>
- Now you should configure <filename>postgresql.conf</> for each
- Datanodes. Please note, as in the case of Coordinator, you can
- specify other <filename>postgresql.conf</> parameters as in
- standalone <productname>PostgreSQL</>.
+ Now you should configure <filename>postgresql.conf</> for each Datanode.
+ Please note, as in the case of the Coordinator, you can specify other
+ <filename>postgresql.conf</> parameters as in standalone
+ <productname>PostgreSQL</>.
</para>
<variablelist>
@@ -2226,12 +2210,12 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>max_connections</envar></term>
<listitem>
<para>
- <option>max_connections</> is, in short, a maximum number of
- background processes of the Datanode. You should be careful
- to specify reasonable value to this parameter because each
- Coordinator backend may have connections to all the Datanodes.
- You should specify this value as <option>max_connections</> of
- Coordinator multiplied by the number of Coordinators.
+ <option>max_connections</> is, in short, the maximum number of background
+ processes of the Datanode. You should be careful to specify a
+ reasonable value for this parameter because each Coordinator backend may
+ have connections to all the Datanodes. You should specify this value as
+ the <option>max_connections</> of a Coordinator multiplied by the number
+ of Coordinators.
</para>
</listitem>
</varlistentry>
@@ -2240,13 +2224,13 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>max_prepared_transactions</envar></term>
<listitem>
<para>
- <option>max_prepared_transactions</> specifies maximum number
- of two-phase commit transactions. Even if you don't use
- explicit two phase commit operation, Coordinator may issue
- two-phase commit operation implicitly if a transaction is
- involved with multiple Datanodes and/or Coordinators. The value
- of this parameter should be at least the value
- of <option>max_connections</> of Coordinator multiplied by the number of Coordinators.
+ <option>max_prepared_transactions</> specifies the maximum number of
+ two-phase commit transactions. Even if you don't use explicit
+ two-phase commit operations, a Coordinator may issue a two-phase commit
+ operation implicitly if a transaction involves multiple
+ Datanodes and/or Coordinators. The value of this parameter should be
+ at least the value of <option>max_connections</> of the Coordinator
+ multiplied by the number of Coordinators.
</para>
</listitem>
</varlistentry>
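The multiplication rule above, sketched with hypothetical numbers (not from the source):

```shell
# hypothetical: each of 4 Coordinators allows 100 connections
COORD_MAX_CONNECTIONS=100
NUM_COORDINATORS=4
# Datanode max_prepared_transactions (and max_connections) should be at
# least Coordinator max_connections multiplied by the Coordinator count
DATANODE_MIN=$((COORD_MAX_CONNECTIONS * NUM_COORDINATORS))
echo "$DATANODE_MIN"   # 400
```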
@@ -2263,9 +2247,9 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>gtm_port</envar></term>
<listitem>
<para>
- Specify the port number of gtm you're connecting to. This is
- local to the server and you should specify the port assigned to
- the GTM-Proxy local to the Datanode.
+ Specify the port number of the GTM that you're connecting to. This is
+ local to the server and you should specify the port assigned to the
+ GTM-Proxy local to the Datanode.
</para>
</listitem>
</varlistentry>
@@ -2273,8 +2257,8 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
</variablelist>
<para>
- Postgres-XL introduces some additional parameters for the
- Datanodes as well
+ Postgres-XL introduces some additional parameters for the Datanodes as
+ well.
</para>
<variablelist>
@@ -2283,17 +2267,17 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<term><envar>shared_queues</envar></term>
<listitem>
<para>
- For some joins that occur in queries, data from one Datanode may
- need to be joined with data from another Datanode.
- <productname>Postgres-XL</productname> uses shared queues for this purpose.
- During execution each Datanode knows if it needs to produce or consume
- tuples, or both.
+ For some joins that occur in queries, data from one Datanode may need to
+ be joined with data from another Datanode.
+ <productname>Postgres-XL</productname> uses shared queues for this
+ purpose. During execution each Datanode knows if it needs to produce or
+ consume tuples, or both.
</para>
<para>
Note that there may be multiple shared_queues used even for a single
query. So a value should be set taking into account the number of
- connections it can accept and expected number of such joins occurring
- simultaneously.
+ connections it can accept and the expected number of such joins
+ occurring simultaneously.
</para>
</listitem>
@@ -2314,94 +2298,90 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<step>
<para>
Then you are ready to start the <productname>Postgres-XL</> cluster.
- First, you should start GTM bu something like:
+ First, you should start the GTM with something like:
<programlisting>
gtm -D /usr/local/pgsql/gtm -h localhost -p 20001 -n 1 -x 1000
</programlisting>
- This will start GTM. <option>-h</> specifies IP address or host
- name to listen to the connection
- from <command>gtm_standby</>. <option>-p</> specifies the port
- number to listen to. <option>-n</> specifies the node number
- within GTM. This identifies GTM especially when you run
- GTM-Standby. <option>-x</> option specifies the value of initial
- Global Transaction ID. We need to specify this
- because <command>initdb</> consumes some of the transaction ID
- locally and GTM must to begin to provide global transaction ID
- greater than the consumed one. A value of at least 2000 is believed
- to be safe.
+ This will start the GTM. <option>-h</> specifies the IP address or host
+ name on which to listen for connections from <command>gtm_standby</>.
+ <option>-p</> specifies the port number to listen on. <option>-n</>
+ specifies the node number within the GTM. This identifies the GTM,
+ especially when you run a GTM-Standby. The <option>-x</> option
+ specifies the value of the initial Global Transaction ID. We need to
+ specify this because <command>initdb</> consumes some of the
+ transaction IDs locally and the GTM must begin to provide global
+ transaction IDs greater than the consumed ones. A value of at least
+ 2000 is believed to be safe.
</para>
</step>
<step>
<para>
- Next, you should start GTM-Proxy on each server you're running
+ Next, you should start a GTM-Proxy on each server you're running a
Coordinator and/or Datanode like:
<programlisting>
gtm_proxy -h localhost -p 20002 -s localhost -t 20001 -i 1 -n 2 -D /usr/local/pgsql/gtm_proxy
</programlisting>
- This will start GTM-Proxy. <option>-h</> option is the host name
- or IP address which GTM-Proxy listens to. <option>-p</> option
- is the port number to listen to. <option>-s</>
- and <option>-t</> are IP address or the host
- name and the port number of GTM as specified
- above. <option>-i</> is the node number of GTM-Proxy beginning
- with 1. <option>-n</> is the number of worker thread of
- GTM-Proxy. Usually, 1 or 2 is advised. Then <option>-D</>
- option is the working directory of the GTM-Proxy.
+ This will start a GTM-Proxy. The <option>-h</> option is the host name
+ or IP address on which the GTM-Proxy listens. The <option>-p</> option
+ is the port number to listen on. <option>-s</> and <option>-t</> are
+ the IP address or host name and the port number of the GTM as specified
+ above. <option>-i</> is the node number of the GTM-Proxy, beginning
+ with 1. <option>-n</> is the number of worker threads of the
+ GTM-Proxy. Usually, 1 or 2 is advised. The <option>-D</> option is the
+ working directory of the GTM-Proxy.
</para>
<para>
- Please note that you should start GTM-Proxy on all the servers
- you run Coordinator/Datanode.
+ Please note that you should start a GTM-Proxy on all the servers where
+ you run a Coordinator or Datanode.
</para>
</step>
<step>
<para>
- Now you can start Datanode on each server like:
+ Now you can start a Datanode on each server like:
<programlisting>
postgres --datanode -D /usr/local/pgsql/Datanode
</programlisting>
- This will start the Datanode. <option>--datanode</>
- specifies <command>postgres</> to start as a
- Datanode. <option>-D</> specifies the data directory of the
- Datanode. You can specify other options of standalone <command>postgres</>.
+ This will start the Datanode. <option>--datanode</> tells
+ <command>postgres</> to start as a Datanode. <option>-D</> specifies the
+ data directory of the Datanode. You can specify other options of
+ standalone <command>postgres</>.
</para>
<para>
- Please note that you should issue <command>postgres</> command at
+ Please note that you should issue the <command>postgres</> command on
all the servers where you're running Datanodes.
</para>
</step>
<para>
- Finally, you can start Coordinator like:
+ Finally, you can start a Coordinator like:
<programlisting>
postgres --coordinator -D /usr/local/pgsql/Coordinator
</programlisting>
- This will start the Coordinator. <option>--coordinator</>
- specifies <command>postgres</> to start as a
- Coordinator. <option>-D</> specifies the data directory of the
- Coordinator. You can specify other options of standalone <command>postgres</>.
+ This will start the Coordinator. <option>--coordinator</> tells
+ <command>postgres</> to start as a Coordinator. <option>-D</> specifies
+ the data directory of the Coordinator. You can specify any other options
+ accepted by a standalone <command>postgres</>.
</para>
<para>
+ Please note that you should issue the <command>postgres</> command on
+ Please note that you should issue <command>postgres</> commands at
all the servers you're running Coordinators.
</para>
<step>
<para>
- The previous step should have told you how to
- start up the whole database cluster. Do so now. The command should look
- something like:
+ The previous step should have told you how to start up the whole database
+ cluster. Do so now. The command should look something like:
<programlisting>
postgres --datanode -D /usr/local/pgsql/Datanode
</programlisting>
- This will start the Datanode in the foreground. To put the Datanode
- in the background use something like:
+ This will start the Datanode in the foreground. To put the Datanode in the
+ background use something like:
<programlisting>
nohup postgres --datanode -D /usr/local/pgsql/data \
&lt;/dev/null &gt;&gt;server.log 2&gt;&amp;1 &lt;/dev/null &amp;
</programlisting>
- You can apply this to all the other components, GTM, GTM-Proxies,
- and Coordinators.
+ You can apply this to all the other components: the GTM, GTM-Proxies, and
+ Coordinators.
</para>
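The same nohup pattern can be sketched for the other components; the paths and log file names below are examples, not part of the original instructions:

```shell
# Hypothetical sketch: run the GTM and a Coordinator in the background,
# mirroring the Datanode example above. Paths and log names are examples.
nohup gtm -D /usr/local/pgsql/gtm \
    </dev/null >>gtm.log 2>&1 &
nohup postgres --coordinator -D /usr/local/pgsql/Coordinator \
    </dev/null >>coordinator.log 2>&1 &
```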
<para>
@@ -2435,9 +2415,9 @@ kill `cat /usr/local/pgsql/gtm-proxy/gtm_proxy.pid
<screen>
<userinput>psql -p 20004 testdb</>
</screen>
- Please do not forget to give the port number of one of the
- Coordinators. Then you are connected to a Coordinator listening
- to the port you specified.
+ Please do not forget to give the port number of one of the Coordinators.
+ Then you are connected to a Coordinator listening to the port you
+ specified.
</para>
</step>
</procedure>
@@ -2500,18 +2480,18 @@ kill `cat /usr/local/pgsql/gtm-proxy/gtm_proxy.pid
<title>Supported Platforms</title>
<para>
- <productname>Postgres-XL</> can be expected to work on these operating systems:
- Linux (all recent distributions), FreeBSD and Mac OS X. Other Unix-like systems may
- also work but are not currently being tested.
+ <productname>Postgres-XL</> can be expected to work on these operating
+ systems: Linux (all recent distributions), FreeBSD and Mac OS X. Other
+ Unix-like systems may also work but are not currently being tested.
</para>
<para>
If you have installation problems on a platform that is known
- to be supported according to recent build farm results, please report
- it to <email>[email protected]</email>. If you are interested
- in porting <productname>Postgres-XL</> to a new platform,
- <email>[email protected]</email> is the appropriate place
- to discuss that.
+ to be supported according to recent build farm results, please report it to
+ <email>[email protected]</email>. If you are
+ interested in porting <productname>Postgres-XL</> to a new platform,
+ <email>[email protected]</email> is the
+ appropriate place to discuss that.
</para>
</sect1>
diff --git a/doc/src/sgml/intro.sgml b/doc/src/sgml/intro.sgml
index e50547820e..5ca0c19d4e 100644
--- a/doc/src/sgml/intro.sgml
+++ b/doc/src/sgml/intro.sgml
@@ -3,21 +3,20 @@
<preface id="preface">
<title>Preface</title>
- <para>
- This book is the official documentation of
- <productname>Postgres-XL</productname>. It has been written by the
- <productname>Postgres-XL</productname> developers and other
- volunteers in parallel to the development of the
- <productname>Postgres-XL</productname> software. It describes all
- the functionality that the current version of
- <productname>Postgres-XL</productname> officially supports.
+ <para>
+ This book is the official documentation of
+ <productname>Postgres-XL</productname>. It has been written by the
+ <productname>Postgres-XL</productname> developers and other volunteers in
+ parallel to the development of the <productname>Postgres-XL</productname>
+ software. It describes all the functionality that the current version of
+ <productname>Postgres-XL</productname> officially supports.
</para>
<para>
<productname>Postgres-XL</> is essentially a collection of multiple
- <productname>PostgreSQL</> database to provide both read and write
- performance scalability. It also provides full-featured transaction
- consistency as <productname>PostgreSQL</> provides, at the exception
- of SSI which is incomplete.
+ <productname>PostgreSQL</> databases to provide both read and write
+ performance scalability. It also provides the same full-featured
+ transaction consistency as <productname>PostgreSQL</>, with the exception
+ of SSI, whose support is incomplete.
</para>
<para>
<productname>Postgres-XL</> inherits almost all major features from <productname>PostgreSQL</>.
@@ -171,70 +170,66 @@
<title>In short</title>
<para>
- Postgres-XL is an open source project to provide both write-scalability
- massively parallel processing transparently to PostgreSQL.
- It is a collection of tightly coupled database
- components which can be installed in more than one hardware or
- virtual machines.
- </para>
+ Postgres-XL is an open source project to provide both write-scalability
+ and massively parallel processing transparently to PostgreSQL. It is a
+ collection of tightly coupled database components which can be installed
+ on more than one system or virtual machine.
+ </para>
- <para>
- Write-scalable means Postgres-XL can be configured with as many
- database servers as you want and handle many more writes (updating
- SQL statements) than a single standalone database server could
- otherwise do. You can have more than one database
- server which all provide a single database view. Any
- database update from any database server is immediately visible to
- any other transactions running on different servers. Transparent
- means you do not necessarily need to worry about how your data is stored in
- more than one database servers internally.
- <footnote>
- <para>
- Of course, you should use the information about how tables are stored
- internally when you design the database physically to get most
- from Postgres-XL.
- </para>
- </footnote>
- </para>
+ <para>
+ Write-scalable means Postgres-XL can be configured with as many database
+ servers as you want and handle many more writes (updating SQL statements)
+ than a single standalone database server could otherwise do. You can have
+ more than one database server that provides a single database view. Any
+ database update from any database server is immediately visible to any
+ other transactions running on different servers. Transparent means you do
+ not necessarily need to worry about how your data is stored internally
+ across more than one database server.
+ <footnote>
+ <para>
+ Of course, you should use the information about how tables are stored
+ internally when you design the database physically to get the most from
+ Postgres-XL.
+ </para>
+ </footnote>
+ </para>
<para>
- You can configure Postgres-XL to run on more than one machines. It
- stores your data in a distributed way, that is, partitioned or
- replicated depending on what is chosen for each table.
+ You can configure Postgres-XL to run on more than one machine. It stores
+ your data in a distributed way, that is, partitioned or replicated
+ depending on what is chosen for each table.
<footnote>
<para>
- To distinguish from PostgreSQL's native partitioning, we refer to this
- as "distribution". In distributed database textbooks, this is often
- referred to as "horizontal fragment"), and more recently, sharding.
+ To distinguish from PostgreSQL's native partitioning, we refer to this as
+ "distribution". In distributed database textbooks, this is often
+ referred to as a "horizontal fragment", and more recently, sharding.
</para>
</footnote>
- When you issue queries, Postgres-XL determines where the target
- data is stored and dispatches corresponding plans to the servers containing the
+ When you issue queries, Postgres-XL determines where the target data is
+ stored and dispatches corresponding plans to the servers containing the
target data.
</para>
<para>
- In typical web systems, you can have as many web servers or
- application servers to handle your transactions. However, you
- cannot do this for a database server in general because all the
- changing data have to be visible to all the transactions. Unlike
- other database cluster solution, Postgres-XL provides this
- capability. You can install as many database servers as you
- like. Each database server provides uniform data view to your
- applications. Any database update from any server is immediately
- visible to applications connecting the database from other
- servers. This is on of the most important features of
- Postgres-XL.
+ In typical web systems, you can have as many web servers or application
+ servers as needed to handle your transactions. However, you cannot do this
+ for a database server in general, because all the changing data has to be
+ visible to all the transactions. Unlike other database cluster solutions,
+ Postgres-XL provides this capability. You can install as many database
+ servers as you like. Each database server provides a uniform view of the
+ data to your applications. Any database update from any server is
+ immediately visible to applications connecting to the database from other
+ servers. This is one of the most important features of Postgres-XL.
</para>
<para>
- The other significant feature of Postgres-XL is MPP parallelism.
- You can use Postgres-XL to handle workloads for Business Intelligence,
- Data Warehousing, or Big Data. In Postgres-XL, a plan is generated once
- on a coordinator, and sent down to the individual data nodes. This is
- then executed, with the data nodes communicating directly with one another,
- where each understands from where it is expected to receive any tuples
- that it needs to ship, and where it needs to send to others.
+ The other significant feature of Postgres-XL is MPP parallelism. You can
+ use Postgres-XL to handle workloads for Business Intelligence, Data
+ Warehousing, or Big Data. In Postgres-XL, a plan is generated once on a
+ coordinator, and sent down to the individual data nodes. This is then
+ executed, with the data nodes communicating directly with one another;
+ each node understands from where it is expected to receive the tuples it
+ needs, and to where it needs to ship tuples for others.
</para>
</sect2>
@@ -243,22 +238,22 @@
<title>Postgres-XL's Goal</title>
<para>
- The ultimate goal of Postgres-XL is to provide database
- scalability with ACID consistency across all types of database workloads.
- That is, Postgres-XL should provide the following features:
+ The ultimate goal of Postgres-XL is to provide database scalability with
+ ACID consistency across all types of database workloads. That is,
+ Postgres-XL should provide the following features:
<itemizedlist spacing="compact">
<listitem>
<para>
- Postgres-XL should provide multiple servers to accept transactions and statements
- from applications, which are known as "Coordinator" processes.
+ Postgres-XL should provide multiple servers to accept transactions and
+ statements from applications, which are known as "Coordinator"
+ processes.
</para>
</listitem>
<listitem>
<para>
- Any Coordinator should provide consistent database view to
- applications. Any updates from any Coordinator must be visible in
- real time manner as if such updates are done in single
- PostgreSQL server.
+ Any Coordinator should provide a consistent database view to
+ applications. Any updates from any Coordinator must be visible in real
+ time, as if they were made in a single PostgreSQL server.
</para>
</listitem>
<listitem>
@@ -270,12 +265,11 @@
<listitem>
<para>
Tables should be able to be stored in the database designated as
- replicated or distributed (known as fragments or
- partitions). Replication and distribution should be transparent
- to applications; that is, such replicated and distributed tables
- are seen as single tables and the location or number of copies of
- each record/tuple is managed by Postgres-XL and is not visible
- to applications.
+ replicated or distributed (known as fragments or partitions).
+ Replication and distribution should be transparent to applications; that
+ is, such replicated and distributed tables are seen as single tables and
+ the location or number of copies of each record/tuple is managed by
+ Postgres-XL and is not visible to applications.
</para>
</listitem>
<listitem>
@@ -303,9 +297,9 @@
</para>
<para>
- Postgres-XL is composed of three major components: the GTM
- (Global Transaction Manager), the Coordinator and the Datanode. Their
- features are given in the following sections.
+ Postgres-XL is composed of three major components: the GTM (Global
+ Transaction Manager), the Coordinator and the Datanode. Their features are
+ given in the following sections.
</para>
<sect3>
@@ -317,14 +311,13 @@
</para>
<para>
- As described later in this
- manual, <productname>PostgreSQL</productname>'s transaction
- management is based upon MVCC (Multi-Version Concurrency Control)
- technology. <productname>Postgres-XL</productname> extracts this
- technology into separate component as GTM so that
- any <productname>Postgres-XL</productname> component's
- transaction management is based upon single global status.
- Details will be described in <xref linkend="overview">.
+ As described later in this manual, <productname>PostgreSQL</productname>'s
+ transaction management is based upon MVCC (Multi-Version Concurrency
+ Control) technology. <productname>Postgres-XL</productname> extracts this
+ technology into a separate component, the GTM, so that any
+ <productname>Postgres-XL</productname> component's transaction management
+ is based upon a single global status. Details will be described in <xref
+ linkend="overview">.
</para>
</sect3>
@@ -332,15 +325,15 @@
<title>Coordinator</title>
<para>
- The Coordinator is an interface to the database for applications. It acts like
- conventional PostgreSQL backend process, however the Coordinator
+ The Coordinator is an interface to the database for applications. It acts
+ like a conventional PostgreSQL backend process; however, the Coordinator
does not store any actual data. The actual data is stored by the Datanodes
as described below. The Coordinator receives SQL statements, gets Global
- Transaction Id and Global Snapshots as needed, determines which
- Datanodes are involved and asks them to execute (a part of)
- statement. When issuing statement to Datanodes, it is
- associated with GXID and Global Snapshot so that Multi-version Concurrency Control
- (MVCC) properties extend cluster-wide.
+ Transaction Ids and Global Snapshots as needed, determines which Datanodes
+ are involved, and asks them to execute (a part of) the statement. When a
+ statement is issued to Datanodes, it is associated with a GXID and Global
+ Snapshot so that Multi-Version Concurrency Control (MVCC) properties
+ extend cluster-wide.
</para>
</sect3>
@@ -348,18 +341,16 @@
<title>Datanode</title>
<para>
- The Datanode actually stores user data. Tables may be distributed
- among Datanodes, or replicated to all the Datanodes.
- The Datanode does not have global view of the whole database, it
- just takes care of locally stored data. Incoming statement is
- examined by the Coordinator as described next, and subplans are
- made. These are then transferred to
- each Datanode involved together with GXID and Global Snapshot
- as needed. The datanode may receive request from various
- Coordinators in separate sessions. However, because each the
- transaction is identified
- uniquely and associated with consistent (global) snapshot, each Datanode
- can properly execute in its transaction and snapshot context.
+ The Datanode actually stores user data. Tables may be distributed among
+ Datanodes, or replicated to all the Datanodes. The Datanode does not have
+ a global view of the whole database, it just takes care of locally stored
+ data. Incoming statements are examined by the Coordinator as described
+ next, and subplans are made. These are then transferred to each Datanode
+ involved, together with a GXID and Global Snapshot as needed. The Datanode
+ may receive requests from various Coordinators in separate sessions.
+ However, because each transaction is identified uniquely and associated
+ with a consistent (global) snapshot, each Datanode can properly execute in
+ its transaction and snapshot context.
</para>
</sect3>
@@ -370,15 +361,13 @@
<title><productname>Postgres-XL</productname> Inherits From <productname>PostgreSQL</productname></title>
<para>
- <productname>Postgres-XL</productname> is an extension
- to <productname>PostgreSQL</productname> and inherits most of its
- features.
+ <productname>Postgres-XL</productname> is an extension to
+ <productname>PostgreSQL</productname> and inherits most of its features.
</para>
<para>
- It is an open-source descendant of
- <productname>PostgreSQL</productname> and its
- original Berkeley code. It supports a large part of the SQL
+ It is an open-source descendant of <productname>PostgreSQL</productname>
+ and its original Berkeley code. It supports a large part of the SQL
standard and offers many modern features:
<itemizedlist spacing="compact">
@@ -390,7 +379,8 @@
foreign keys
<footnote>
<para>
- <productname>Postgres-XL</productname>'s foreign key usage has some restrictions. For details, see <xref linkend="SQL-CREATETABLE">.
+ <productname>Postgres-XL</productname>'s foreign key usage has some
+ restrictions. For details, see <xref linkend="SQL-CREATETABLE">.
</para>
</footnote>
</simpara>
@@ -400,7 +390,8 @@
triggers
<footnote>
<para>
- <productname>Postgres-XL</productname> does not support trigger in the current version. This may be supported in the future releases.
+ <productname>Postgres-XL</productname> does not support triggers in
+ the current version. This may be supported in future releases.
</para>
</footnote>
</simpara>
@@ -409,15 +400,17 @@
<simpara>views</simpara>
</listitem>
<listitem>
- <simpara>transactional integrity, at the exception of SSI whose support is incomplete</simpara>
+ <simpara>transactional integrity, with the exception of SSI, whose
+ support is incomplete</simpara>
</listitem>
<listitem>
<simpara>multiversion concurrency control</simpara>
</listitem>
</itemizedlist>
- Also, similar to <productname>PostgreSQL</productname>, <productname>Postgres-XL</productname> can be extended by the
- user in many ways, for example by adding new
+ Also, similar to <productname>PostgreSQL</productname>,
+ <productname>Postgres-XL</productname> can be extended by the user in many
+ ways, for example by adding new
<itemizedlist spacing="compact">
<listitem>
@@ -442,10 +435,10 @@
</para>
<para>
- <productname>Postgres-XL</productname>
- can be used, modified, and distributed by anyone free of charge
- for any purpose, be it private, commercial, or academic, provided it
- adheres to the Mozilla Public License version 2.
+ <productname>Postgres-XL</productname> can be used, modified, and
+ distributed by anyone free of charge for any purpose, be it private,
+ commercial, or academic, provided it adheres to the Mozilla Public License
+ version 2.
</para>
</sect2>
diff --git a/doc/src/sgml/lobj.sgml b/doc/src/sgml/lobj.sgml
index 8042e9c54a..c15fa84d5b 100644
--- a/doc/src/sgml/lobj.sgml
+++ b/doc/src/sgml/lobj.sgml
@@ -8,8 +8,8 @@
<para>
Large objects are not supported by <productname>Postgres-XL</>.
- <productname>Postgres-XL</> does not provide consistent means
- to handle the large object as OID is inconsistent among cluster nodes.
+ <productname>Postgres-XL</> does not provide a consistent way to handle
+ large objects, as OIDs are inconsistent among cluster nodes.
</para>
<para>
diff --git a/doc/src/sgml/maintenance.sgml b/doc/src/sgml/maintenance.sgml
index 4659896d66..7650d6cf0c 100644
--- a/doc/src/sgml/maintenance.sgml
+++ b/doc/src/sgml/maintenance.sgml
@@ -72,8 +72,8 @@
</indexterm>
<para>
- Please note that this section describes the task of individual
- Coordinator and Datanode. It should be done for each of them.
+ Please note that this section describes the tasks of individual Coordinators
+ and Datanodes. These tasks should be performed on each of them.
</para>
<para>
@@ -96,8 +96,8 @@
<title>Vacuuming Basics</title>
<para>
- Please note that this section describes the task of individual
- Coordinator and Datanode. It should be done for each of them.
+ Please note that this section describes the tasks of individual Coordinators
+ and Datanodes. These tasks should be performed on each of them.
</para>
<para>
@@ -166,8 +166,8 @@
</indexterm>
<para>
- Please note that this section describes the task of individual
- Coordinator and Datanode. It should be done for each of them.
+ Please note that this section describes the tasks of individual
+ Coordinators and Datanodes. These tasks should be performed on each of them.
</para>
<para>
@@ -285,8 +285,8 @@
</indexterm>
<para>
- Please note that this section describes the task of individual
- Coordinator and Datanode. It should be done for each of them.
+ Please note that this section describes the tasks of individual
+ Coordinators and Datanodes. These tasks should be performed on each of them.
</para>
<para>
@@ -411,8 +411,8 @@
</indexterm>
<para>
- Please note that this section describes the task of individual
- Coordinator and Datanode. It should be done for each of them.
+ Please note that this section describes the tasks of individual
+ Coordinators and Datanodes. These tasks should be performed on each of them.
</para>
<para>
@@ -433,8 +433,8 @@
<para>
Please note that <productname>Postgres-XL</>'s transaction is
- called global transaction ID (<acronym>GXID</>) and is supplied by
- global transaction manager (<acronym>GTM</>).
+ identified by a Global Transaction ID (<acronym>GXID</>), supplied by the
+ Global Transaction Manager (<acronym>GTM</>).
</para>
<para>
@@ -713,8 +713,8 @@ HINT: Stop the postmaster and vacuum that database in single-user mode.
</indexterm>
<para>
- Please note that this section describes the task of individual
- Coordinator and Datanode. It should be done for each of them.
+ Please note that this section describes the tasks of individual
+ Coordinators and Datanodes. These tasks should be performed on each of them.
</para>
<para>
diff --git a/doc/src/sgml/manage-ag.sgml b/doc/src/sgml/manage-ag.sgml
index e229a0171f..1c676fe498 100644
--- a/doc/src/sgml/manage-ag.sgml
+++ b/doc/src/sgml/manage-ag.sgml
@@ -283,7 +283,7 @@ createdb -T template0 <replaceable>dbname</>
<para>
In <productname>Postgres-XL</>, template databases are held in
each Coordinator and Datanode. They are locally copied when new
- database is created.
+ databases are created.
</para>
</note>
diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml
index 4f65e98972..fe9b2f9f5c 100644
--- a/doc/src/sgml/mvcc.sgml
+++ b/doc/src/sgml/mvcc.sgml
@@ -18,12 +18,12 @@
</para>
<para>
- <productname>Postgres-XL</> inherited concurrency control
- from <productname>PostgreSQL</> and extended it globally to all of the
- Coordinators and Datanodes involved. Regardless of what
- Coordinator is connected to, all of the transactions
- in the <productname>Postgres-XL</> database cluster behaves in
- a consistent way as if they are running in single database.
+ <productname>Postgres-XL</> inherited concurrency control from
+ <productname>PostgreSQL</> and extended it globally to all of the
+ Coordinators and Datanodes involved. Regardless of which Coordinator an
+ application connects to, all of the transactions in the
+ <productname>Postgres-XL</> database cluster behave in a consistent way,
+ as if they were running in a single database.
</para>
<sect1 id="mvcc-intro">
@@ -1507,10 +1507,10 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
</para>
<para>
- Please note that <productname>Postgres-XL</>'s advisory lock is
- local to each Coordinator or Datanode. If you wish to acquire
- advisory locks on different Coordinator, you should do it manually
- using <type>EXECUTE DIRECT</> statement.
+ Please note that <productname>Postgres-XL</>'s advisory locks are local to
+ each Coordinator or Datanode. If you wish to acquire an advisory lock on
+ a different Coordinator, you should do it manually using the <type>EXECUTE
+ DIRECT</> statement.
</para>
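As a sketch of that manual approach, an advisory lock could be taken on another Coordinator through EXECUTE DIRECT; the node name, port, database, and lock key below are invented for illustration:

```shell
# Hypothetical example: acquire advisory lock 12345 on the Coordinator
# registered as coord2, from a psql session on another Coordinator.
psql -p 20004 testdb -c \
    "EXECUTE DIRECT ON (coord2) 'SELECT pg_advisory_lock(12345)'"
```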
<para>
@@ -1783,14 +1783,13 @@ SELECT pg_advisory_lock(q.id) FROM
</para>
<para>
- In conventional replication clusters, you can run read
- transactions concurrently in multiple standby, or slave
- servers. Replication servers provide read scalability. However,
- you cannot issue write transactions to standby servers because
- they don't have means to propagate changes safely. They cannot
- maintain consistent view of database to applications for write
- operations, unless you issue write transactions to single master
- server.
+ In conventional replication clusters, you can run read transactions
+ concurrently on multiple standby, or slave, servers. Replication servers
+ provide read scalability. However, you cannot issue write transactions to
+ standby servers because they don't have the means to propagate changes
+ safely. They cannot maintain a consistent view of the database for
+ applications performing write operations, unless you issue all write
+ transactions to a single master server.
</para>
<para>
@@ -1805,30 +1804,27 @@ SELECT pg_advisory_lock(q.id) FROM
</para>
<para>
- In <productname>Postgres-XL</productname>, any Coordinator can
- accept any transaction, regardless whether it is read only or
- read/write. Transaction integrity is enforced by GTM (Global
- Transaction Manager). Because we have multiple Coordinators, each
- of them can handle incoming transactions and statements in
- parallel.
+ In <productname>Postgres-XL</productname>, any Coordinator can accept any
+ transaction, regardless of whether it is read only or read/write. Transaction
+ integrity is enforced by the GTM (Global Transaction Manager). Because we
+ have multiple Coordinators, each of them can handle incoming transactions
+ and statements in parallel.
</para>
<para>
- Analyzed statements are converted into internal plans, which
- include SQL statements targeted to Datanodes. Plans are sent on to
- each target Datanode, handled, and the result is sent back to
- the originating Coordinator where all the results from target
- Datanodes will be combined into the result to be sent back to the
- application.
+ Analyzed statements are converted into internal plans, which include SQL
+ statements targeted to Datanodes. Plans are sent on to each target
+ Datanode, executed, and the result is sent back to the originating
+ Coordinator where all the results from target Datanodes will be combined
+ into the results to be sent back to the application.
</para>
<para>
- Each table can be distributed or replicated as described
- in <xref linkend="intro-whatis">. If you design each table's
- distribution carefully, most of the statements may end up targeting
- just one Datanode, which is most effecient. In this way, Coordinators and Datanodes
- runs transactions in parallel which scales out both read and write
- operations.
+ Each table can be distributed or replicated as described in <xref
+ linkend="intro-whatis">. If you design each table's distribution
+ carefully, most of the statements may end up targeting just one Datanode,
+ which is most efficient. In this way, Coordinators and Datanodes run
+ transactions in parallel, which scales out both read and write operations.
</para>
<para>
diff --git a/doc/src/sgml/oid2name.sgml b/doc/src/sgml/oid2name.sgml
index 8843ea5c4f..6bcc3bf78b 100644
--- a/doc/src/sgml/oid2name.sgml
+++ b/doc/src/sgml/oid2name.sgml
@@ -44,9 +44,9 @@
</note>
<para>
- Please note that you can issue this command to each Datanode or
- Coordinator. The result is the information local to Datanode or
- Coordinator specified.
+ Please note that you can issue this command to each Datanode or Coordinator.
+ The result is the information local to the Datanode or Coordinator
+ specified.
</para>
<para>
diff --git a/doc/src/sgml/pageinspect.sgml b/doc/src/sgml/pageinspect.sgml
index 45b2762ef4..df45a3fa7b 100644
--- a/doc/src/sgml/pageinspect.sgml
+++ b/doc/src/sgml/pageinspect.sgml
@@ -14,9 +14,9 @@
</para>
<para>
- Functions of this module returns information about connecting
- Coordinators locally. To get information of a Datanode, you can
- use EXECUTE DIRECT from a Coordinator.
+ Functions of this module return information local to the Coordinator that
+ the session is connected to. To get information from a specific Datanode,
+ you can use EXECUTE DIRECT from a Coordinator.
</para>
<sect2>
diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index e947dca296..2be518dff9 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -1097,14 +1097,13 @@ SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;
</para>
<para>
- <productname>Postgres-XL</> stores table data in a distributed or
- replicated fashion. To leverage this, the planner tries to find
- the best way to use as much of Datanode power as possible. If the
- equi-join is done by distribution columns and they share the
- distribution method (hash/modulo), the Coordinator can tell
- the Datanode to perform join.
- If not, it may tell Datanodes to ship rows to other Datanodes
- and expect rows from other Datanodes to be shipped to it.
+ <productname>Postgres-XL</> stores table data in a distributed or replicated
+ fashion. To leverage this, the planner tries to find the best way to use as
+ much of a Datanode's power as possible. If the equi-join is done by
+ distribution columns and they share the distribution method (hash/modulo),
+ the Coordinator can tell the Datanode to perform a join. If not, it may
+ tell the Datanodes to ship rows to other Datanodes and expect rows from
+ other Datanodes to be shipped to it.
</para>
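To illustrate the first case, a hedged sketch: two tables hash-distributed on their join column let each Datanode perform the equi-join locally (the table names, columns, and port are invented):

```shell
# Hypothetical sketch: both tables are distributed by hash on the join key,
# so an equi-join on that key can run entirely on each Datanode.
psql -p 20004 testdb <<'SQL'
CREATE TABLE orders (id int, total numeric) DISTRIBUTE BY HASH(id);
CREATE TABLE order_lines (order_id int, qty int) DISTRIBUTE BY HASH(order_id);
EXPLAIN SELECT * FROM orders o JOIN order_lines l ON o.id = l.order_id;
SQL
```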
<para>
@@ -1384,8 +1383,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
</para>
<para>
- In <productname>Postgres-XL</>, only the distribution column can have foreign key
- constraint.
+ In <productname>Postgres-XL</>, only the distribution column can have a
+ foreign key constraint.
</para>
</sect2>
diff --git a/doc/src/sgml/pgfreespacemap.sgml b/doc/src/sgml/pgfreespacemap.sgml
index 52b8f9459d..a88cd52678 100644
--- a/doc/src/sgml/pgfreespacemap.sgml
+++ b/doc/src/sgml/pgfreespacemap.sgml
@@ -21,10 +21,9 @@
</para>
<para>
- Functions of this module return information from the
- Coordinator that the session is currently connect to.
- To get information about a Datanode, you can
- use <command>EXECUTE DIRECT</command>.
+ Functions of this module return information from the Coordinator that the
+ session is currently connected to. To get information about a Datanode, you
+ can use <command>EXECUTE DIRECT</command>.
</para>
<sect2>
diff --git a/doc/src/sgml/pgstattuple.sgml b/doc/src/sgml/pgstattuple.sgml
index a423baceb9..e438b68386 100644
--- a/doc/src/sgml/pgstattuple.sgml
+++ b/doc/src/sgml/pgstattuple.sgml
@@ -13,10 +13,9 @@
</para>
<para>
- Functions of this module return information from the
- Coordinator that the session is currently connect to.
- To get information about a Datanode, you can
- use <command>EXECUTE DIRECT</command>.
+ Functions of this module return information from the Coordinator that the
+ session is currently connected to. To get information about a Datanode, you
+ can use <command>EXECUTE DIRECT</command>.
</para>
<sect2>
diff --git a/doc/src/sgml/pgxc_ctl-ref.sgml b/doc/src/sgml/pgxc_ctl-ref.sgml
index b47905ff56..9a2f1a9c7c 100644
--- a/doc/src/sgml/pgxc_ctl-ref.sgml
+++ b/doc/src/sgml/pgxc_ctl-ref.sgml
@@ -26,15 +26,16 @@
<para>
You should build pgxc_ctl using your Postgres-XL build environment.
- <application>pgxc_ctl</application> source code comes with Postgres-XL source code tarball.
- The latest version of the source code will be available at its home repository,
+ <application>pgxc_ctl</application> source code comes with the Postgres-XL
+ source code tarball. The latest version of the source code will be
+ available at its home repository,
<programlisting>
http:// pgxc_ctl
</programlisting>
- If you would like to use the latest version from pgxc_ctl home repository,
- get the source code tarball and expand it in the source's contrib
- directory of your Postgres-XL build environment.
- If you are using pgxc_ctl in Postgres-XL tarball, you don't have to do this.
+ If you would like to use the latest version from the pgxc_ctl home
+ repository, get the source code tarball and expand it in the source's
+ contrib directory of your Postgres-XL build environment. If you are using
+ pgxc_ctl from the Postgres-XL tarball, you don't have to do this.
</para>
<para>
@@ -60,18 +61,17 @@ $ cd contrib/pgxc_ctl
$ make
$ make install
</programlisting>
- The <application>pgxc_ctl</application> binary will be installed in the
- same directory as <application>Postgres-XL</application> binaries.
+ The <application>pgxc_ctl</application> binary will be installed in the same
+ directory as the <application>Postgres-XL</application> binaries.
</para>
<para>
- <application>Postgres-XL</application> consists of many components
- (or called "nodes") running in various physical or virtual
- machines.
- Because pgxc_ctl relies on ssh connections between the machines where
- <application>pgxc_ctl</application> and other nodes are running,
- you should setup ssh-agent authentication to avoid typing password
- each time pgcx_ctl issues ssh.
+ <application>Postgres-XL</application> consists of many components (or
+ "nodes") running in various physical or virtual machines. Because pgxc_ctl
+ relies on ssh connections between the machines where
+ <application>pgxc_ctl</application> and other nodes are running, you should
+ set up ssh-agent authentication to avoid typing a password each time
+ pgxc_ctl issues ssh.
</para>
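 <para>
 The exact key setup depends on your environment; the following is only a
 sketch of one common way to prepare password-less ssh from the pgxc_ctl
 host to the node servers (the user and host names are examples):

```
$ ssh-keygen -t rsa                # generate a key pair once
$ eval $(ssh-agent)                # start the agent for this shell
$ ssh-add ~/.ssh/id_rsa            # cache the passphrase with the agent
$ ssh-copy-id postgres@node06      # repeat for every node server
$ ssh postgres@node06 true         # should now succeed without a password
```
 </para>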
</sect2>
@@ -109,14 +109,12 @@ $ make install
<title>pgxc_ctl configuration file</title>
<para>
- pgxc_ctl uses configuration file.
- The default name and the location is
- <literal>$PGXC_CTL_HOME/pgxc_ctl.conf</literal>.
- When you change Postgres-XL cluster configuration using pgxc_ctl
- commands, this file will be updated.
- Depending upon your configuration,
- <application>pgxc_ctl</application> will back up this file
- according to your configuration.
+ pgxc_ctl uses a configuration file. The default name and location is
+ <literal>$PGXC_CTL_HOME/pgxc_ctl.conf</literal>. When you change
+ Postgres-XL cluster configuration using pgxc_ctl commands, this file will be
+ updated. Depending upon your configuration,
+ <application>pgxc_ctl</application> will also back up this file.
</para>
<para>
@@ -132,13 +130,12 @@ $ make install
<title>pgxc_ctl initialization file</title>
<para>
- You can specify your preferred parameters of pgxc_ctl behavior.
- You can specify parameters in <literal>/etc/pgxc_ctl</literal>
- and/or <literal>$HOME/.pgxc_ctl</literal> file.
- Setups in <literal>$HOME/.pgxc_ctl</literal> have higher priority
- so you can specify system-wide setups at
- <literal>/etc/pgxc_ctl</literal> and then your personal preferences
- in <literal>$HOME/.pgxc_ctl</literal>.
+ You can specify your preferred parameters of pgxc_ctl behavior. You can
+ specify parameters in <literal>/etc/pgxc_ctl</literal> and/or
+ <literal>$HOME/.pgxc_ctl</literal>. Setups in
+ <literal>$HOME/.pgxc_ctl</literal> have higher priority so you can specify
+ system-wide setups at <literal>/etc/pgxc_ctl</literal> and then your
+ personal preferences in <literal>$HOME/.pgxc_ctl</literal>.
</para>
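 <para>
 The precedence can be sketched as follows. This is an illustrative
 stand-in for pgxc_ctl, not its actual reader: a setting defined in both
 files takes its value from the later, personal file (the
 <literal>pgxcOwner</literal> value here is an example):

```shell
#!/bin/sh
# Illustrative sketch only: the later file overrides the earlier one,
# mirroring /etc/pgxc_ctl read first, then $HOME/.pgxc_ctl.
demo=$(mktemp -d)
printf 'pgxcOwner postgres\nconfigFile pgxc_ctl.conf\n' > "$demo/etc_pgxc_ctl"
printf 'pgxcOwner myuser\n' > "$demo/home_pgxc_ctl"

lookup() {                 # the last definition of a name wins
    awk -v name="$1" '$1 == name { val = $2 } END { print val }' \
        "$demo/etc_pgxc_ctl" "$demo/home_pgxc_ctl"
}

lookup pgxcOwner           # personal setting wins: myuser
lookup configFile          # system-wide setting still visible
```
 </para>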
<para>
@@ -183,10 +180,9 @@ PGXC$ pwd
/home/postgres-xl/my_pgxc_ctl
PGXC$
</programlisting>
- You can specify your pgxc_ctl_home as environment variable
- <literal>PGXC_CTL_HOME</literal>, or you can specify this as
- variable <literal>pgxc_ctl_home</literal> in your initialization
- files.
+ You can specify your pgxc_ctl_home with the environment variable
+ <literal>PGXC_CTL_HOME</literal>, or you can specify this as variable
+ <literal>pgxc_ctl_home</literal> in your initialization files.
</para>
<para>
@@ -197,10 +193,9 @@ PGXC$
<para>
Type prepare or prepare config to get a configuration template file
- <filename>pgxc_ctl.conf</filename> at
- <literal>$PGXC_CTL_HOME</literal>.
- You may add file name as an option to get configuration template in
- your favorite file. For example:
+ <filename>pgxc_ctl.conf</filename> at <literal>$PGXC_CTL_HOME</literal>.
+ You may add a file name as an option to get a configuration template in your
+ favorite file. For example:
<programlisting>
PGXC$ prepare
PGXC$
@@ -267,8 +262,8 @@ gtmProxyServers=(node06 node07 node08 node09) # Specify none if you dont'
gtmProxyPorts=(20001 20001 20001 20001) # Not used if it is not configured.
gtmProxyDirs=($gtmProxyDir $gtmProxyDir $gtmProxyDir $gtmProxyDir) # Not used if it is not configured.
</programlisting>
- This section specifies the gtm proxy configuration.
- We have four <literal>gtm proxies</literal> in each of the server.
+ This section specifies the GTM proxy configuration.
+ We have four <literal>GTM proxies</literal>, one on each of the servers.
They share working directory path and is specified as a shortcut
which is referred to later.
</para>
@@ -308,10 +303,9 @@ pgxc [options ... ] [pgxc_command]
<term><option>--configuration <replaceable class="parameter">configuration_file</replaceable></></term>
<listitem>
<para>
- Specifies configuration file.
- The default is <filename>pgxc_ctl_conf</filename>, or the value
- of <literal>configFile</literal> option
- found in the initialization file.
+ Specifies the configuration file. The default is
+ <filename>pgxc_ctl.conf</filename>, or the value of the
+ <literal>configFile</literal> option found in the initialization file.
</para>
</listitem>
</varlistentry>
@@ -431,9 +425,9 @@ name value [ value ... ] # comment
</programlisting>
</para>
<para>
- Blank lines or lines beginning with '#' are simply ignored. If
- you'd like to include space or tab in the variable name, enclose
- the name with <literal>'...'</literal> or <literal>"..."</literal>.
+ Blank lines or lines beginning with '#' are simply ignored. If you'd like
+ to include spaces or tabs in the variable name, enclose the name with
+ <literal>'...'</literal> or <literal>"..."</literal>.
</para>
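 <para>
 A hypothetical initialization file fragment, using parameter names that
 appear elsewhere in this reference (the values are examples only):

```
# Hypothetical $HOME/.pgxc_ctl fragment
pgxc_ctl_home /home/postgres/pgxc_ctl   # same role as PGXC_CTL_HOME
configFile pgxc_ctl.conf                # default configuration file name
```
 </para>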
<para>
Please note that this file is not a bash script.
@@ -604,13 +598,12 @@ $
<term>GTM</term>
<listitem>
<para>
- GTM stands for global transaction manager.
- You must have one in the cluster.
- For production, GTM should be configured in a separate server.
- GTM can have a slave which can fail over when GTM fails.
- GTM slave can be installed (hopefully) in a separate server but
- can be installed in one of the others where you have gtm_proxy,
- coordinators and datanodes.
+ GTM stands for global transaction manager. You must have one in the
+ cluster. For production, the GTM should be configured on a separate
+ server. The GTM can have a slave which can be failed over to. The GTM
+ slave should preferably be installed on a separate server, but it can
+ also be installed on one of the servers where you have a gtm_proxy,
+ Coordinators and Datanodes.
</para>
</listitem>
</varlistentry>
@@ -619,10 +612,9 @@ $
<term>GTM-Proxy</term>
<listitem>
<para>
- GTM proxy reduces the communication load between coordinator and
- GTM and helps GTM failover.
- You should configure one gtm_proxy in each server where you have
- coordinator or datanode as described below.
+ The GTM proxy reduces the communication load between the Coordinators
+ and the GTM, and helps with GTM failover. You should configure one
+ gtm_proxy on each server where you have a Coordinator or Datanode as
+ described below.
</para>
</listitem>
</varlistentry>
@@ -631,13 +623,11 @@ $
<term>Coordinator</term>
<listitem>
<para>
- Coordinator handles application connection and statement
- handling.
- For simplicity and load balancing, it is a good idea to
- install coordinator to each server other than where GTM (and GTM
- slave) are configured. Coordinator can have a slave. Slave can
- be configured in one of the servers where other coordinator
- master is installed.
+ The Coordinator handles application connections and statement
+ processing. For simplicity and load balancing, it is a good idea to
+ install a Coordinator on each server other than where the GTM (and GTM
+ slave) are configured. Each Coordinator can have a slave, which can be
+ configured on one of the servers where another Coordinator master is
+ installed.
</para>
</listitem>
</varlistentry>
@@ -646,9 +636,9 @@ $
<term>Datanode</term>
<listitem>
<para>
- Datanode stores the data and run local SQL statement supplied by
- a coordinator. Datanode should also be configured in all the
- servers except those for GTM (and GTM slave).
+ A Datanode stores the data and runs local SQL statements supplied by a
+ Coordinator. A Datanode should also be configured on all the servers
+ except those for the GTM (and GTM slave).
</para>
</listitem>
</varlistentry>
@@ -664,7 +654,7 @@ $
<itemizedlist>
<listitem>
<para>
- hostname, IP address or host name you can refer through DNS,
+ hostname: an IP address or a host name you can refer to through DNS,
<filename>/etc/hosts</filename> or by equivalent means.
</para>
</listitem>
@@ -684,19 +674,19 @@ $
</para>
<para>
- Also, coordinator and datanode needs additional port for connection pooling to
- other nodes.
+ Also, Coordinators and Datanodes need an additional port for connection
+ pooling to other nodes.
</para>
<para>
In the same host, you must not assign the same port and the same
- work directory to nodes.
+ work directory between nodes.
<application>pgxc_ctl</application> checks this.
</para>
<para>
- When assign the port, you should be careful not to assign already
- assigned one to other service.
+ When assigning ports, you should be careful not to assign one that is
+ already used by another service.
</para>
<para>
@@ -704,8 +694,7 @@ $
<itemizedlist>
<listitem>
<para>
- You should not assign the same port to GTM master and GTM
- slave.
+ You should not assign the same port to the GTM master and GTM slave.
</para>
</listitem>
@@ -713,20 +702,18 @@ $
</para>
<para>
- GTM, coordinators and datanodes can configure their slave.
- <application>pgxc_ctl</application> does not support cascaded
- slave or more than one slave for coordinator and datanode.
- It is not a restriction of <application>postgres-XL</application>,
- it is a restriction of <application>pgxc_ctl</application>.
+ The GTM, Coordinators and Datanodes can each have a slave configured.
+ <application>pgxc_ctl</application> does not support cascaded slaves or
+ more than one slave per Coordinator and Datanode. This is not a
+ restriction of <application>Postgres-XL</application>; it is a
+ restriction of <application>pgxc_ctl</application>.
</para>
<para>
- At present, coordinator and datanode slaves are connected using
- synchronous replication in pgxc_ctl.
- This is not a <application>Posgres-XL</application> restriction
- either.
- In the future, asynchronous, cascaded and multiple slaves may be
- supported.
+ At present, Coordinator and Datanode slaves are connected using synchronous
+ replication in pgxc_ctl. This is not a
+ <application>Postgres-XL</application> restriction either. In the future,
+ asynchronous, cascaded and multiple slaves may be supported.
</para>
</sect3>
@@ -738,11 +725,10 @@ $
<para>
As described in the previous section, you can configure your
<application>Postgres-XL</application> cluster by editing
- <filename>pgxc_ctl.conf</filename> or other configuration files
- manually.
- But editing the file from the scratch can be a mess.
- It is much better to have separate configuration file.
- You can create configuration file template by typing
+ <filename>pgxc_ctl.conf</filename> or other configuration files manually.
+ But editing the file from scratch can be a mess. It is much better to
+ have a separate configuration file. You can create a configuration file
+ template by typing
<programlisting>
PGXC$ prepare config
@@ -765,17 +751,16 @@ PGXC$ prepare config my_config.conf
</programlisting>
</para>
<para>
- Then you can edit this file to configure you
- <application>postgres-XL</application> cluster.
- This file is actually a bash script file defining many variables
- to define the cluster configuration.
- With template values and comments, it will be easy to understand
- what they mean.
+ Then you can edit this file to configure your
+ <application>Postgres-XL</application> cluster. This file is actually a
+ bash script defining many variables that describe the cluster
+ configuration. With template values and comments, it will be easy to
+ understand what they mean.
</para>
<para>
- You can also generate a minimal configuration file, god enough to test
-<application>Postgres-XL</application> on the localhost by specifying
-<literal>minimal</literal>. For example:
+ You can also generate a minimal configuration file, good enough to test
+ <application>Postgres-XL</application> on the localhost by specifying
+ <literal>minimal</literal>. For example:
<programlisting>
PGXC$ prepare config minimal
PGXC$ prepare config minimal my_minimal_config.conf
@@ -830,9 +815,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
You need full access to this directory.
</para>
<para>
- This parameter was left here to make it compatible with
- bash-version.
- It is recommended to configure this parameter in
+ This parameter was left here to make it compatible with the
+ bash-version. It is recommended to configure this parameter in the
initialization file.
</para>
</listitem>
@@ -842,15 +826,12 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>pgxcInstallDir</option></term>
<listitem>
<para>
- <application>Postgres-XL</application> should at least be
- installed in the server you are running
- <application>pgxc_ctl</application>.
- This variable specifies this installation directory, as you
- specify with <option>--prefix=</option> option of configure
- command when you build it.
- All the installation will be copied to the same directory at
- each servers and you should give appropriate privilege to this
- directory in advance.
+ <application>Postgres-XL</application> should at least be installed on
+ the server where you are running <application>pgxc_ctl</application>.
+ This variable specifies the installation directory, as you specify with
+ the <option>--prefix=</option> option to the configure script. All of
+ the installation will be copied to the same directory on each server,
+ and you should give appropriate privileges to this directory in advance.
</para>
</listitem>
</varlistentry>
@@ -888,9 +869,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
all the servers.
</para>
<para>
- This parameter was left here to make it compatible with
- bash-version.
- It is recommended to configure this parameter in
+ This parameter was left here to make it compatible with the
+ bash-version. It is recommended to configure this parameter in the
initialization file.
</para>
</listitem>
@@ -909,7 +889,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>gtmName</option></term>
<listitem>
<para>
- Node name of GTM master.
+ Node name of the GTM master.
</para>
</listitem>
</varlistentry>
@@ -918,10 +898,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>gtmExtraConfig</option></term>
<listitem>
<para>
- If you'd like to add specific configuration to both GTM master
- and slave, specify the file which contains such lines for
- gtm.config file.
- Otherwise, specify <literal>none</literal>.
+ If you'd like to add specific configuration parameters to both the GTM
+ master and slave, specify the file that contains such lines for the
+ gtm.config file. Otherwise, specify <literal>none</literal>.
</para>
</listitem>
</varlistentry>
@@ -957,10 +936,10 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>gtmMasterSpecificExtraConfig</option></term>
<listitem>
<para>
- If you'd like to add specific configuration only to GTM
- master, specify the file which contains such lines for
- <filename>gtm.config</filename> file.
- Otherwise, specify <literal>none</literal>
+ If you'd like to add specific configuration parameters only to the GTM
+ master, specify the file which contains such lines for the
+ <filename>gtm.config</filename> file. Otherwise, specify
+ <literal>none</literal>.
</para>
</listitem>
</varlistentry>
@@ -969,9 +948,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>gtmSlave</option></term>
<listitem>
<para>
- Option to enable GTM slave.
- Specify <literal>y</literal> to enable, <literal>n</literal>
- otherwise.
+ Option to enable a GTM slave. Specify <literal>y</literal> to enable,
+ <literal>n</literal> otherwise.
</para>
</listitem>
</varlistentry>
@@ -1007,7 +985,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>gtmSlaveServer</option></term>
<listitem>
<para>
- Host name where GTM slave runs. Effective only when GTM slave is effective.
+ Host name where the GTM slave runs. Effective only when a GTM slave
+ is configured.
</para>
</listitem>
</varlistentry>
@@ -1016,10 +995,10 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>gtmSlaveSpecificExtraConfig</option></term>
<listitem>
<para>
- If you'd like to add specific configuration only to GTM slave,
- specify the file which contains such lines for
- <filename>gtm.config</filename> file.
- Otherwise, specify <literal>none</literal>.
+ If you'd like to add specific configuration parameters only to the GTM
+ slave, specify the file which contains such lines for the
+ <filename>gtm.config</filename> file. Otherwise, specify
+ <literal>none</literal>.
</para>
</listitem>
</varlistentry>
@@ -1038,12 +1017,11 @@ PGXC$ prepare config minimal my_minimal_config.conf
<listitem>
<para>
This specifies if you configure any GTM proxy in your
- <application>Postgres-XL</application> cluster.
- Specify the value <literal>y</literal> if you configure gtm
- proxy in your <application>Postgres-XL</application> cluster.
- Otherwise specify <literal>n</literal>.
- If you specify <literal>n</literal>, all the other parameters
- for gtm_proxy will be ignored.
+ <application>Postgres-XL</application> cluster. Specify the value
+ <literal>y</literal> if you configure the GTM proxy in your
+ <application>Postgres-XL</application> cluster. Otherwise specify
+ <literal>n</literal>. If you specify <literal>n</literal>, all the
+ other parameters for gtm_proxy will be ignored.
</para>
</listitem>
</varlistentry>
@@ -1052,10 +1030,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>gtmProxyDir</option></term>
<listitem>
<para>
- This is a shortcut used to assign same work directory to all
- the GTM proxies.
- You don't have to worry about it when you specify these values
- manually.
+ This is a shortcut used to assign the same work directory to all GTM
+ proxies. You don't have to worry about it when you specify these
+ values manually.
</para>
</listitem>
</varlistentry>
@@ -1073,10 +1050,10 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>gtmPxyExtraConfig</option></term>
<listitem>
<para>
- If you'd like to add configuration value to all the GTM proxy,
- specify the file name which contains such lines for
- <filename>gtm_proxy.conf</filename>.
- Otherwise specify <literal>none</literal>.
+ If you'd like to add configuration values to all GTM proxies, specify
+ the file name which contains such lines for the
+ <filename>gtm_proxy.conf</filename>. Otherwise specify
+ <literal>none</literal>.
</para>
</listitem>
</varlistentry>
@@ -1087,8 +1064,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<para>
Specify unique name for each GTM proxy.
This is an array.
- In the template, we have four servers for coordinator and
- datanode and we have four gtm proxy as well.
+ In the template, we have four servers for Coordinators and
+ Datanodes and we have four GTM proxies as well.
</para>
</listitem>
</varlistentry>
@@ -1137,8 +1114,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordArchLogDir</option></term>
<listitem>
<para>
- Shortcut to assign the same WAL archive directory to all the
- coordinator slaves.
+ Shortcut to assign the same WAL archive directory to all
+ Coordinator slaves.
Not needed if you specify these manually.
</para>
</listitem>
@@ -1148,10 +1125,10 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordArchLogDirs</option></term>
<listitem>
<para>
- Array of WAL archive log directory for each datanode slave.
- If you don't configure coordinator slaves and specify
- coordSlave variable value to <literal>n</literal>, you don't
- have to worry about this variable.
+ Array of WAL archive log directories for each Coordinator slave. If
+ you don't configure Coordinator slaves and set the coordSlave variable
+ to <literal>n</literal>, you don't have to worry about this
+ variable.
</para>
</listitem>
</varlistentry>
@@ -1160,10 +1137,10 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordExtraConfig</option></term>
<listitem>
<para>
- If you would like to add extra configuration value for all the
+ If you would like to add extra configuration values for all
coordinators, specify the file name containing such lines for
- <filename>postgresql.conf</filename>.
- Specify <literal>none</literal> otherwise.
+ <filename>postgresql.conf</filename>. Specify <literal>none</literal>
+ otherwise.
</para>
</listitem>
</varlistentry>
@@ -1172,10 +1149,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordExtraPgHba</option></term>
<listitem>
<para>
- File name which contains entries of
- <filename>pg_hba.conf</filename> file for all the
- coordinators.
- Specify <literal>none</literal> if you do not have such file.
+ File name which contains entries for the
+ <filename>pg_hba.conf</filename> file for all Coordinators. Specify
+ <literal>none</literal> if you do not have such a file.
</para>
</listitem>
</varlistentry>
@@ -1184,8 +1160,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordMasterDir</option></term>
<listitem>
<para>
- Shortcut to assign the same work directory to all the
- coordinator masters.
+ Shortcut to assign the same work directory to all coordinator masters.
Not needed if you specify these manually.
</para>
</listitem>
@@ -1236,8 +1211,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordNames</option></term>
<listitem>
<para>
- Array to specify coordinator names.
- Coordinator slave uses the same name as the master.
+ Array to specify Coordinator names. A Coordinator slave uses the same
+ name as the master.
</para>
</listitem>
</varlistentry>
@@ -1267,9 +1242,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>poolerPorts</option></term>
<listitem>
<para>
- Array of the port number for each pooler. Pooler takes
- care of the connection between coordinator and datanode and
- needs separate port.
+ Array of the port number for each pooler. The pooler takes care of the
+ connection between a Coordinator and Datanode and needs a separate
+ port.
</para>
</listitem>
</varlistentry>
@@ -1278,11 +1253,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordSlave</option></term>
<listitem>
<para>
- Specify <literal>y</literal> if you configure coordinator
- slave.
- <literal>n</literal> otherwise. If you specify
- <literal>n</literal>, then all the other variables for
- coordinator slave will be ignored.
+ Specify <literal>y</literal> if you configure a Coordinator slave.
+ <literal>n</literal> otherwise. If you specify <literal>n</literal>,
+ then all the other variables for coordinator slave will be ignored.
</para>
</listitem>
</varlistentry>
@@ -1291,9 +1264,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordSlaveDir</option></term>
<listitem>
<para>
- Shortcut to assign the same work directory to all the
- coordinator slaves. Not needed if you specify these
- manually.
+ Shortcut to assign the same work directory to all the Coordinator
+ slaves. Not needed if you specify these manually.
</para>
</listitem>
</varlistentry>
@@ -1302,7 +1274,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordSlaveDirs</option></term>
<listitem>
<para>
- Array of work directory for each coordinator slaves.
+ Array of work directories for each Coordinator slave.
</para>
</listitem>
</varlistentry>
@@ -1311,9 +1283,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordSlaveServers</option></term>
<listitem>
<para>
- Array of the hostname where slave of each
- coordinator runs. Specify <literal>none</literal> if you don't configure the
- slave for specific coordinator.
+ Array of the hostname where the slave of each Coordinator runs.
+ Specify <literal>none</literal> if you don't configure a slave for a
+ specific Coordinator.
</para>
</listitem>
</varlistentry>
@@ -1331,9 +1303,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordSlavePoolerPorts</option></term>
<listitem>
<para>
- Array of the port number for each pooler. Pooler takes
- care of the connection between coordinator and datanode and
- needs separate port.
+ Array of the port number for each pooler. The pooler takes care of the
+ connection between a Coordinator and Datanode and needs a separate
+ port.
</para>
</listitem>
</varlistentry>
@@ -1342,9 +1314,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>coordSpecificExtraConfig</option></term>
<listitem>
<para>
- Array of the filename which contains extra
- configuration values for each coordinator. Specify <literal>none</literal>
- if you don't have such file.
+ Array of the filename which contains extra configuration values for
+ each Coordinator. Specify <literal>none</literal> if you don't have
+ such a file.
</para>
</listitem>
</varlistentry>
@@ -1362,9 +1334,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeArchLogDir</option></term>
<listitem>
<para>
- Shortcut to assign the same WAL archive directory
- to all the datanode slaves. Not needed if you specify these
- manually.
+ Shortcut to assign the same WAL archive directory to all the Datanode
+ slaves. Not needed if you specify these manually.
</para>
</listitem>
</varlistentry>
@@ -1373,8 +1344,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeArchLogDirs</option></term>
<listitem>
<para>
- Array of WAL archive log directory for each datanode
- slave.
+ Array of WAL archive log directories for each Datanode slave.
</para>
</listitem>
</varlistentry>
@@ -1383,9 +1353,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeExtraConfig</option></term>
<listitem>
<para>
- If you would like to add extra configuration value
- for all the datanodes, specify the file name containing
- such lines for postgresql.conf. Specify <literal>none</literal> otherwise.
+ If you would like to add extra configuration values for all the
+ Datanodes, specify the file name containing such lines for
+ postgresql.conf. Specify <literal>none</literal> otherwise.
</para>
</listitem>
</varlistentry>
@@ -1394,9 +1364,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeExtraPgHba</option></term>
<listitem>
<para>
- File name which contains entries for all the
- datanodes' pg_hba.conf file. Specify <literal>none</literal> if you don't
- have such file.
+ File name which contains entries for all the Datanodes' pg_hba.conf
+ file. Specify <literal>none</literal> if you don't have such a file.
</para>
</listitem>
</varlistentry>
@@ -1405,9 +1374,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeMasterDir</option></term>
<listitem>
<para>
- Shortcut to assign the same work directory to all
- the datanode masters. Not needed if you specify these
- manually.
+ Shortcut to assign the same work directory to all Datanode masters.
+ Not needed if you specify these manually.
</para>
</listitem>
</varlistentry>
@@ -1416,7 +1384,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeMasterDirs</option></term>
<listitem>
<para>
- Array of datanode master work directory.
+ Array of Datanode masters' work directories.
</para>
</listitem>
</varlistentry>
@@ -1425,8 +1393,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeMaterServers</option></term>
<listitem>
<para>
- Array of the host name where each datanode master runs.
- Specify in the order of $coordNames above.
+ Array of the host name where each Datanode master runs. Specify in the
+ order of $coordNames above.
</para>
</listitem>
</varlistentry>
@@ -1435,9 +1403,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeMaxWalSender</option></term>
<listitem>
<para>
- shortcut to assign the same value to each member of
- datanodeMaxWalSenders. Not needed if you assign the value
- manually.
+ Shortcut to assign the same value to each member of
+ datanodeMaxWalSenders. Not needed if you assign the value manually.
</para>
</listitem>
</varlistentry>
@@ -1446,7 +1413,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeMaxWalSenders</option></term>
<listitem>
<para>
- Array of datanode max_wal_senders value.
+ Array of Datanode max_wal_senders values.
</para>
</listitem>
</varlistentry>
@@ -1455,7 +1422,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeNames</option></term>
<listitem>
<para>
- Array to specify coordinator names.
+ Array to specify Datanode names.
</para>
</listitem>
</varlistentry>
@@ -1475,7 +1442,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodePorts</option></term>
<listitem>
<para>
- Array of the listening port number for each datanode.
+ Array of the listening port number for each Datanode.
</para>
</listitem>
</varlistentry>
@@ -1484,9 +1451,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodePoolerPorts</option></term>
<listitem>
<para>
- Array of the port number for each pooler. Pooler takes
- care of the connection between datanode and datanode and
- needs separate port.
+ Array of the port number for each pooler. The pooler takes care of the
+ connection between a Coordinator and Datanode and needs a separate
+ port.
</para>
</listitem>
</varlistentry>
@@ -1495,9 +1462,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeSlave</option></term>
<listitem>
<para>
- Specify <literal>y</literal> if you configure datanode slaves. <literal>n</literal>
- otherwise. If you specify <literal>n</literal>, all the other variables for
- datanode slaves will be ignored.
+ Specify <literal>y</literal> if you configure Datanode slaves.
+ <literal>n</literal> otherwise. If you specify <literal>n</literal>,
+ all the other variables for Datanode slaves will be ignored.
</para>
</listitem>
</varlistentry>
@@ -1506,9 +1473,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeSlaveDir</option></term>
<listitem>
<para>
- Shortcut to assign the same work directory to all
- the datanode slaves. Not needed if you specify these
- manually.
+ Shortcut to assign the same work directory to all Datanode slaves. Not
+ needed if you specify these manually.
</para>
</listitem>
</varlistentry>
@@ -1517,7 +1483,7 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeSlaveDirs</option></term>
<listitem>
<para>
- Array of work directory for each datanode slave.
+ Array of work directories for each Datanode slave.
</para>
</listitem>
</varlistentry>
@@ -1526,9 +1492,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeSlaveServers</option></term>
<listitem>
<para>
- Array of the hostname where slave of each
- datanode runs. Specify <literal>none</literal> if you don't configure the
- slave for specific coordinator.
+ Array of the hostname where the slave of each Datanode runs. Specify
+ <literal>none</literal> if you don't configure the slave for a specific
+ Datanode.
</para>
</listitem>
</varlistentry>
@@ -1546,9 +1512,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeSlavePoolerPorts</option></term>
<listitem>
<para>
- Array of the port number for each pooler. Pooler takes
- care of the connection between datanode and datanode and
- needs separate port.
+ Array of the port number for each pooler. The pooler takes care of the
+ connection between a Coordinator and Datanode and needs a separate
+ port.
</para>
</listitem>
</varlistentry>
@@ -1557,9 +1523,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeSpecificExtraConfig</option></term>
<listitem>
<para>
- Array of the filename which contains extra
- configuration values for each datanode. Specify <literal>none</literal>
- if you don't have such file.
+ Array of the filename that contains extra configuration values for each
+ Datanode. Specify <literal>none</literal> if you don't have such a
+ file.
</para>
</listitem>
</varlistentry>
@@ -1568,9 +1534,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>datanodeSpecificExtraPgHba</option></term>
<listitem>
<para>
- Array of file names which contain specific
- extra pg_hba.conf entry for each datanode. Specify <literal>none</literal>
- if you don't have such file.
+ Array of file names which contain specific extra pg_hba.conf entries
+ for each Datanode. Specify <literal>none</literal> if you don't have
+ such a file.
</para>
</listitem>
</varlistentry>
@@ -1579,9 +1545,9 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><option>primaryDataode</option></term>
<listitem>
<para>
- Specify name of the primary node. This must be one of
- the name in $datanodeNames. If you don't want the primary
- node, specify <literal>N/A</literal> or <literal>none</literal>.
+ Specify the name of the primary node. This must be one of the names in
+ $datanodeNames. If you don't want the primary node, specify
+ <literal>N/A</literal> or <literal>none</literal>.
</para>
</listitem>
</varlistentry>
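The Datanode variables described above live in the bash-sourced pgxc_ctl configuration file. The following fragment is an illustrative sketch only; all host names, ports, and directories are hypothetical values, not defaults:

```shell
# pgxc_ctl configuration fragment (bash syntax) -- hypothetical values
datanodeNames=(dn1 dn2)                # Datanode names
datanodePorts=(20004 20005)            # listening port for each Datanode
datanodePoolerPorts=(20010 20011)      # pooler port for each Datanode
datanodeSlave=y                        # n would make all slave variables ignored
datanodeSlaveDirs=($HOME/pgxc/dn1_slv $HOME/pgxc/dn2_slv)
primaryDatanode=dn1                    # must be one of $datanodeNames, or none
```

Every array must list its entries in the same order as datanodeNames so that index N always refers to the same Datanode.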
@@ -1618,18 +1584,18 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><literal>add datanode slave <replaceable class="parameter">name</replaceable> <replaceable class="parameter">host</replaceable> <replaceable class="parameter">port</replaceable> <replaceable class="parameter">pooler</replaceable> <replaceable class="parameter">dir</replaceable> <replaceable class="parameter">archDir</replaceable></literal></term>
<listitem>
<para>
- Add the specified node to your Postgres-XL cluster.
- Each node need host name and its work directory. gtm slave,
- gtm_proxy, coordinator master/slave and datanode master/slave need its own port
- to listen to. Coordinator and datanode also need pooler port to pool
- connections to datanodes. Coordinator and datanode slave need a
- directory to receive WAL segments from their master. While adding coordinator
- master and datanode master, extra server configuration and extra pg_hba
- configuration can be specified in a file.
+ Add the specified node to your Postgres-XL cluster. Each node needs a
+ host name and its work directory. The GTM slave, GTM proxy, Coordinator
+ master/slave and Datanode master/slave each need their own port to listen on.
+ Coordinators and Datanodes also need a pooler port to pool connections to
+ Datanodes. Coordinator and Datanode slaves need a directory to receive
+ WAL segments from their master. While adding a Coordinator master and a
+ Datanode master, extra server configuration and extra pg_hba
+ configuration parameters can be specified in a file.
</para>
<para>
- When you add coordinator and datanode master, node information at
- all the coordinators will be updated with the new one and gtm_proxy
+ When you add a Coordinator or Datanode master, node information at
+ all of the Coordinators will be updated with the new one and gtm_proxy
will be selected automatically based upon where the new node runs.
</para>
<para>
@@ -1654,8 +1620,8 @@ PGXC$ prepare config minimal my_minimal_config.conf
<listitem>
<para>
Invokes createuser utility to create a new user using specified
- coordinator. Of coordinator is not specified, <application>pgxc_ctl</application> chooses
- one of the available ones.
+ Coordinator. If a Coordinator is not specified,
+ <application>pgxc_ctl</application> chooses one of the available ones.
</para>
</listitem>
</varlistentry>
@@ -1664,12 +1630,12 @@ PGXC$ prepare config minimal my_minimal_config.conf
<term><literal>deploy [ all | <replaceable class="parameter">host ...</replaceable> ]</literal></term>
<listitem>
<para>
- Deploys Postgres-XL binaries and other installation material to
- specified hosts. If "all" is specified, they will be deployed to
- all the hosts found in the configuration file. If list of the host
- is specifies, deployment will be done to all the specified hosts,
- regardless if they are found in the configuration file or not.
- Target directory is taken from the variable <option>pgxcInstallDir</option>.
+ Deploys Postgres-XL binaries and other installation material to specified
+ hosts. If "all" is specified, they will be deployed to all hosts found
+ in the configuration file. If a list of hosts is specified,
+ deployment will be done to all the specified hosts, regardless of
+ whether they are found in the configuration file or not. The target
+ directory is taken
+ from the variable <option>pgxcInstallDir</option>.
</para>
</listitem>
</varlistentry>
@@ -1697,12 +1663,11 @@ PGXC$ prepare config minimal my_minimal_config.conf
Initializes specified nodes.
</para>
<para>
- At initialization, all the working directories of each component
- will be created if it does not exist. If it does and
-<literal>force</literal> is specified, then all the
- contents under the working directory will be removed. Without
-<literal>force</literal> option, existing non-empty directories will not be
-cleaned and the server will start with the existing data.
+ At initialization, the working directories of each component will be
+ created if they do not exist. If they exist and <literal>force</literal> is
+ specified, then all contents under the working directory will be removed.
+ Without the <literal>force</literal> option, existing non-empty directories will
+ not be cleaned and the server will start with the existing data.
</para>
<para>
When "all" option is specified, then node information at each
@@ -1722,8 +1687,8 @@ cleaned and the server will start with the existing data.
<term><literal>kill datanode [ master | slave ] [ all | <replaceable class="parameter">nodename ...</replaceable> ]</literal></term>
<listitem>
<para>
- Kills specified node. If nodename is specified and it has both
- master and slave, then both master and slave will be chosen.
+ Kills specified node. If nodename is specified and it has both a master
+ and slave, then both master and slave will be chosen.
</para>
<para>
When killing components, their ports will be cleaned too.
@@ -1736,10 +1701,9 @@ cleaned and the server will start with the existing data.
<term><literal>log [ message | msg ] <replaceable class="parameter">message_body</replaceable></literal></term>
<listitem>
<para>
- Prints the specified contents to the log file.
- variable or var option writes specified variable name and its
- value.
- message or msg option writes specified message.
+ Prints the specified contents to the log file. The variable or var option
+ writes the specified variable name and its value. The message or msg option
+ writes the specified message.
</para>
</listitem>
</varlistentry>
@@ -1764,11 +1728,12 @@ cleaned and the server will start with the existing data.
<term><literal>prepare [ <replaceable class="parameter">path</replaceable> ]</literal></term>
<listitem>
<para>
- Write pgxc_ctl configuration file template to the specified file.
- If path option is not specified, target file will be default
- configuration file, or the file specified by configFile option in
- <filename>/etc/pgxc_ctl</filename> or <filename>~/.pgxc_ctl</filename>. If you specify relative path, it
- will be against <filename>pgxc_ctl_home</filename>.
+ Writes a pgxc_ctl configuration file template to the specified file. If
+ the path option is not specified, the target file will be the default
+ configuration file, or the file specified by configFile option in
+ <filename>/etc/pgxc_ctl</filename> or <filename>~/.pgxc_ctl</filename>.
+ If you specify a relative path, it will be relative to
+ <filename>pgxc_ctl_home</filename>.
</para>
</listitem>
</varlistentry>
@@ -1778,9 +1743,9 @@ cleaned and the server will start with the existing data.
<term><literal></literal></term>
<listitem>
<para>
- Invokes psql targetted to specified coordinator. If no
- coordinator is specifies, <application>pgxc_ctl</application> will choose one of the available
- ones.
+ Invokes psql targeted to the specified Coordinator. If no Coordinator
+ is specified, <application>pgxc_ctl</application> will choose one of the
+ available ones.
</para>
</listitem>
</varlistentry>
@@ -1798,8 +1763,8 @@ cleaned and the server will start with the existing data.
<term><literal>reconnect gtm_proxy [ all | <replaceable class="parameter">nodename ...</replaceable> ]</literal></term>
<listitem>
<para>
- Reconnects specified gtm_proxy to new gtm. This is needed after
- you failover gtm to its slave.
+ Reconnects the specified gtm_proxy to a new GTM. This is needed after you
+ fail over the GTM to its slave.
</para>
</listitem>
</varlistentry>
@@ -1823,7 +1788,7 @@ cleaned and the server will start with the existing data.
<para>
Set variable value. You can specify multiple values to a
variable. In this case simply specify these values as separated
- value.
+ values.
</para>
</listitem>
</varlistentry>
@@ -1863,16 +1828,16 @@ cleaned and the server will start with the existing data.
<term><literal>stop [ -m smart | fast | immediate ] datanode [ master | slave ] [ all | <replaceable class="parameter">nodename ...</replaceable> ] </literal></term>
<listitem>
<para>
- Stops specified node. For datanode and coordinator, you can
- specify stop mode as in "pg_ctl stop" command.
+ Stops the specified node. For a Datanode and Coordinator, you can
+ specify the stop mode as in the "pg_ctl stop" command.
</para>
<para>
- When you stop coordinator or datanode slave, the master will be
+ When you stop a Coordinator or Datanode slave, the master will be
reconfigured to remove synchronous replication.
</para>
<para>
- When you stop coordinator or datanode slave, the master will be
- reconfigurated to remove synchronous replication.
+ When you stop a Coordinator or Datanode slave, the master will be
+ reconfigured to remove synchronous replication.
</para>
</listitem>
</varlistentry>
@@ -1881,9 +1846,8 @@ cleaned and the server will start with the existing data.
<term><literal>unregister <replaceable class="parameter">unregister_option ...</replaceable></literal></term>
<listitem>
<para>
- Unregisteres specified node from the gtm. This could be needed
- when some node crashes and would like to start new one.
-
+ Unregisters the specified node from the GTM. This could be needed when
+ starting a new node after a node crashes.
</para>
<para>
unregister_option is one of the following:
diff --git a/doc/src/sgml/pgxcclean.sgml b/doc/src/sgml/pgxcclean.sgml
index f305afacc4..fc985bf802 100644
--- a/doc/src/sgml/pgxcclean.sgml
+++ b/doc/src/sgml/pgxcclean.sgml
@@ -17,14 +17,16 @@ pgxc_clean <optional> <replaceable>option</> </optional> <optional><replaceable>
</para>
<para>
- <application>pgxc_clean</application> is a Postgres-XL utility to maintain transaction status after a crash.
- When a Postgres-XL node crashes and recovers or fails over, the commit status of such node may be inconsistent
- with other nodes. <application>pgxc_clean</application> checks transaction commit status and corrects them.
+ <application>pgxc_clean</application> is a Postgres-XL utility to maintain
+ transaction status after a crash. When a Postgres-XL node crashes and
+ recovers or fails over, the commit status of the node may be inconsistent
+ with other nodes. <application>pgxc_clean</application> checks
+ transaction commit status and corrects them.
</para>
<para>
- You should run this utility against one of the available Coordinators. The tool cleans up transaction status
- of all the nodes automatically.
+ You should run this utility against one of the available Coordinators. The
+ tool cleans up transaction status of all nodes automatically.
</para>
</sect2>
diff --git a/doc/src/sgml/pgxcddl.sgml b/doc/src/sgml/pgxcddl.sgml
index 1f64bb7ed7..2afe06fb9d 100644
--- a/doc/src/sgml/pgxcddl.sgml
+++ b/doc/src/sgml/pgxcddl.sgml
@@ -21,12 +21,12 @@ pgxc_ddl <optional> <replaceable>option</> </optional>
</para>
<para>
- pgxc_ddl is used to synchronize all Coordinator catalog tables from
- one chosen by a user. It is also possible to launch a DDL on one
- Coordinator, and then to synchronize all the Coordinator catalog
- tables from the catalog table of the Coordinator having received the
- DDL. The copy method is "cold". All the Coordinators are stopped,
- catalog files are copied, then all the Coordinators are restarted.
+ pgxc_ddl is used to synchronize all Coordinator catalog tables from one
+ chosen by a user. It is also possible to launch a DDL on one Coordinator,
+ and then to synchronize all Coordinator catalog tables from the catalog
+ table of the Coordinator having received the DDL. The copy method is "cold".
+ All Coordinators are stopped, catalog files are copied, then all
+ Coordinators are restarted.
</para>
<para>
@@ -113,9 +113,9 @@ pgxc_ddl <optional> <replaceable>option</> </optional>
</para>
<para>
- Because <command>pgxc_ddl</command> requires access to Coordinator
- configuration file and data folders, the following parameters have
- to be set in <filename>pgxc.conf</filename>:
+ Because <command>pgxc_ddl</command> requires access to the Coordinator
+ configuration file and data folders, the following parameters have to be set
+ in <filename>pgxc.conf</filename>:
</para>
<table>
@@ -135,10 +135,10 @@ pgxc_ddl <optional> <replaceable>option</> </optional>
<entry><varname>coordinator_ports</varname></entry>
<entry>string</entry>
<entry>
- Specify the port number of all the Coordinators. Maintain the
- order of the value same as those in coordinator_hosts. It is
- necessary to specify a number of ports equal to the number of
- hosts. A comma separator is also necessary.
+ Specify the port number of all Coordinators. Maintain the same order
+ of the values as those in coordinator_hosts. It is necessary to
+ specify a number of ports equal to the number of hosts. A comma
+ separator is also necessary.
</entry>
</row>
@@ -146,10 +146,10 @@ pgxc_ddl <optional> <replaceable>option</> </optional>
<entry><varname>coordinator_folders</varname></entry>
<entry>string</entry>
<entry>
- Specify the data folders of all the Coordinators. Maintain the
- order of the value same as those in coordinator_hosts. It is
- necessary to specify a number of data folders equal to the
- number of hosts. A comma separator is also necessary.
+ Specify the data folders of all Coordinators. Maintain the same order
+ of the values as those in coordinator_hosts. It is necessary to
+ specify a number of data folders equal to the number of hosts. A comma
+ separator is also necessary.
</entry>
</row>
@@ -157,8 +157,8 @@ pgxc_ddl <optional> <replaceable>option</> </optional>
<entry><varname>coordinator_hosts</varname></entry>
<entry>string</entry>
<entry>
- Specify the host name or IP address of Coordinator. Separate
- each value with comma.
+ Specify the host name or IP address of the Coordinators. Separate each
+ value with a comma.
</entry>
</row>
</tbody>
@@ -176,10 +176,10 @@ pgxc_ddl <optional> <replaceable>option</> </optional>
<note>
<para>
- Configuration files of Coordinators that have their catalog files
- changed are defined with an extension name postgresql.conf.number,
- "number" being the number of t Coordinator in the order defined
- in <filename>pgxc.conf</filename>.
+ Configuration files of Coordinators that have their catalog files changed
+ are defined with an extension name postgresql.conf.number, "number" being
+ the number of the Coordinator in the order defined in
+ <filename>pgxc.conf</filename>.
</para>
</note>
</sect2>
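The three parameters above can be sketched in a pgxc.conf fragment. This is an illustrative sketch only; the host names, ports, and folders are hypothetical, and each comma-separated list must keep the same per-Coordinator order:

```
# pgxc.conf fragment for pgxc_ddl -- hypothetical values
# one entry per Coordinator, comma-separated, same order in every parameter
coordinator_hosts   = 'coord01,coord02'
coordinator_ports   = '5432,5432'
coordinator_folders = '/data/coord01,/data/coord02'
```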
diff --git a/doc/src/sgml/pgxcmonitor.sgml b/doc/src/sgml/pgxcmonitor.sgml
index 0132dc2749..f0091153a9 100644
--- a/doc/src/sgml/pgxcmonitor.sgml
+++ b/doc/src/sgml/pgxcmonitor.sgml
@@ -26,8 +26,8 @@ pgxc_monitor <optional> <replaceable>option</> </optional>
</para>
<para>
- If the target node is running, it exits with exit code zero.
- If not, it exits with non-zero exit code.
+ If the target node is running, it exits with exit code zero. If not, it
+ exits with a non-zero exit code.
</para>
<para>
@@ -64,8 +64,8 @@ pgxc_monitor <optional> <replaceable>option</> </optional>
<term><option>-n <replaceable class="parameter">nodename</replaceable></></term>
<listitem>
<para>
- Node name to use when testing gtm or gtm_proxy.
- Default value is <literal>pgxc_monitor</literal>.
+ Node name to use when testing the GTM or gtm_proxy. Default value is
+ <literal>pgxc_monitor</literal>.
</para>
</listitem>
</varlistentry>
@@ -132,21 +132,19 @@ pgxc_monitor <optional> <replaceable>option</> </optional>
<title>Options</title>
<para>
- When monitoring Coordinator or Datanode, <option>-p</option> and
- <option>-h</option> options can be supplied using
- <literal>.pgpass</literal> file.
- If you use non-default target database name, and username, as well as
- password,
- they must also be supplied using <option>.pgpass</option> file.
+ When monitoring a Coordinator or Datanode, the <option>-p</option> and
+ <option>-h</option> options can be supplied using the <literal>.pgpass</literal>
+ file. If you use a non-default target database name, username, or
+ password, they must also be supplied using the <option>.pgpass</option> file.
</para>
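As a reminder of the file's layout, a .pgpass entry follows the standard PostgreSQL `hostname:port:database:username:password` format. The values below are hypothetical:

```
# ~/.pgpass -- must have 0600 permissions; hypothetical values
coord01:5432:testdb:appuser:secret
```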
<para>
- If password is needed, it must also be supplied using
+ If a password is needed, it must be supplied using the
<option>.pgpass</option> file too.
</para>
<para>
- Monitoring Coordinator and Datanode uses system(3) function.
- Therefore,you should not use set-userid bit or set-groupid bit.
- Also, because this uses psql command, psql must be in your PATH.
+ Monitoring a Coordinator or Datanode uses the system(3) function. Therefore,
+ you should not use the set-userid bit or set-groupid bit. Also, because
+ this uses the psql command, psql must be in your PATH.
</para>
<para>
The username and database name can be specified via command line
diff --git a/doc/src/sgml/plperl.sgml b/doc/src/sgml/plperl.sgml
index 640cb375f7..a1207652a2 100644
--- a/doc/src/sgml/plperl.sgml
+++ b/doc/src/sgml/plperl.sgml
@@ -571,9 +571,8 @@ SELECT * from lotsa_md5(500);
<listitem>
<para>
- <command>PREPARE</> is not supported in the current release
- of <productname>Postgres-XL</>. This may be supported in the
- future releases.
+ <command>PREPARE</> is not supported in the current release of
+ <productname>Postgres-XL</>. This may be supported in a future release.
</para>
<para>
diff --git a/doc/src/sgml/ref/alter_node.sgml b/doc/src/sgml/ref/alter_node.sgml
index 733ac9dcad..5485e70ec4 100644
--- a/doc/src/sgml/ref/alter_node.sgml
+++ b/doc/src/sgml/ref/alter_node.sgml
@@ -37,8 +37,8 @@ ALTER NODE <replaceable class="parameter">nodename</replaceable> WITH
cluster node information in catalog pgxc_node.
</para>
<para>
- Node connection that has been modified does not guaranty that connection
- information cached in pooler is updated accordingly.
+ Modifying a node's connection does not guarantee that the connection
+ information cached in the pooler is updated accordingly.
</para>
<para>
<command>ALTER NODE</command> only runs on the local node where it is launched.
diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml
index c030d2a8ff..1fd3d29f89 100644
--- a/doc/src/sgml/ref/alter_table.sgml
+++ b/doc/src/sgml/ref/alter_table.sgml
@@ -705,9 +705,9 @@ ALTER TABLE ALL IN TABLESPACE <replaceable class="PARAMETER">name</replaceable>
<term><literal>ROUNDROBIN</literal></term>
<listitem>
<para>
- Each row of the table will be placed in one of the Datanodes
- by round-robin manner. The value of the row will not be
- needed to determine what Datanode to go.
+ Each row of the table will be placed in one of the Datanodes in a
+ round-robin manner. The value of the row will not be needed to
+ determine what Datanode to go to.
</para>
</listitem>
</varlistentry>
@@ -776,9 +776,9 @@ ALTER TABLE ALL IN TABLESPACE <replaceable class="PARAMETER">name</replaceable>
<term><literal>DELETE NODE</literal></term>
<listitem>
<para>
- This deletes a list of nodes where data of table is distributed
- to the existing list. If the list of nodes deleted contains nodes
- not used by table, an error is returned.
+ This deletes a list of nodes from the existing list of nodes where the
+ data of a table is distributed. If the list of deleted nodes contains
+ nodes not used by the table, an error is returned.
</para>
</listitem>
</varlistentry>
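As a hedged illustration of the clause above, assuming a table t1 currently distributed across Datanodes dn1, dn2 and dn3 (all names hypothetical):

```sql
-- remove dn3 from the set of nodes holding t1's data;
-- listing a node not used by t1 would raise an error
ALTER TABLE t1 DELETE NODE (dn3);
```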
@@ -1164,14 +1164,16 @@ ALTER TABLE ALL IN TABLESPACE <replaceable class="PARAMETER">name</replaceable>
<term>Default redistribution:</term>
<listitem>
<para>
- This is the slowest scenario possible. It is done in 3 or 4 steps. Data is firstly
- saved on Coordinator by fetching all the data with <command>COPY TO</> command. At
- this point all the tuples are saved using tuple store. The amount of cache allowed for
- tuple store operation can be controlled with <varname>work_mem</>. Then the table is
- truncated on all the nodes. Then catalogs are updated. Finally data inside tuple store
- is redistributed using an internal <command>COPY FROM</> mechanism. <command>REINDEX</>
- is issued if necessary. The overall performance of this scenario is close to the
- time necessary to run consecutively <command>COPY TO</> and <command>COPY FROM</>.
+ This is the slowest scenario possible. It is done in 3 or 4 steps. Data
+ is first saved on the Coordinator by fetching all the data with the
+ <command>COPY TO</> command. At this point, all the tuples are saved
+ using a tuple store. The amount of cache allowed for tuple store
+ operation can be controlled with <varname>work_mem</>. Then the table
+ is truncated on all the nodes. Then catalogs are updated. Finally data
+ inside the tuple store is redistributed using an internal <command>COPY
+ FROM</> mechanism. <command>REINDEX</> is issued if necessary. The
+ overall performance of this scenario is close to the time necessary to
+ run consecutively <command>COPY TO</> and <command>COPY FROM</>.
</para>
</listitem>
</varlistentry>
@@ -1181,10 +1183,11 @@ ALTER TABLE ALL IN TABLESPACE <replaceable class="PARAMETER">name</replaceable>
<listitem>
<para>
The node list of a table can have new nodes as well as removed nodes.
- If nodes are only removed, <command>TRUNCATE</> is launched to remote nodes that are
- removed. If new nodes are added, then table data is fetch on Coordinator with <command>
- COPY TO</> and stored inside a tuplestore controlled with <varname>work_mem</>, then
- data stored is only sent to the new nodes using <command>COPY FROM</> with data stored
+ If nodes are only removed, <command>TRUNCATE</> is launched to remote
+ nodes that are removed. If new nodes are added, then table data is
+ fetched on the Coordinator with <command>COPY TO</> and stored inside
+ a tuple store controlled with <varname>work_mem</>, then data stored is
+ only sent to the new nodes using <command>COPY FROM</> with data stored
inside the tuplestore. <command>REINDEX</> is issued if necessary.
</para>
</listitem>
@@ -1194,14 +1197,17 @@ ALTER TABLE ALL IN TABLESPACE <replaceable class="PARAMETER">name</replaceable>
<term>Redistribution from replicated to distributed table:</term>
<listitem>
<para>
- If the relation node list contains new nodes, the default redistribution
- mechanism is used. However, if the node list of relation after redistribution is
- included in node list of relation after redistribution, as all the tuples are already
- located on remote nodes, it is not necessary to fetch any data on Coordinator. Hence,
- <command>DELETE</> is used to remove on remote nodes only the necessary tuples. This
- query uses selects tuples to remove with conditions based on the number of nodes in node
- list of relation after redistribution, the <literal>HASH</> or <literal>MODULO</> value
- used for new distribution and the remote node itself where <command>DELETE</> is launched..
+ If the relation node list contains new nodes, the default
+ redistribution mechanism is used. However, if the node list of the relation
+ after redistribution is included in the node list of the relation before
+ redistribution, as all the tuples are already located on remote nodes,
+ it is not necessary to fetch any data on the Coordinator. Hence,
+ <command>DELETE</> is used to remove on remote nodes only the necessary
+ tuples. This query selects tuples to remove with conditions based on
+ the number of nodes in the node list of the relation after redistribution,
+ the <literal>HASH</> or <literal>MODULO</> value used for the new
+ distribution and the remote node itself where <command>DELETE</> is
+ launched.
<command>REINDEX</> is issued if necessary.
</para>
</listitem>
diff --git a/doc/src/sgml/ref/checkpoint.sgml b/doc/src/sgml/ref/checkpoint.sgml
index 00d65cfa37..e51dd9f9cf 100644
--- a/doc/src/sgml/ref/checkpoint.sgml
+++ b/doc/src/sgml/ref/checkpoint.sgml
@@ -53,8 +53,8 @@ CHECKPOINT
</para>
<para>
- In <productname>Postrgres-XL</>, <command>CHECKPOINT</> is
- performed at local Coordinator and all of the underlying Datanodes.
+ In <productname>Postgres-XL</>, <command>CHECKPOINT</> is performed at the
+ local Coordinator and all of the underlying Datanodes.
</para>
</refsect1>
diff --git a/doc/src/sgml/ref/clean_connection.sgml b/doc/src/sgml/ref/clean_connection.sgml
index 2b0cdee80d..08faa91950 100644
--- a/doc/src/sgml/ref/clean_connection.sgml
+++ b/doc/src/sgml/ref/clean_connection.sgml
@@ -70,8 +70,8 @@ CLEAN CONNECTION TO { COORDINATOR ( <replaceable class="parameter">nodename</rep
<term><replaceable class="parameter">username</replaceable></term>
<listitem>
<para>
- If specified in the optional clause <literal>TO USER</literal>,
- pooler connections are cleaned for given role.
+ If specified in the optional clause <literal>TO USER</literal>, pooler
+ connections are cleaned for the given role.
</para>
</listitem>
</varlistentry>
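A hedged sketch of how the clauses above combine, using hypothetical node, database, and role names:

```sql
-- drop pooled connections for role appuser to database testdb
-- held by Coordinator coord01
CLEAN CONNECTION TO COORDINATOR (coord01) FOR DATABASE testdb TO USER appuser;
```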
@@ -80,10 +80,9 @@ CLEAN CONNECTION TO { COORDINATOR ( <replaceable class="parameter">nodename</rep
<term><replaceable class="parameter">nodename</replaceable></term>
<listitem>
<para>
- In the case of cleaning connections to a given list of
- Coordinator, <replaceable class="parameter">nodename</replaceable>
- has to be specified with the clause <literal>TO COORDINATOR
- </literal>.
+ In the case of cleaning connections to a given list of Coordinators,
+ <replaceable class="parameter">nodename</replaceable> has to be specified
+ with the clause <literal>TO COORDINATOR</literal>.
</para>
<para>
In the case of cleaning connections to a given list of
diff --git a/doc/src/sgml/ref/commit_prepared.sgml b/doc/src/sgml/ref/commit_prepared.sgml
index a0433964f8..58438f99a1 100644
--- a/doc/src/sgml/ref/commit_prepared.sgml
+++ b/doc/src/sgml/ref/commit_prepared.sgml
@@ -76,8 +76,8 @@ COMMIT PREPARED <replaceable class="PARAMETER">transaction_id</replaceable>
</para>
<para>
If <literal>xc_maintenance_mode</literal> GUC parameter is set to
- <literal>ON</literal>, <command>COMMIT PREPARED</command> will not propagate to
- other nodes. It just runs locally and report the result to
+ <literal>ON</literal>, <command>COMMIT PREPARED</command> will not propagate
+ to other nodes. It just runs locally and reports the result to the
<literal>GTM</literal>.
</para>
</refsect1>
diff --git a/doc/src/sgml/ref/create_function.sgml b/doc/src/sgml/ref/create_function.sgml
index b65d81542c..4fea9840b3 100644
--- a/doc/src/sgml/ref/create_function.sgml
+++ b/doc/src/sgml/ref/create_function.sgml
@@ -45,15 +45,16 @@ CREATE [ OR REPLACE ] FUNCTION
<note>
<para>
- <productname>Postgres-XL</> function usage currently requires a lot of care,
- otherwise unexepected results may occur, and you may bring your data in an inconsistent state.
- This behavior may change in a future version to make it safer.
+ <productname>Postgres-XL</> function usage currently requires a lot of
+ care, otherwise unexpected results may occur, and you may bring your data
+ to an inconsistent state. This behavior may change in a future version to
+ make it safer.
</para>
<para>
- A call such as <literal>SELECT my_function(1,2);</>, without any FROM clause,
- will execute on a local coordinator, and
- may involve other datanodes and behave as expected, being driven from a coordinator.
+ A call such as <literal>SELECT my_function(1,2);</>, without any FROM
+ clause, will execute on a local Coordinator, and may involve other
+ Datanodes and behave as expected, being driven from a Coordinator.
</para>
<para>
@@ -536,8 +537,8 @@ CREATE [ OR REPLACE ] FUNCTION
</para>
<para>
- It is user's responsibility to deploy the file to all the
- servers where Coordinator or Datanode may run.
+ It is the user's responsibility to deploy the file to all the servers where
+ a Coordinator or Datanode may run.
</para>
</listitem>
diff --git a/doc/src/sgml/ref/create_node.sgml b/doc/src/sgml/ref/create_node.sgml
index b7c628ac0a..ada1def117 100644
--- a/doc/src/sgml/ref/create_node.sgml
+++ b/doc/src/sgml/ref/create_node.sgml
@@ -100,12 +100,13 @@ CREATE NODE <replaceable class="parameter">nodename</replaceable> WITH
value can be specified. In case no value is specified, <literal>PREFERRED</literal>
acts like <literal>false</>.
</para>
- <para>
- You can specify different <literal>PREFERRED</literal> node at different coordinator.
- This parameter affects performance of your <literal>Postgres-XL</literal> cluster.
- If you configure a datanode where you configure a coordinator,
- you should specify <literal>PREFERRED</literal> for the coordinator to such a local datanode.
- This will save network communication and improve cluster-wide performance.
+ <para> You can specify different <literal>PREFERRED</literal> nodes at
+ different Coordinators. This parameter affects the performance of your
+ <literal>Postgres-XL</literal> cluster. If you configure a Datanode
+ on the same server as a Coordinator, you should specify that local
+ Datanode as <literal>PREFERRED</literal> for the Coordinator. This
+ will save network communication and improve cluster-wide
+ performance.
</para>
</listitem>
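The co-location advice above can be sketched as follows; the node name, host, and port are hypothetical:

```sql
-- on the Coordinator running on host node01, register the co-located
-- Datanode and mark it PREFERRED to avoid unnecessary network hops
CREATE NODE dn1 WITH (TYPE = 'datanode', HOST = 'node01', PORT = 15432, PREFERRED);
```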
diff --git a/doc/src/sgml/ref/create_nodegroup.sgml b/doc/src/sgml/ref/create_nodegroup.sgml
index 3b72c11d29..f5955dda86 100644
--- a/doc/src/sgml/ref/create_nodegroup.sgml
+++ b/doc/src/sgml/ref/create_nodegroup.sgml
@@ -29,7 +29,7 @@ WITH ( <replaceable class="parameter">nodename</replaceable> [, ... ] )
<para>
<command>CREATE NODE GROUP</command> is a SQL command specific
to <productname>Postgres-XL</productname> that creates
- node group information in catalog pgxc_group.
+ node group information in catalog table pgxc_group.
</para>
<para>
@@ -65,8 +65,8 @@ WITH ( <replaceable class="parameter">nodename</replaceable> [, ... ] )
<refsect1>
<title>Notes</title>
<para>
- A group of nodes works as an alias for node lists when defining tables
- on sub-clusters. Only Datanode can be included in node groups.
+ A group of nodes works as an alias for node lists when defining tables on
+ sub-clusters. Only Datanodes can be included in node groups.
</para>
</refsect1>
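For illustration, a node group aliasing two hypothetical Datanodes could be created as follows (the group and node names are invented):

```sql
-- dn1 and dn2 must be existing Datanodes; the group name is arbitrary.
CREATE NODE GROUP analytics_group WITH (dn1, dn2);
```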
diff --git a/doc/src/sgml/ref/create_table_as.sgml b/doc/src/sgml/ref/create_table_as.sgml
index b59156bd74..ca32cbf1ad 100644
--- a/doc/src/sgml/ref/create_table_as.sgml
+++ b/doc/src/sgml/ref/create_table_as.sgml
@@ -233,9 +233,9 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXI
<term><literal>ROUNDROBIN</literal></term>
<listitem>
<para>
- Each row of the table will be placed in one of the Datanodes
- by round-robin manner. The value of the row will not be
- needed to determine what Datanode to go.
+ Each row of the table will be placed in one of the Datanodes in a
+ round-robin manner. The values of the row are not needed to
+ determine which Datanode it goes to.
</para>
</listitem>
</varlistentry>
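A sketch of a round-robin distributed table (table and column names are invented):

```sql
-- Rows are spread across Datanodes in turn; no column value is
-- consulted when choosing the target Datanode.
CREATE TABLE events_log (id bigint, payload text)
    DISTRIBUTE BY ROUNDROBIN;
```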
diff --git a/doc/src/sgml/ref/execute_direct.sgml b/doc/src/sgml/ref/execute_direct.sgml
index cc97b4052e..843c3a5ae0 100644
--- a/doc/src/sgml/ref/execute_direct.sgml
+++ b/doc/src/sgml/ref/execute_direct.sgml
@@ -26,10 +26,10 @@ EXECUTE DIRECT ON ( <replaceable class="parameter">nodename</replaceable> [, ...
<title>Description</title>
<para>
- <command>EXECUTE DIRECT</command> is a SQL command specially created
- for <productname>Postgres-XL</productname> to allow to launch queries directly to dedicated
- nodes determined by a list of nodes <replaceable class="parameter">
- nodelist</replaceable>.
+ <command>EXECUTE DIRECT</command> is a SQL command specially created for
+ <productname>Postgres-XL</productname> that allows launching queries directly
+ on one of the nodes given by the node list <replaceable
+ class="parameter">nodelist</replaceable>.
</para>
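As a hypothetical example (node and table names invented), a query can be sent to a single node like this:

```sql
-- Run the quoted query only on node dn1.
EXECUTE DIRECT ON (dn1) 'SELECT count(*) FROM my_table';
```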
<para>
diff --git a/doc/src/sgml/ref/gtm.sgml b/doc/src/sgml/ref/gtm.sgml
index 2e33f00c55..b7e8f40bc3 100644
--- a/doc/src/sgml/ref/gtm.sgml
+++ b/doc/src/sgml/ref/gtm.sgml
@@ -30,16 +30,16 @@
Description
</title>
<para>
- gtm provides consistent transaction management fully compatible with
- vanilla PostgreSQL. It is highly advised to start and stop gtm
+ The GTM provides consistent transaction management fully compatible with
+ vanilla PostgreSQL. It is highly advised to start and stop the GTM
using gtm_ctl(8).
</para>
<para>
- You must provide a gtm configuration
- file <filename>gtm.conf</filename> placed in the gtm working directory
+ You must provide a GTM configuration
+ file <filename>gtm.conf</filename> placed in the GTM working directory
as specified by <literal>-D</literal> command line option. The
- configuration file specifies gtm running environment and resources.
+ configuration file specifies the GTM running environment and resources.
</para>
<para>
@@ -126,7 +126,7 @@
<para>
Specifies <literal>keepalives_count</literal> option for the
connection to <application>gtm</application>. This option is
- effective only when it runs as GTM Standby.
+ effective only when it runs as a GTM Standby.
The default value is zero and keepalives feature is disabled.
</para>
</listitem>
@@ -141,7 +141,7 @@
<para>
Specifies <literal>keepalives_idle</literal> option for the
connection to <application>gtm</application>. This option is
- effective only when it runs as GTM Standby.
+ effective only when it runs as a GTM Standby.
The default value is zero and keepalives feature is disabled.
</para>
</listitem>
@@ -156,7 +156,7 @@
<para>
Specifies <literal>keepalives_interval</literal> option for the
connection to <application>gtm</application>. This option is
- effective only when it runs as GTM Standby.
+ effective only when it runs as a GTM Standby.
The default value is zero and keepalives feature is disabled.
</para>
</listitem>
@@ -269,13 +269,13 @@
</indexterm></term>
<listitem>
<para>
- Specifies if the backup to GTM-Standby is taken synchronously. If this is turned on,
- GTM will send and receive synchronize message to make sure that all the backups
- reached to the standby.
+ Specifies if the backup to the GTM-Standby is taken synchronously. If
+ this is turned on, the GTM will send and receive synchronization messages
+ to make sure that all the backups have reached the standby.
</para>
<para>
- If it is turned off, all the backup information will be sent without checking they
- reached to GTM-Standby.
+ If it is turned off, all the backup information will be sent without
+ checking whether it has reached the GTM-Standby.
</para>
<para>
Default value is off.
@@ -379,7 +379,7 @@
</para>
<para>
- When starting GTM as a standby instance, the following options can also be provided.
+ When starting a GTM as a standby instance, the following options can also be provided.
</para>
<para>
@@ -388,7 +388,7 @@
<term><option>s</option></term>
<listitem>
<para>
- Specify if GTM starts as a standby
+ Specify whether the GTM starts as a standby
</para>
</listitem>
</varlistentry>
@@ -397,7 +397,7 @@
<term><option>i</option></term>
<listitem>
<para>
- Specify host name or IP address of active GTM instance to connect to
+ Specify the host name or IP address of the active GTM instance to connect to
</para>
</listitem>
</varlistentry>
@@ -406,7 +406,7 @@
<term><option>q</option></term>
<listitem>
<para>
- Specify port number of active GTM instance to connect to
+ Specify port number of the active GTM instance to connect to
</para>
</listitem>
</varlistentry>
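Combining the three standby options above, a standby might be started as follows (the data directory, host and port are illustrative; check the exact flags against your gtm build):

```shell
# Sketch: start a GTM standby pointing at the active GTM instance.
gtm -D /usr/local/pgsql/data_gtm_standby -s -i active-gtm-host -q 6666
```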
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 6260fe794c..8975bf3f70 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -398,7 +398,7 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<para>
As described in the previous chapter, <productname>XL</> consists of
various components.
- Minimum set of components are GTM, GTM-Proxy, Coordinator and
+ The minimum set of components is a GTM, a GTM-Proxy, a Coordinator and a
Datanode.
You must configure and start each of them.
Following sections will give you how to configure and start them.
@@ -433,36 +433,33 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
</sect2>
<sect2 id="gtm-start">
- <title>Starting GTM</title>
+ <title>Starting a GTM</title>
<para>
- GTM provides global transaction management feature to all the
- other components in <productname>Postgres-XL</> database cluster.
- Because GTM handles transaction requirements from all
- the Coordinators and Datanodes, it is highly advised to run this
- in a separate server.
+ The GTM provides the global transaction management feature to all the other
+ components in a <productname>Postgres-XL</> database cluster. Because the
+ GTM handles transaction requirements from all the Coordinators and
+ Datanodes, it is highly advised to run it on a separate server.
</para>
<para>
- Before you start GTM, you should decide followings:
+ Before you start the GTM, you should decide the following:
</para>
<variablelist>
<varlistentry>
- <term>Where to run GTM</term>
+ <term>Where to run the GTM</term>
<listitem>
<para>
- Because GTM receives all the request to begin/end
- transactions and to refer to sequence values, you should
- run GTM in a separate server. If you
- run GTM in the same server as Datanode or
- Coordinator, it will become harder to make workload reasonably
- balanced.
+ Because the GTM receives all the requests to begin/end transactions and to
+ refer to sequence values, you should run the GTM on a separate server. If
+ you run the GTM on the same server as a Datanode or Coordinator, it
+ becomes harder to keep the workload reasonably balanced.
</para>
<para>
- Then, you should determine GTM's working directory.
- Please create this directory before you run GTM.
+ Then, you should determine the GTM's working directory. Please create
+ this directory before you run the GTM.
</para>
</listitem>
</varlistentry>
@@ -471,11 +468,9 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>Listen address and port of GTM</term>
<listitem>
<para>
- Next, you should determine listen address and port
- of GTM.
- Listen address can be either the IP address or host name which
- receives request from other component,
- typically <filename>GTM-Proxy</filename>.
+ Next, you should determine the listen address and port of the GTM. The
+ listen address can be either the IP address or the host name on which the
+ GTM receives requests from other components, typically <filename>GTM-Proxy</filename>.
</para>
</listitem>
</varlistentry>
@@ -498,29 +493,30 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
</variablelist>
<para>
- When this is determined, you can initialize GTM with the command <xref
- linkend="app-initgtm">,
- for example:
+ When this is determined, you can initialize the GTM with the command <xref
+ linkend="app-initgtm">, for example:
<screen>
<prompt>$</> <userinput>initgtm -Z gtm -D /usr/local/pgsql/data_gtm</userinput>
</screen>
</para>
<para>
- All the parameters related to GTM can be modified in <filename>gtm.conf</filename>
- located in data folder initialized by <command>initgtm</command>.
+ All the parameters related to the GTM can be modified in
+ <filename>gtm.conf</filename>, located in the data folder initialized by
+ <command>initgtm</command>.
</para>
<para>
- Then you can start GTM as follows:
+ Then you can start the GTM as follows:
<!-- Check precise parameters -->
<screen>
<prompt>$</> <userinput>gtm -D /usr/local/pgsql/data_gtm</userinput>
</screen>
- where <option>-D</> option specifies working directory of GTM.
+ where the <option>-D</> option specifies the working directory of the GTM.
</para>
<para>
- Alternatively, GTM can be started using <command>gtm_ctl</>, for example:
+ Alternatively, the GTM can be started using <command>gtm_ctl</>, for
+ example:
<screen>
<prompt>$</> <userinput>gtm_ctl -Z gtm start -D /usr/local/pgsql/data_gtm</userinput>
</screen>
@@ -528,37 +524,37 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
</sect2>
<sect2 id="gtm-proxy-start">
- <title>Starting GTM-Proxy</title>
+ <title>Starting a GTM-Proxy</title>
<para>
- GTM-Proxy is not a mandatory component of Postgres-XL cluster but
- it can be used to group messages between GTM and cluster nodes,
- reducing workload and the number of packages exchanged through network.
+ A GTM-Proxy is not a mandatory component of a Postgres-XL cluster, but it
+ can be used to group messages between the GTM and cluster nodes, reducing
+ workload and the number of packets exchanged over the network.
</para>
<para>
- As described in the previous section, <filename>GTM-Proxy</> needs
- its own listen address, port, working directory and GTM-Proxy ID,
- which should be unique and begins with one.
- In addition, you should determine how many working threads to
- run.
- You should also use GTM's address and port to start <filename>GTM-Proxy</>.
+ As described in the previous section, a <filename>GTM-Proxy</> needs its
+ own listen address, port, working directory and GTM-Proxy ID, which should
+ be unique and begin with one. In addition, you should determine how many
+ worker threads to run. You also need the GTM's address and port to
+ start <filename>GTM-Proxy</>.
</para>
<para>
- Then, you need first to initialize GTM-Proxy with <command>initgtm</command>,
- for example:
+ Then, you first need to initialize a GTM-Proxy with
+ <command>initgtm</command>, for example:
<screen>
<prompt>$</> <userinput>initgtm -Z gtm_proxy -D /usr/local/pgsql/data_gtm_proxy</userinput>
</screen>
</para>
<para>
- All the parameters related to GTM-Proxy can be modified in <filename>gtm_proxy.conf</filename>
- located in data folder initialized by <command>initgtm</command>.
+ All the parameters related to a GTM-Proxy can be modified in
+ <filename>gtm_proxy.conf</filename>, located in the data folder initialized
+ by <command>initgtm</command>.
</para>
<para>
- Then, you can start <filename>GTM-Proxy</> like:
+ Then, you can start a <filename>GTM-Proxy</> like:
<screen>
<prompt>$</> <userinput>gtm_proxy -D /usr/local/pgsql/data_gtm_proxy</userinput>
</screen>
@@ -567,8 +563,8 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
</para>
<para>
- Alternatively, you can start GTM-Proxy using <filename>gtm_ctl</>
- as follows:
+ Alternatively, you can start a GTM-Proxy using <filename>gtm_ctl</> as
+ follows:
<screen>
<prompt>$</> <userinput>gtm_ctl start -Z gtm_proxy -D /usr/local/pgsql/data_gtm_proxy</userinput>
</screen>
@@ -587,10 +583,9 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
</para>
<para>
- Datanode is almost native <productname>PostgreSQL</> with some
- extension.
- Additional options in <filename>postgresql.conf</> for the
- Datanode are as follows:
+ A Datanode is almost native <productname>PostgreSQL</> with some extensions.
+ Additional options in <filename>postgresql.conf</> for the Datanode are as
+ follows:
</para>
<variablelist>
@@ -601,13 +596,10 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<para>
This value is not just a number of connections you expect to each
- Coordinator.
- Each Coordinator backend has a chance to connect to all the
- Datanode.
- You should specify number of total connections whole Coordinator
- may accept.
- For example, if you have five Coordinators and each of them may
- accept forty connections, you should specify 200 as this
+ Coordinator. Each Coordinator backend may connect to every
+ Datanode. You should specify the total number of connections that all the
+ Coordinators together may open. For example, if you have five Coordinators
+ and each of them may accept forty connections, you should specify 200 as this
parameter value.
</para>
</listitem>
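The sizing rule above is simple arithmetic; a small sketch (the helper name is ours, the numbers are from the example in the text):

```python
def datanode_max_connections(num_coordinators, connections_per_coordinator):
    """Each Coordinator backend may connect to every Datanode, so a
    Datanode must be able to accept the cluster-wide total."""
    return num_coordinators * connections_per_coordinator

# Five Coordinators, each accepting forty connections:
print(datanode_max_connections(5, 40))  # -> 200
```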
@@ -617,10 +609,9 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>max_prepared_transactions</term>
<listitem>
<para>
- Even though your application does not intend to
- issue <command>PREPARE TRANSACTION</>, Coordinator may issue this
- internally when more than one Datanode are involved.
- You should specify this parameter the same value
+ Even though your application does not intend to issue <command>PREPARE
+ TRANSACTION</>, a Coordinator may issue it internally when more than one
+ Datanode is involved. You should set this parameter to the same value
as <filename>max_connections</>.
</para>
</listitem>
@@ -630,8 +621,7 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>pgxc_node_name</term>
<listitem>
<para>
- GTM needs to identify each Datanode, as specified by
- this parameter.
+ The GTM needs to identify each Datanode, as specified by this parameter.
The value should be unique and start with one.
</para>
</listitem>
@@ -651,9 +641,8 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>gtm_port</term>
<listitem>
<para>
- Specify the port number of GTM-Proxy, as specified
- in <option>-p</> option in <command>gtm_proxy</>
- or <command>gtm_ctl</>.
+ Specify the port number of the GTM-Proxy, as specified by the <option>-p</>
+ option of <command>gtm_proxy</> or <command>gtm_ctl</>.
</para>
</listitem>
</varlistentry>
@@ -662,7 +651,7 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>gtm_host</term>
<listitem>
<para>
- Specify the host name or IP address of GTM-Proxy, as specified
+ Specify the host name or IP address of the GTM-Proxy, as specified
in <option>-h</> option in <command>gtm_proxy</>
or <command>gtm_ctl</>.
</para>
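Putting the Datanode parameters above together, a <filename>postgresql.conf</> fragment might look like this (all values are purely illustrative):

```
# Datanode postgresql.conf sketch
max_connections = 200              # total that all Coordinators may open
max_prepared_transactions = 200    # same value as max_connections
pgxc_node_name = 'datanode_1'      # unique name, known to the GTM
gtm_host = 'localhost'             # GTM-Proxy host (-h of gtm_proxy)
gtm_port = 6666                    # GTM-Proxy port (-p of gtm_proxy)
```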
@@ -705,7 +694,7 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<sect2 id="Coordinator-configuration">
<title>Configuring Coordinators</title>
<para>
- Although Coordinator and Datanode shares the same binary, their
+ Although Coordinators and Datanodes share the same binary, their
configuration is a little different due to their functionalities.
</para>
@@ -715,10 +704,9 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>max_connections</term>
<listitem>
<para>
- You don't have to take other Coordinator or Datanode into
- account.
- Just specify the number of connections the Coordinator accepts
- from applications.
+ You don't have to take other Coordinators or Datanodes into account. Just
+ specify the number of connections the Coordinator accepts from
+ applications.
</para>
</listitem>
</varlistentry>
@@ -736,8 +724,7 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>pgxc_node_name</term>
<listitem>
<para>
- GTM needs to identify each Datanode, as specified by
- this parameter.
+ The GTM needs to identify each Coordinator, as specified by this parameter.
</para>
</listitem>
</varlistentry>
@@ -746,10 +733,9 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>port</term>
<listitem>
<para>
- Because both Coordinator and Datanode may run on the same server,
- you may want to assign separate port number to the Coordinator.
- It may be convenient to use default value of PostgreSQL listen
- port.
+ Because both a Coordinator and a Datanode may run on the same server, you
+ may want to assign a separate port number to the Coordinator. It may be
+ convenient to use the default PostgreSQL listen port value.
</para>
</listitem>
</varlistentry>
@@ -758,9 +744,8 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>gtm_port</term>
<listitem>
<para>
- Specify the port number of GTM-Proxy, as specified
- in <option>-p</> option in <command>gtm_proxy</>
- or <command>gtm_ctl</>.
+ Specify the port number of the GTM-Proxy, as specified by the <option>-p</>
+ option of <command>gtm_proxy</> or <command>gtm_ctl</>.
</para>
</listitem>
</varlistentry>
@@ -769,9 +754,8 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>gtm_host</term>
<listitem>
<para>
- Specify the host name or IP address of GTM-Proxy, as specified
- in <option>-h</> option in <command>gtm_proxy</>
- or <command>gtm_ctl</>.
+ Specify the host name or IP address of the GTM-Proxy, as specified by the
+ <option>-h</> option of <command>gtm_proxy</> or <command>gtm_ctl</>.
</para>
</listitem>
</varlistentry>
@@ -790,11 +774,10 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>max_pool_size</term>
<listitem>
<para>
- Coordinator maintains connections to Datanode as a pool.
- This parameter specifies max number of connections the
- Coordinator maintains.
- Specify <option>max_connection</> value of remote nodes as this
- parameter value.
+ A Coordinator maintains connections to Datanodes as a pool. This
+ parameter specifies the maximum number of connections the Coordinator
+ maintains. Specify the <option>max_connections</> value of the remote
+ nodes as this parameter value.
</para>
</listitem>
</varlistentry>
@@ -803,9 +786,8 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>min_pool_size</term>
<listitem>
<para>
- This is the minimum number of Coordinator to remote node connections
- maintained by the pooler.
- Typically specify 1.
+ This is the minimum number of Coordinator-to-remote-node connections
+ maintained by the pooler. Typically specify 1.
</para>
</listitem>
</varlistentry>
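Likewise, a Coordinator's <filename>postgresql.conf</> fragment might combine the parameters above (illustrative values; here max_pool_size mirrors the remote nodes' max_connections):

```
# Coordinator postgresql.conf sketch
max_connections = 100      # connections accepted from applications
pgxc_node_name = 'coord_1'
port = 5432                # default PostgreSQL listen port
gtm_host = 'localhost'     # GTM-Proxy host
gtm_port = 6666            # GTM-Proxy port
max_pool_size = 200        # max_connections of the remote nodes
min_pool_size = 1
```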
@@ -867,16 +849,15 @@ su - postgres -c "/usr/local/pgsql/bin/pg_ctl start -l logfile -D /usr/local/pgs
<term>sequence_range</term>
<listitem>
<para>
- This parameter is used to get several sequence values
- at once from GTM. This greatly speeds up COPY and INSERT SELECT
- operations where the target table uses sequences.
- <productname>Postgres-XL</productname> will not use this entire
- amount at once, but will increase the request size over
- time if many requests are done in a short time frame in
- the same session.
- After a short time without any sequence requests, decreases back down to 1.
- Note that any settings here are overriden if the CACHE clause was
- used in <xref linkend='sql-createsequence'> or <xref linkend='sql-altersequence'>.
+ This parameter is used to get several sequence values at once from the
+ GTM. This greatly speeds up COPY and INSERT SELECT operations where the
+ target table uses sequences. <productname>Postgres-XL</productname>
+ will not use this entire amount at once, but will increase the request
+ size over time if many requests are done in a short time frame in the
+ same session. After a short time without any sequence requests, the
+ request size decreases back down to 1. Note that this setting is
+ overridden if
+ the CACHE clause was used in <xref linkend='sql-createsequence'> or
+ <xref linkend='sql-altersequence'>.
</para>
</listitem>
</varlistentry>
diff --git a/doc/src/sgml/start.sgml b/doc/src/sgml/start.sgml
index 5df5daa93b..cf285f8777 100644
--- a/doc/src/sgml/start.sgml
+++ b/doc/src/sgml/start.sgml
@@ -122,10 +122,10 @@
</para>
<para>
- The GTM could be single point of failure (SPOF). To prevent this, you
- can run another GTM as GTM-Standby to backup GTM's status. When
- GTM fails, GTM-Proxy can switch to the standby on the fly. This
- will be described in detail in high-availability sections.
+ The GTM could be a single point of failure (SPOF). To prevent this, you can
+ run another GTM as a GTM-Standby to back up the GTM's status. When the GTM
+ fails, the GTM-Proxy can switch to the standby on the fly. This is described
+ in detail in the high-availability sections.
</para>
<para>