-rw-r--r--  doc-xc/src/sgml/adminpack.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/advanced.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/arch-dev.sgmlin  124
-rw-r--r--  doc-xc/src/sgml/auto-explain.sgmlin  8
-rw-r--r--  doc-xc/src/sgml/backup.sgmlin  28
-rw-r--r--  doc-xc/src/sgml/config.sgmlin  64
-rw-r--r--  doc-xc/src/sgml/datatype.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ddl.sgmlin  36
-rw-r--r--  doc-xc/src/sgml/func.sgmlin  30
-rw-r--r--  doc-xc/src/sgml/high-availability.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/history.sgmlin  14
-rw-r--r--  doc-xc/src/sgml/indices.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/installation.sgmlin  132
-rw-r--r--  doc-xc/src/sgml/intro.sgmlin  62
-rw-r--r--  doc-xc/src/sgml/maintenance.sgmlin  18
-rw-r--r--  doc-xc/src/sgml/manage-ag.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/mvcc.sgmlin  32
-rw-r--r--  doc-xc/src/sgml/oid2name.sgmlin  6
-rw-r--r--  doc-xc/src/sgml/pageinspect.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/perform.sgmlin  18
-rw-r--r--  doc-xc/src/sgml/pgarchivecleanup.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/pgbuffercache.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/pgfreespacemap.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/pgrowlocks.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/pgstattuple.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/pltcl.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/query.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/recovery-config.sgmlin  6
-rw-r--r--  doc-xc/src/sgml/ref/alter_database.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/alter_node.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/checkpoint.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/cluster.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/commit_prepared.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/copy.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/create_aggregate.sgmlin  10
-rw-r--r--  doc-xc/src/sgml/ref/create_database.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/create_function.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/create_index.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/ref/create_node.sgmlin  6
-rw-r--r--  doc-xc/src/sgml/ref/create_nodegroup.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/ref/create_table.sgmlin  16
-rw-r--r--  doc-xc/src/sgml/ref/create_table_as.sgmlin  8
-rw-r--r--  doc-xc/src/sgml/ref/create_tablespace.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/drop_database.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/explain.sgmlin  6
-rw-r--r--  doc-xc/src/sgml/ref/gtm.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/ref/gtm_proxy.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/ref/load.sgmlin  6
-rw-r--r--  doc-xc/src/sgml/ref/pg_controldata.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/ref/pg_ctl-ref.sgmlin  10
-rw-r--r--  doc-xc/src/sgml/ref/pg_resetxlog.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/ref/pgxc_clean-ref.sgmlin  6
-rw-r--r--  doc-xc/src/sgml/ref/pgxc_ddl.sgmlin  36
-rw-r--r--  doc-xc/src/sgml/ref/postgres-ref.sgmlin  8
-rw-r--r--  doc-xc/src/sgml/ref/postmaster.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/prepare_transaction.sgmlin  4
-rw-r--r--  doc-xc/src/sgml/ref/rollback_prepared.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/vacuum.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/ref/vacuumdb.sgmlin  2
-rw-r--r--  doc-xc/src/sgml/release-xc-1.0.sgmlin  6
-rw-r--r--  doc-xc/src/sgml/runtime.sgmlin  106
-rw-r--r--  doc-xc/src/sgml/start.sgmlin  36
-rw-r--r--  doc-xc/src/sgml/wal.sgmlin  10
63 files changed, 474 insertions, 474 deletions
diff --git a/doc-xc/src/sgml/adminpack.sgmlin b/doc-xc/src/sgml/adminpack.sgmlin
index 33e85f9268..fef740a815 100644
--- a/doc-xc/src/sgml/adminpack.sgmlin
+++ b/doc-xc/src/sgml/adminpack.sgmlin
@@ -38,7 +38,7 @@ int4 pg_catalog.pg_logfile_rotate()
<note>
<para>
- Functions of this module run only on the coordinator you're connecting.
+ Functions of this module run only on the Coordinator you're connecting.
</para>
</note>
diff --git a/doc-xc/src/sgml/advanced.sgmlin b/doc-xc/src/sgml/advanced.sgmlin
index d006de9dc1..8725aa8202 100644
--- a/doc-xc/src/sgml/advanced.sgmlin
+++ b/doc-xc/src/sgml/advanced.sgmlin
@@ -161,7 +161,7 @@ DETAIL: Key (city)=(Berkeley) is not present in table "cities".
distributes each row of tables based upon the value of the first
column of the table. You can choose any column as a basis of
table distribution, or you can have copies of a table in all the
- datanodes.
+ Datanodes.
</para>
<para>
Please refer to <xref linkend="sql-select"> for details.
diff --git a/doc-xc/src/sgml/arch-dev.sgmlin b/doc-xc/src/sgml/arch-dev.sgmlin
index 768208cc29..20d3293076 100644
--- a/doc-xc/src/sgml/arch-dev.sgmlin
+++ b/doc-xc/src/sgml/arch-dev.sgmlin
@@ -626,11 +626,11 @@
<para>
Coordinator is an entry point
to <productname>Postgres-XC</productname> from applications.
- You can configure more than one coordinators in the
+ You can configure more than one Coordinators in the
same <productname>Postgres-XC</productname>. With the help
of GTM, they provide transparent concurrency and integrity of
transactions globally. Application can choose any
- coordinator to connect with. Any coordinator provides the
+ Coordinator to connect with. Any Coordinator provides the
same view of the database.
</para>
</listitem>
@@ -641,11 +641,11 @@
<para>
Datanode stores user data. As described
in <xref linkend="whatis-in-short">
- and <xref linkend="SQL-CREATETABLE">, more than one datanodes
+ and <xref linkend="SQL-CREATETABLE">, more than one Datanodes
can be configured. Each table can be replicated or
- distributed among datanodes. A table is distributed, you can
+ distributed among Datanodes. A table is distributed, you can
choose a column as the distribute key, whose value is used to
- determine which datanode each row should be stored.
+ determine which Datanode each row should be stored.
</para>
</listitem>
</varlistentry>
@@ -762,7 +762,7 @@
(running, committed, aborted etc.) to provide snapshot globally
(global snapshot). Please note that global snapshot
includes <varname>GXID</varname> initiated by other
- coordinators or datanodes. This is needed because some older
+ Coordinators or Datanodes. This is needed because some older
transaction may visit new server after a while. In this case,
if <varname>GXID</varname> of such a transaction is not
included in the snapshot, this transaction may be regarded as
@@ -833,7 +833,7 @@
network. GTM architecture is intended to be used with Gigabit
local network. We encourage to install Postgres-XC with local
Gigabit network with minimum latency, that is, use as fewer
- switches involved in the connection among GTM, coordinator and
+ switches involved in the connection among GTM, Coordinator and
data nodes.
</para>
@@ -854,10 +854,10 @@
<step>
<para>
- GTM opens a port to accept connection from each coordinator and
- datanode backend. When GTM accepts a connection, it creates a
+ GTM opens a port to accept connection from each Coordinator and
+ Datanode backend. When GTM accepts a connection, it creates a
thread (GTM Thread) to handle request to GTM from the connected
- coordinator backend.
+ Coordinator backend.
</para>
</step>
@@ -865,13 +865,13 @@
<para>
GTM Thread receives each request, record it and
sends <varname>GXID</varname>, <emphasis>snapshot</emphasis>
- and other response to the coordinator backend.
+ and other response to the Coordinator backend.
</para>
</step>
<step>
<para>
- They are repeated until the coordinator backend requests
+ They are repeated until the Coordinator backend requests
disconnect.
</para>
</step>
@@ -885,19 +885,19 @@
<para>
You may have been noticed that each transaction is issuing
request to GTM so frequently and we can collect them into single
- block of requests in each coordinator to reduce the amount of
+ block of requests in each Coordinator to reduce the amount of
interaction, as <emphasis>GTM-Proxy</emphasis>.
</para>
<para>
- In this configuration, each coordinator and datanode backend
+ In this configuration, each Coordinator and Datanode backend
does not connect to GTM directly. Instead, we have GTM Proxy
- between GTM and coordinator backend to group multiple requests
+ between GTM and Coordinator backend to group multiple requests
and responses. GTM Proxy, like GTM explained in the previous
- sections, accepts connection from the coordinator
+ sections, accepts connection from the Coordinator
backend. However, it does not create new thread. The following
paragraphs explains how GTM Proxy is initialized and how it
- handles requests from coordinator backends.
+ handles requests from Coordinator backends.
</para>
<para>
@@ -932,20 +932,20 @@
</procedure>
<para>
- When each coordinator backend requests for connection, Proxy
+ When each Coordinator backend requests for connection, Proxy
Main Thread assigns a GTM Proxy Thread to handle
request. Therefore, one GTM Proxy Thread handles multiple
- coordinator backends. If a coordinator has one hundred
- coordinator backends and one GTM Proxy Thread, this thread takes
- care of one hundred coordinator backend.
+ Coordinator backends. If a Coordinator has one hundred
+ Coordinator backends and one GTM Proxy Thread, this thread takes
+ care of one hundred Coordinator backend.
</para>
<para>
- Then GTM Proxy Thread scans all the requests from coordinator
- backend. If coordinator is more busy, it is expected to capture
+ Then GTM Proxy Thread scans all the requests from Coordinator
+ backend. If Coordinator is more busy, it is expected to capture
more requests in a single scan. Therefore, the proxy can group
many requests into single block of requests, to reduce the
- number of interaction between GTM and the coordinator.
+ number of interaction between GTM and the Coordinator.
</para>
<para>
@@ -958,29 +958,29 @@
</sect3>
</sect2>
- <sect2 id="xc-overview-coordinator">
+ <sect2 id="xc-overview-Coordinator">
<title>Coordinator</title>
&xconly;
<para>
Coordinator handles SQL statements from applications and
- determine which datanode should be involved and generates local
- SQL statements for each datanode. In the most simplest case, if
- single datanode is involved, the coordinator simply proxies
- incoming statement to the datanode. In more complicated case,
- for example, if the target datanode cannot be determined, then
- the coordinator generates local statements for each datanode,
- collects the result to materialize at the coordinator for further
- handling. In this case, the coordinator will try to optimize the
+ determine which Datanode should be involved and generates local
+ SQL statements for each Datanode. In the most simplest case, if
+ single Datanode is involved, the Coordinator simply proxies
+ incoming statement to the Datanode. In more complicated case,
+ for example, if the target Datanode cannot be determined, then
+ the Coordinator generates local statements for each Datanode,
+ collects the result to materialize at the Coordinator for further
+ handling. In this case, the Coordinator will try to optimize the
plan by
<itemizedlist>
<listitem>
<para>
- Pushdown <command>WHERE</command> clause to datanodes,
+ Pushdown <command>WHERE</command> clause to Datanodes,
</para>
</listitem>
<listitem>
<para>
- Pushdown <emphasis>joins</emphasis> to datanodes,
+ Pushdown <emphasis>joins</emphasis> to Datanodes,
</para>
</listitem>
<listitem>
@@ -995,8 +995,8 @@
</listitem>
</itemizedlist>
- If a transaction is involved by more than one datanodes and/or
- coordinators, the coordinator will handle the transaction with
+ If a transaction is involved by more than one Datanodes and/or
+ Coordinators, the Coordinator will handle the transaction with
two-phase commit protocol internally.
</para>
@@ -1005,43 +1005,43 @@
functions, <productname>Postgres-XC</productname> introduced new
function collection function between existing transition function
and finalize function. Collection function runs on the
- coordinator to collect all the intermediate results from involved
- datanodes. For details, see <xref linkend="xaggr">
+ Coordinator to collect all the intermediate results from involved
+ Datanodes. For details, see <xref linkend="xaggr">
and <xref linkend="SQL-CREATEAGGREGATE">.
</para>
<para>
- In the case of reading replicated tables, coordinator can choose
- any datanode to read. The most efficient way is to select one
+ In the case of reading replicated tables, Coordinator can choose
+ any Datanode to read. The most efficient way is to select one
running in the same hardware or virtual machine. This is
- called <emphasis>preferred datanode</emphasis> and can be
- specified by a GUC local to each coordinator.
+ called <emphasis>preferred Datanode</emphasis> and can be
+ specified by a GUC local to each Coordinator.
</para>
<para>
On the other hand, in the case of writing replicated tables, all
- the coordinators choose the same datanode to begin with to avoid
+ the Coordinators choose the same Datanode to begin with to avoid
update conflicts. This is called <emphasis>primary
- datanode</emphasis>.
+ Datanode</emphasis>.
</para>
<para>
Coordinators also take care of DDL statements. Because DDL
statements handles system catalogs, which are replicated in all
- the coordinators and datanodes, they are proxied to all the
- coordinators and datanodes. To synchronize the catalog update in
- all the nodes, the coordinator handles DDL with two-phase commit
+ the Coordinators and Datanodes, they are proxied to all the
+ Coordinators and Datanodes. To synchronize the catalog update in
+ all the nodes, the Coordinator handles DDL with two-phase commit
protocol internally.
</para>
</sect2>
- <sect2 id="xc-overview-datanode">
+ <sect2 id="xc-overview-Datanode">
<title>Datanode</title>
&xconly;
<para>
- While coordinators handle cluster-wide SQL statements, datanodes
- take care of just local issues. In this sense, datanodes are
+ While Coordinators handle cluster-wide SQL statements, Datanodes
+ take care of just local issues. In this sense, Datanodes are
essentially <productname>PostgreSQL</productname> servers except
that transaction management information is obtained from GTM, as
well as other global value.
@@ -1054,7 +1054,7 @@
<title>Coordinator And Datanode Connection</title>
<para>
- The number of connection between coordinator and data node may
+ The number of connection between Coordinator and data node may
increase from time to time. This may leave unused connection and
waste system resources. Repeating real connect and disconnect
requires data node backend initialization which increases latency
@@ -1062,28 +1062,28 @@
</para>
<para>
- For example, as in the case of GTM, if each coordinator has one
- hundred connections to applications and we have ten coordinators,
- after a while, each coordinator may have connection to each data
- node. It means that each coordinator backend has ten connections
- to coordinators and each coordinator has one thousand (10 x 10)
- connections to coordinators.
+ For example, as in the case of GTM, if each Coordinator has one
+ hundred connections to applications and we have ten Coordinators,
+ after a while, each Coordinator may have connection to each data
+ node. It means that each Coordinator backend has ten connections
+ to Coordinators and each Coordinator has one thousand (10 x 10)
+ connections to Coordinators.
</para>
<para>
Because we consume much more resources for locks and other
control information per backend and only a few of such connection
is active at a given time, it is not a good idea to hold such
- unused connection between coordinator and data node.
+ unused connection between Coordinator and data node.
</para>
<para>
To improve this, Postgres-XC is equipped with connection pooler
- between coordinator and data node. When a coordinator backend
+ between Coordinator and data node. When a Coordinator backend
requires connection to a data node, the pooler looks for
appropriate connection from the pool. If there's an available
- one, the pooler assigns it to the coordinator backend. When the
- connection is no longer needed, the coordinator backend returns
+ one, the pooler assigns it to the Coordinator backend. When the
+ connection is no longer needed, the Coordinator backend returns
the connection to the pooler. Pooler does not disconnect the
connection. It keeps the connection to the pool for later reuse,
keeping data node backend running.
diff --git a/doc-xc/src/sgml/auto-explain.sgmlin b/doc-xc/src/sgml/auto-explain.sgmlin
index 771e2980e4..e926a70898 100644
--- a/doc-xc/src/sgml/auto-explain.sgmlin
+++ b/doc-xc/src/sgml/auto-explain.sgmlin
@@ -36,10 +36,10 @@ LOAD 'auto_explain';
&xconly;
<!## XC>
<para>
- To log plans on datanodes, you must preload this module in each
- datanode. This module will log local plans of each node. For
- example, coordinator log will include the plan for coordinator only.
- Corresponding plan in datanodes will be found in each datanode's
+ To log plans on Datanodes, you must preload this module in each
+ Datanode. This module will log local plans of each node. For
+ example, Coordinator log will include the plan for Coordinator only.
+ Corresponding plan in Datanodes will be found in each Datanode's
log.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/backup.sgmlin b/doc-xc/src/sgml/backup.sgmlin
index 1179ccf95c..1f0cab70a4 100644
--- a/doc-xc/src/sgml/backup.sgmlin
+++ b/doc-xc/src/sgml/backup.sgmlin
@@ -110,7 +110,7 @@ pg_dump <replaceable class="parameter">dbname</replaceable> &gt; <replaceable cl
<para>
In <productname>Postgres-XC</>, <application>pg_dump</>
and <application>pg_dumpall</> backs up all the information stored
- both in coordinators and datanodes.
+ both in Coordinators and Datanodes.
</para>
</important>
<!## end>
@@ -336,9 +336,9 @@ pg_restore -d <replaceable class="parameter">dbname</replaceable> <replaceable c
<!## XC>
&xconly;
<para>
- File system level backup covers only each coordinator or datanode.
+ File system level backup covers only each Coordinator or Datanode.
To make file system level backup, you should backup each
- coordinator and datanode manually.
+ Coordinator and Datanode manually.
</para>
<!## end>
@@ -473,9 +473,9 @@ tar -cf backup.tar /usr/local/pgsql/data
&xconly;
<para>
This section describes PITR for <productname>PostgreSQL</>.
- Because coordinator and datanode of <productname>Postgres-XC</> are
+ Because Coordinator and Datanode of <productname>Postgres-XC</> are
essentially <productname>PostgreSQL</productname>server, you can do
- PITR for each coordinator and datanode manually.
+ PITR for each Coordinator and Datanode manually.
</para>
<!## end>
@@ -570,9 +570,9 @@ tar -cf backup.tar /usr/local/pgsql/data
&xconly;
<para>
This section describes PITR for <productname>PostgreSQL</>.
- Because coordinator and datanode of <productname>Postgres-XC</> are
+ Because Coordinator and Datanode of <productname>Postgres-XC</> are
essentially <productname>PostgreSQL</productname>server, you can
- set up WAL archiving for each coordinator and datanode manually.
+ set up WAL archiving for each Coordinator and Datanode manually.
</para>
<!## end>
@@ -773,10 +773,10 @@ archive_command = 'test ! -f /mnt/server/archivedir/%f &amp;&amp; cp %p /mnt/ser
&xconly;
<para>
This section describes how to make a base backup of single
- datanode or coordinator. Please note that you should take base
- backup of all the datanodes and coordinators. Also please note
+ Datanode or Coordinator. Please note that you should take base
+ backup of all the Datanodes and Coordinators. Also please note
that you don't have to take base backups at exactly the same time.
- You may take base backup of each datanode or coordinator one after
+ You may take base backup of each Datanode or Coordinator one after
another, or you may take some of them at the same time. You don't
have to do it at exactly the same time.
</para>
@@ -1001,10 +1001,10 @@ SELECT pg_stop_backup();
&xconly;
<para>
This section describes recovering with continuous archive backup
- for <productname>PostgreSQL</>. Because coordinator and datanode
+ for <productname>PostgreSQL</>. Because Coordinator and Datanode
of <productname>Postgres-XC</> are
essentially <productname>PostgreSQL</productname>server, you can do
- this for each coordinator and datanode manually.
+ this for each Coordinator and Datanode manually.
</para>
<!## end>
@@ -1200,9 +1200,9 @@ restore_command = 'cp /mnt/server/archivedir/%f %p'
<!## XC>
&xconly;
<para>
- Because coordinator and datanode of <productname>Postgres-XC</> are
+ Because Coordinator and Datanode of <productname>Postgres-XC</> are
essentially <productname>PostgreSQL</productname>server, you can
- apply timelines for each coordinator and datanode manually.
+ apply timelines for each Coordinator and Datanode manually.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/config.sgmlin b/doc-xc/src/sgml/config.sgmlin
index 41e0d2eec1..59b1e0066e 100644
--- a/doc-xc/src/sgml/config.sgmlin
+++ b/doc-xc/src/sgml/config.sgmlin
@@ -26,8 +26,8 @@
<para>
There are many configuration parameters that affect the behavior of
the database system. In the first section of this chapter, we
- describe how to set configuration parameters for coordinator and
- datanodes. The subsequent sections discuss each parameter in
+ describe how to set configuration parameters for Coordinator and
+ Datanodes. The subsequent sections discuss each parameter in
detail.
</para>
<!## end>
@@ -123,9 +123,9 @@ include 'filename'
until the server is restarted.
<!## end>
<!## XC>
- The configuration file is reread whenever the main coordinator/datanode process receives a
+ The configuration file is reread whenever the main Coordinator/Datanode process receives a
<systemitem>SIGHUP</> signal (which is most easily sent by means
- of <literal>pg_ctl reload</>). The main coordinator/datanode process
+ of <literal>pg_ctl reload</>). The main Coordinator/Datanode process
also propagates this signal to all currently running server
processes so that existing sessions also get the new
value. Alternatively, you can send the signal to a single server
@@ -423,13 +423,13 @@ SET ENABLE_SEQSCAN TO OFF;
<!## XC>
<para>
- In the case of the coordinator, this parameter determines how many
- connections can each coordinator accept.
+ In the case of the Coordinator, this parameter determines how many
+ connections can each Coordinator accept.
</para>
<para>
- In the case of the datanode, number of connection to each
- datanode may become as large as <varname>max_connections</>
- multiplied by the number of coordinators.
+ In the case of the Datanode, number of connection to each
+ Datanode may become as large as <varname>max_connections</>
+ multiplied by the number of Coordinators.
</para>
<!## end>
@@ -442,7 +442,7 @@ SET ENABLE_SEQSCAN TO OFF;
adjust those parameters, if necessary.
<!## end>
<!## XC>
- Increasing this parameter might cause <filename>coordinator</> or <filename>datanode</>
+ Increasing this parameter might cause <filename>Coordinator</> or <filename>Datanode</>
to request more <systemitem class="osname">System V</> shared
memory or semaphores than your operating system's default configuration
allows. See <xref linkend="sysvipc"> for information on how to
@@ -480,7 +480,7 @@ SET ENABLE_SEQSCAN TO OFF;
<!## end>
<!## XC>
Determines the number of connection <quote>slots</quote> that
- are reserved for connections by <filename>coordinator</> or <filename>datanode</>
+ are reserved for connections by <filename>Coordinator</> or <filename>Datanode</>
superusers. At most <xref linkend="guc-max-connections">
connections can ever be active simultaneously. Whenever the
number of active concurrent connections is at least
@@ -1039,11 +1039,11 @@ SET ENABLE_SEQSCAN TO OFF;
<para>
Even though your application does not issue <command>PREPARE
- TRANSACTION</> explicitly, coordinator may generate this
+ TRANSACTION</> explicitly, Coordinator may generate this
command when an updating transaction involves more than one
- coordinators and datanodes.
+ Coordinators and Datanodes.
- For datanodes, you should specify this value as the same value
+ For Datanodes, you should specify this value as the same value
as <varname>max_connections</>.
</para>
@@ -2084,8 +2084,8 @@ SET ENABLE_SEQSCAN TO OFF;
Streaming replication has not been tested
with <productname>Postgres-XC</productname> yet. Because this
version of streaming replication is based upon asynchronous log shipping,
- there could be a risk to have the status of coordinators and
- datanodes inconsistent. The development team leaves the test
+ there could be a risk to have the status of Coordinators and
+ Datanodes inconsistent. The development team leaves the test
and the use of this entirely to users.
</para>
<!## end>
@@ -5563,7 +5563,7 @@ dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
&xconly;
<para>
<productname>Postgres-XC</> does not detect global deadlocks
- where multiple node (coordinators and/or datanodes) are
+ where multiple node (Coordinators and/or Datanodes) are
involved.
</para>
<!## end>
@@ -6626,7 +6626,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1)
&xconly;
<para>
<productname>Postgres-XC</> does not detect global deadlocks
- where multiple node (coordinators and/or datanodes) are
+ where multiple node (Coordinators and/or Datanodes) are
involved.
</para>
<!## end>
@@ -6833,9 +6833,9 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1)
&xconly;
<para>
Because <productname>Postgres-XC</> distribute data into multiple
- datanodes and multiple coordinators can accept transactions in
- parallel, specific coordinator must know what coordinators and
- datanodes to connect. Coordinator and datanode also must know
+ Datanodes and multiple Coordinators can accept transactions in
+ parallel, specific Coordinator must know what Coordinators and
+ Datanodes to connect. Coordinator and Datanode also must know
where they can request transaction information. The following
describes these additional GUC parameters.
</para>
@@ -6849,10 +6849,10 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1)
</indexterm>
<listitem>
<para>
- Specify the maximum connection pool of the coordinator to datanodes.
- Because each transaction can be involved by all the datanodes,
+ Specify the maximum connection pool of the Coordinator to Datanodes.
+ Because each transaction can be involved by all the Datanodes,
this parameter should at least be <varname>max_connections</>
- multiplied by number of datanodes.
+ multiplied by number of Datanodes.
</para>
</listitem>
</varlistentry>
@@ -6864,15 +6864,15 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1)
</indexterm>
<listitem>
<para>
- Minumum number of the connection from the coordinator to datanodes.
+ Minumum number of the connection from the Coordinator to Datanodes.
</para>
</listitem>
</varlistentry>
- <varlistentry id="guc-max-coordinators" xreflabel="max_coordinators">
- <term><varname>max_coordinators</varname> (<type>integer</type>)</term>
+ <varlistentry id="guc-max-Coordinators" xreflabel="max_Coordinators">
+ <term><varname>max_Coordinators</varname> (<type>integer</type>)</term>
<indexterm>
- <primary><varname>max_coordinators</> configuration parameter</primary>
+ <primary><varname>max_Coordinators</> configuration parameter</primary>
</indexterm>
<listitem>
<para>
@@ -6881,10 +6881,10 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1)
</listitem>
</varlistentry>
- <varlistentry id="guc-max-datanodes" xreflabel="max_datanodes">
- <term><varname>max_datanodes</varname> (<type>integer</type>)</term>
+ <varlistentry id="guc-max-Datanodes" xreflabel="max_Datanodes">
+ <term><varname>max_Datanodes</varname> (<type>integer</type>)</term>
<indexterm>
- <primary><varname>max_datanodes</> configuration parameter</primary>
+ <primary><varname>max_Datanodes</> configuration parameter</primary>
</indexterm>
<listitem>
<para>
@@ -6979,7 +6979,7 @@ LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1)
</para>
<para>
As a session parameter, this parameter is shared among all
- the connections of the session. It affects originating coordinator as
+ the connections of the session. It affects originating Coordinator as
well as remote nodes involved in session.
</para>
</listitem>
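
Taken together, the pooler and node-limit parameters touched in this file interact roughly as follows. Below is a minimal sketch of one Coordinator's postgresql.conf, assuming a cluster of two Coordinators and two Datanodes; the values are illustrative only, and the gtm_host/gtm_port names are assumptions chosen to match the GTM-Proxy settings used later in installation.sgmlin.

    # Sketch of one Coordinator's postgresql.conf (illustrative values only).
    max_connections = 100             # connections accepted by this Coordinator
    max_prepared_transactions = 100   # implicit two-phase commit needs these slots
    min_pool_size = 1                 # smallest number of pooled Datanode connections
    max_pool_size = 200               # at least max_connections x number of Datanodes
    max_coordinators = 16             # upper limit of Coordinators in the cluster
    max_datanodes = 16                # upper limit of Datanodes in the cluster
    gtm_host = 'localhost'            # assumed parameter names; point at the local GTM-Proxy
    gtm_port = 20002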
diff --git a/doc-xc/src/sgml/datatype.sgmlin b/doc-xc/src/sgml/datatype.sgmlin
index cfe4332da3..38ea4c6b65 100644
--- a/doc-xc/src/sgml/datatype.sgmlin
+++ b/doc-xc/src/sgml/datatype.sgmlin
@@ -4694,7 +4694,7 @@ SELECT * FROM pg_attribute
<!## XC>
<para>
Please note that <productname>Postgres-XC</> enforces OID identity only locally.
- Different object at different coordinator or datanode may be assigned the same
+ Different object at different Coordinator or Datanode may be assigned the same
OID value.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/ddl.sgmlin b/doc-xc/src/sgml/ddl.sgmlin
index 8c50ac13d8..2887dfb0b0 100644
--- a/doc-xc/src/sgml/ddl.sgmlin
+++ b/doc-xc/src/sgml/ddl.sgmlin
@@ -152,9 +152,9 @@ CREATE TABLE products (
<para>
In <productname>Postgres-XC</>, each table can be distributed or replicated
- among datanodes. By distributing tables, each query, if the target is
+ among Datanodes. By distributing tables, each query, if the target is
determined from the incoming statement, can be handled by single or small
- number of datanodes and more transactions can be handled in parallel.
+ number of Datanodes and more transactions can be handled in parallel.
If you replicate tables, and if they're more read than written, transactions
reading such tables can be handled in parallel.
</para>
@@ -167,7 +167,7 @@ CREATE TABLE products (
column.
By default, distribution column is the first column you specified in
<type>CREATE TABLE</type> statement and the column value is used to
- generate hash value as an index for datanode which accommodate the
+ generate hash value as an index for Datanode which accommodate the
row.
You can choose another distribution method such as <type>MODULO</> and
<type>ROUND ROBIN</>.
@@ -181,26 +181,26 @@ CREATE TABLE products (
) DISTRIBUTE BY HASH(product_no);
</programlisting>
In this case, the column <type>product_no</> is chosen as the distribute column and
- the target datanode of each row is determined based upon the hash value of the column.
- You can use <type>MODULO</> to specify modulo to test and determine the target datanode.
- You can also specify <type>ROUND ROBIN</> to determine the datanode by the order each
+ the target Datanode of each row is determined based upon the hash value of the column.
+ You can use <type>MODULO</> to specify modulo to test and determine the target Datanode.
+ You can also specify <type>ROUND ROBIN</> to determine the Datanode by the order each
row is inserted.
</para>
<para>
- Please note that with <type>HASH</> and <type>MODULO</>, coordinator have a chance to determine the
- location of target row from incoming statement. This reduces the number of involved datanodes
+ Please note that with <type>HASH</> and <type>MODULO</>, Coordinator have a chance to determine the
+ location of target row from incoming statement. This reduces the number of involved Datanodes
and can increase the number of transaction handled in parallel.
</para>
<para>
On the other hand, if exact value cannot be obtained from incoming statement, for example,
in the case of floating point number, <productname>Postgres-XC</> may fail to find precise
- target datanode and it is not recommended to use such column as a distribution column.
+ target Datanode and it is not recommended to use such column as a distribution column.
</para>
<para>
- To replicate a table into all the datanodes, specify <type>DISTRIBUTE BY REPLICATION</> as
+ To replicate a table into all the Datanodes, specify <type>DISTRIBUTE BY REPLICATION</> as
follows:
<programlisting>
CREATE TABLE products (
@@ -826,7 +826,7 @@ CREATE TABLE orders (
Please note that column with <type>REFERENCE</> must be the distribution column.
In this case, we cannot add <type>PRIMARY KEY</> to <type>order_id</>
because <type>PRIMARY KEY</type> must be the distribution column as well.
- This limitation is introduced because constraints are enforced only locally in each datanode,
+ This limitation is introduced because constraints are enforced only locally in each Datanode,
which will be resolved in the future.
</para>
<para>
@@ -1160,7 +1160,7 @@ CREATE TABLE circles (
<para>
Please note that <productname>Postgres-XC</> does not enforce
OID integrity among the cluster. OID is assigned locally in
- each coordinator and datanode. You can use this in expressions
+ each Coordinator and Datanode. You can use this in expressions
but you should not expect OID value is the same throughout the
<productname>XC</> cluster.
</para>
@@ -1189,7 +1189,7 @@ CREATE TABLE circles (
<para>
Please note that <productname>Postgres-XC</> does not enforce
OID integrity among the cluster. OID is assigned locally in
- each coordinator and datanode. You can use this in expressions
+ each Coordinator and Datanode. You can use this in expressions
but you should not expect OID value is the same throughout the
<productname>XC</> cluster.
</para>
@@ -1277,11 +1277,11 @@ CREATE TABLE circles (
<!## XC>
&xconly;
<para>
- In <productname>Postgres-XC</>, ctid is local to coordinators
- and datanodes. It is not a good practice to use this value in
+ In <productname>Postgres-XC</>, ctid is local to Coordinators
+ and Datanodes. It is not a good practice to use this value in
SQL statements. In very restricted situation, for example, when
- you query the database specifying the target coordinator or
- datanode using <type>EXECUTE DIRECT</> statement, you may use
+ you query the database specifying the target Coordinator or
+ Datanode using <type>EXECUTE DIRECT</> statement, you may use
this value.
</para>
<!## end>
@@ -1343,7 +1343,7 @@ CREATE TABLE circles (
<para>
Again, <productname>Postgres-XC</> does not enforce
OID integrity among the cluster. OID is assigned locally in
- each coordinator and datanode. You can use this in expressions
+ each Coordinator and Datanode. You can use this in expressions
but you should not expect OID value is the same throughout the
<productname>XC</> cluster.
</para>
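
To make the distribution choices described above concrete, here is a minimal SQL sketch. The products table is the example already used in this file; the categories table is hypothetical and only illustrates DISTRIBUTE BY REPLICATION.

    -- Hash distribution: the Datanode for each row is chosen from the hash of
    -- product_no, so lookups by product_no can be routed to a single node.
    CREATE TABLE products (
        product_no integer,
        name       text,
        price      numeric
    ) DISTRIBUTE BY HASH (product_no);

    -- Replication: every Datanode holds a full copy, which suits small,
    -- read-mostly tables (categories is a hypothetical example).
    CREATE TABLE categories (
        category_no integer,
        label       text
    ) DISTRIBUTE BY REPLICATION;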
diff --git a/doc-xc/src/sgml/func.sgmlin b/doc-xc/src/sgml/func.sgmlin
index ee519daf47..40fac7f1a7 100644
--- a/doc-xc/src/sgml/func.sgmlin
+++ b/doc-xc/src/sgml/func.sgmlin
@@ -13428,8 +13428,8 @@ SELECT pg_type_is_visible('myschema.widget'::regtype);
<!## XC>
&xconly;
<para>
- Because value of OID is enforced unique only in each coordinator
- or datanode in <productname>Postgres-XC</>, you should use these
+ Because value of OID is enforced unique only in each Coordinator
+ or Datanode in <productname>Postgres-XC</>, you should use these
functions locally, typically through <type>EXECUTE DIRECT</>
statement.
</para>
@@ -13813,9 +13813,9 @@ SELECT typlen FROM pg_type WHERE oid = pg_typeof(33);
<!## XC>
&xconly;
<para>
- Please note that OID is valid locally in each coordinator and
- datanode. You should use specific OID value in statements
- targetted to specific coordinator or datanode by <type>EXECUTE
+ Please note that OID is valid locally in each Coordinator and
+ Datanode. You should use specific OID value in statements
+ targetted to specific Coordinator or Datanode by <type>EXECUTE
DIRECT</> statement.
</para>
<!## end>
@@ -13921,9 +13921,9 @@ SELECT typlen FROM pg_type WHERE oid = pg_typeof(33);
<!## XC>
&xconly;
<para>
- In <productname>Postgres-XC</>, OID is maitain locally in each coordinator and datanode.
+ In <productname>Postgres-XC</>, OID is maitain locally in each Coordinator and Datanode.
If you specify specific OID value, you should do it in SQL stataements targetted to specif
- coordinator or datanode by <type>EXECUTE DIRECT</> statement.
+ Coordinator or Datanode by <type>EXECUTE DIRECT</> statement.
</para>
<!## end>
@@ -14274,7 +14274,7 @@ SELECT set_config('log_statement_stats', 'off', false);
&xconly;
<para>
Please note that these functions works just locally. To issue
- these functions to another coordinators or datanodes, you should
+ these functions to another Coordinators or Datanodes, you should
issue these functions through <type>EXECUTE DIRECT</> statement.
</para>
<!## end>
@@ -14479,7 +14479,7 @@ postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
&xconly;
<para>
Please note that these functions works just locally. To issue
- these functions to another coordinators or datanodes, you should
+ these functions to another Coordinators or Datanodes, you should
issue these functions through <type>EXECUTE DIRECT</> statement.
</para>
<!## end>
@@ -14624,7 +14624,7 @@ postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
&xconly;
<para>
Please note that these functions works just locally. To issue
- these functions to another coordinators or datanodes, you should
+ these functions to another Coordinators or Datanodes, you should
issue these functions through <type>EXECUTE DIRECT</> statement.
</para>
<!## end>
@@ -14854,9 +14854,9 @@ postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
space used up by the specified fork at all the data nodes where the table is
distributed or replicated. If the table is replicated on 3 tables, the size
will be 3 times that of individual nodes. If you need to retrieve the local
- results from a particular coordinator or data node, you should issue these
+ results from a particular Coordinator or data node, you should issue these
function calls explicitly through <type>EXECUTE DIRECT</> statement. All other
- system functions run locally at the coordinator, unless explicitly specified
+ system functions run locally at the Coordinator, unless explicitly specified
otherwise in this document.
</para>
<!## end>
@@ -14927,7 +14927,7 @@ postgres=# SELECT * FROM pg_xlogfile_name_offset(pg_stop_backup());
&xconly;
<para>
Please note that these functions works just locally. To issue
- these functions to another coordinators or datanodes, you should
+ these functions to another Coordinators or Datanodes, you should
issue these functions through <type>EXECUTE DIRECT</> statement.
</para>
<!## end>
@@ -15318,9 +15318,9 @@ SELECT (pg_stat_file('filename')).modification;
&xconly;
<para>
The advisory lock functions are aware of the Postgres XC cluster. Hence,
- if you use a function like pg_advisory_lock() from a particular coordinator,
+ if you use a function like pg_advisory_lock() from a particular Coordinator,
the resource will be locked across the complete cluster, so another
- application calling the same function from a different coordinator will see
+ application calling the same function from a different Coordinator will see
this lock, and will wait on the resource until the lock is released.
This applies to both transaction and session level advisory locks.
</para>
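
Several notes above defer node-local behaviour to EXECUTE DIRECT. A minimal sketch follows, assuming a Datanode registered under the hypothetical name dn1 and a table named products; the exact quoting and node-list syntax should be checked against the EXECUTE DIRECT reference page.

    -- Ask one specific Datanode for its local size of the products table,
    -- instead of the cluster-wide total a Coordinator would report.
    EXECUTE DIRECT ON (dn1) 'SELECT pg_relation_size(''products'')';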
diff --git a/doc-xc/src/sgml/high-availability.sgmlin b/doc-xc/src/sgml/high-availability.sgmlin
index 5b8b60aa90..2e08119e59 100644
--- a/doc-xc/src/sgml/high-availability.sgmlin
+++ b/doc-xc/src/sgml/high-availability.sgmlin
@@ -1326,8 +1326,8 @@ if (!triggered)
Streaming replication has not been tested
with <productname>Postgres-XC</productname> yet. Because this
version of streaming replication is based upon asynchronous log shipping,
- there could be a risk to have the status of coordinators and
- datanodes inconsistent. The development team leaves the test
+ there could be a risk to have the status of Coordinators and
+ Datanodes inconsistent. The development team leaves the test
and the use of this entirely to users.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/history.sgmlin b/doc-xc/src/sgml/history.sgmlin
index 99e41d4bf6..455e5b9a48 100644
--- a/doc-xc/src/sgml/history.sgmlin
+++ b/doc-xc/src/sgml/history.sgmlin
@@ -276,11 +276,11 @@
<para>
In 2010, <productname>Rita-DB</productname> was carried over to
NTT's Open Source Software Center as Postgres-XC project.
- EnterpriseDB Inc., joined the project. After six month effort,
+ EnterpriseDB Inc., joined the project. After six month effort,
they were successful to show
that multiple <productname>PostgreSQL</productname> can be
integrated into database cluster and they can provide transparent
- global transaction management feature. Benchmark showed both read and
+ global transaction management feature. Benchmark showed both read and
write scalability.
</para>
@@ -290,16 +290,22 @@
</para>
<para>
- After then, the development team decided to expand statement
+ After then, the development and core teams decided to expand statement
capability of <productname>Postgres-XC</productname> to
full <productname>PostgreSQL</productname> as much as possible.
They're working hard to this goal.
</para>
+ <para>
+ <productname>Postgres-XC</productname> is under copyright of Postgres-XC Development
+ Group since 2012. More details about this legal entity can be found
+ <ulink url="https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=Charter">here.</ulink>
+ </para>
+
<!## omitted>
<para>
On the other hand, it is very important to provide high availability
- feature to database clusters. Especially, GTM was pointed out to
+ feature to database clusters. Especially, GTM was pointed out to
become single point of failure.
High availability feature was also added
to <productname>Postgres-XC</productname>.
diff --git a/doc-xc/src/sgml/indices.sgmlin b/doc-xc/src/sgml/indices.sgmlin
index 5d8de33fe9..252986f0e9 100644
--- a/doc-xc/src/sgml/indices.sgmlin
+++ b/doc-xc/src/sgml/indices.sgmlin
@@ -18,9 +18,9 @@
<!## XC>
&xconly;
<para>
- Each index is maintained locally in coordinator and datanode
+ Each index is maintained locally in Coordinator and Datanode
in <productname>Postgres-XC</>.
- Cross validation of index entries among coordinators and datanodes is
+ Cross validation of index entries among Coordinators and Datanodes is
not performed in the current implementation.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/installation.sgmlin b/doc-xc/src/sgml/installation.sgmlin
index 8a5d5cfc2d..735fbdc274 100644
--- a/doc-xc/src/sgml/installation.sgmlin
+++ b/doc-xc/src/sgml/installation.sgmlin
@@ -80,24 +80,24 @@ su
gmake install
adduser postgres
mkdir /usr/local/pgsql/data_coord1
-mkdir /usr/local/pgsql/data_datanode1
-mkdir /usr/local/pgsql/data_datanode2
+mkdir /usr/local/pgsql/data_Datanode1
+mkdir /usr/local/pgsql/data_Datanode2
mkdir /usr/local/pgsql/data_gtm
chown postgres /usr/local/pgsql/data_coord1
-chown postgres /usr/local/pgsql/data_datanode1
-chown postgres /usr/local/pgsql/data_datanode2
+chown postgres /usr/local/pgsql/data_Datanode1
+chown postgres /usr/local/pgsql/data_Datanode2
chown postgres /usr/local/pgsql/data_gtm
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_coord1 --nodename coord1
-/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_datanode1 --nodename datanode1
-/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_datanode2 --nodename datanode2
+/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_Datanode1 --nodename Datanode1
+/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data_Datanode2 --nodename Datanode2
/usr/local/pgsql/bin/initgtm -D /usr/local/pgsql/data_gtm -Z gtm
/usr/local/pgsql/bin/gtm -D /usr/local/pgsql/data_gtm &gt;logfile 2&gt;&amp;1 &amp;
-/usr/local/pgsql/bin/postgres -X -p 15432 -D /usr/local/pgsql/data_datanode1 &gt;logfile 2&gt;&amp;1 &amp;
-/usr/local/pgsql/bin/postgres -X -p 15433 -D /usr/local/pgsql/data_datanode2 &gt;logfile 2&gt;&amp;1 &amp;
+/usr/local/pgsql/bin/postgres -X -p 15432 -D /usr/local/pgsql/data_Datanode1 &gt;logfile 2&gt;&amp;1 &amp;
+/usr/local/pgsql/bin/postgres -X -p 15433 -D /usr/local/pgsql/data_Datanode2 &gt;logfile 2&gt;&amp;1 &amp;
/usr/local/pgsql/bin/postgres -C -D /usr/local/pgsql/data_coord1 &gt;logfile 2&gt;&amp;1 &amp;
-/usr/local/pgsql/bin/psql -c "CREATE NODE datanode1 WITH (TYPE = 'datanode', PORT = 15432)" postgres
-/usr/local/pgsql/bin/psql -c "CREATE NODE datanode2 WITH (TYPE = 'datanode', PORT = 15433)" postgres
+/usr/local/pgsql/bin/psql -c "CREATE NODE Datanode1 WITH (TYPE = 'Datanode', PORT = 15432)" postgres
+/usr/local/pgsql/bin/psql -c "CREATE NODE Datanode2 WITH (TYPE = 'Datanode', PORT = 15433)" postgres
/usr/local/pgsql/bin/psql -c "SELECT pgxc_pool_reload()" postgres
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
@@ -1778,14 +1778,14 @@ Postgres-XC, contrib and HTML documentation successfully made. Ready to install.
<listitem>
<para>
Coordinator is an entry point to <productname>Postgres-XC</> from applications.
- You can run more than one coordinator in parallel. Each coordinator behaves
- as just <productname>PostgreSQL</> database server, while all the coordinators
+ You can run more than one Coordinator in parallel. Each Coordinator behaves
+ as just <productname>PostgreSQL</> database server, while all the Coordinators
handles transactions in harmonized way so that any transaction coming into one
- coordinator is protected against any other transactions coming into others.
+ Coordinator is protected against any other transactions coming into others.
Updates by a transaction is visible immediately to others running in other
- coordinators.
- To simplify the load balance of coordinators and datanodes, as mentioned
- below, it is highly advised to install same number of coordinator and datanode
+ Coordinators.
+ To simplify the load balance of Coordinators and Datanodes, as mentioned
+ below, it is highly advised to install same number of Coordinator and Datanode
in a server.
</para>
</listitem>
@@ -1797,9 +1797,9 @@ Postgres-XC, contrib and HTML documentation successfully made. Ready to install.
Datanode
</para>
<para>
- Coordinator and datanode shares the same binary but their behavior is a little
+ Coordinator and Datanode shares the same binary but their behavior is a little
different. Coordinator decomposes incoming statements into those handled by
- datanodes. If necessary, coordinator materializes response from datanodes
+ Datanodes. If necessary, Coordinator materializes response from Datanodes
to calculate final response to applications.
</para>
<para>
@@ -2113,8 +2113,8 @@ export MANPATH
<!## XC>
&xconly;
<para>
- When, as typical case, you're configuring both coordinator and
- datanode in a same server, please be careful not to assign same
+ When, as typical case, you're configuring both Coordinator and
+ Datanode in a same server, please be careful not to assign same
resource, such as listening point (IP address and port number) to
different component. If you apply single set of environment
described here to different components, they will conflict
@@ -2209,13 +2209,13 @@ bin/ include/ lib/ share/
</para>
<para>
- For the server to run GTM-Proxy (the server you run coordinator and/or datanode),
+ For the server to run GTM-Proxy (the server you run Coordinator and/or Datanode),
you need to copy the following files to your path: <filename>bin/gtm_proxy</filename>
and <filename>bin/gtm_ctl</>.
</para>
<para>
- For server to run coordinator or datanode, or both, you should
+ For server to run Coordinator or Datanode, or both, you should
copy the following files to your
path: <filename>bin/initdb</>, <filename>bin/pgxc_ddl</>, <filename>pgxc_clean</>.
You should also copy everything in <filename>path</> directory to
@@ -2257,7 +2257,7 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
</para>
<!## XC>
<para>
- If you're configuring both datanode and coordinator on the same
+ If you're configuring both Datanode and Coordinator on the same
server, you should specify different <option>-D</> option for
each of them.
</para>
@@ -2295,17 +2295,17 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<step>
&xconly;
<para>
- Now you should configure each coordinator and datanode. Because
+ Now you should configure each Coordinator and Datanode. Because
they have to communicate each other and number of servers,
- datanodes and coordinators depend upon configurations, we don't
+ Datanodes and Coordinators depend upon configurations, we don't
provide default configuration file for them.
</para>
<para>
- You can configure datanode and coordinator by
+ You can configure Datanode and Coordinator by
editing <filename>postgresql.conf</> file located under the
directory you specified with <option>-D</> option
of <command>initdb</>. The following paragraphs describe what
- parameter to edit at least for coordinators. You can specify
+ parameter to edit at least for Coordinators. You can specify
any other <filename>postgresql.conf</> parameters as
standalone <productname>PostgreSQL</>.
</para>
@@ -2318,9 +2318,9 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<para>
<option>max_prepared_transactions</> specifies maximum number
of two-phase commit transactions. Even if you don't use
- explicit two phase commit operation, coordinator may issue
+ explicit two phase commit operation, Coordinator may issue
two-phase commit operation implicitly if a transaction is
- involved with multiple datanodes and/or coordinators. You should
+ involved with multiple Datanodes and/or Coordinators. You should
specify <option>max_prepared_transactions</> value at
least the number of <option>max_connection</>.
</para>
@@ -2333,11 +2333,11 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
&xconly;
<par>
Coordinator is associated with a connection pooler which takes
- care of connection with other coordinators and datanodes. This
+ care of connection with other Coordinators and Datanodes. This
parameter specifies minimum number of connection to pool.
If you're not configuring <productname>XC</> cluster in
unballanced way, you should specify the same value to all the
- coordinators.
+ Coordinators.
</par>
</listitem>
</varlistentry>
@@ -2349,18 +2349,18 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<para>
This parameter specifies maximum number of the pooled
connection. This value should be at least more than the number
- of all the coordinators and datanodes. If you specify less
+ of all the Coordinators and Datanodes. If you specify less
value, you will see very frequent close and ope connection
which leads to serious performance problem.
If you're not configuring <productname>XC</> cluster in
unballanced way, you should specify the same value to all the
- coordinators.
+ Coordinators.
</para>
</listitem>
</varlistentry>
<varlistentry>
- <term><envar>max_coordinators</envar></term>
+ <term><envar>max_Coordinators</envar></term>
<listitem>
&xconly;
<para>
@@ -2372,7 +2372,7 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
</varlistentry>
<varlistentry>
- <term><envar>max_datanodes</envar></term>
+ <term><envar>max_Datanodes</envar></term>
<listitem>
&xconly;
<para>
@@ -2400,7 +2400,7 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<listitem>
&xconly;
<para>
- Specify the port number listened to by this coordinator.
+ Specify the port number listened to by this Coordinator.
</para>
</listitem>
</varlistentry>
@@ -2422,7 +2422,7 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<para>
Specify the port number of gtm you're connecting to. This is
local to the server and you should specify the port assigned to
- the GTM-Proxy local to the coordinator.
+ the GTM-Proxy local to the Coordinator.
</para>
</listitem>
</varlistentry>
@@ -2433,7 +2433,7 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
&xconly;
<para>
Now you should configure <filename>postgresql.conf</> for each
- datanodes. Please note, as in the case of coordinator, you can
+ Datanodes. Please note, as in the case of Coordinator, you can
specify other <filename>postgresql.conf</> parameters as in
standalone <productname>PostgreSQL</>.
</para>
@@ -2446,11 +2446,11 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
&xconly;
<para>
<option>max_connections</> is, in short, a maximum number of
- background processes of the datanode. You should be careful
+ background processes of the Datanode. You should be careful
to specify reasonable value to this parameter because each
- coordinator backend may have connections to all the datanodes.
+ Coordinator backend may have connections to all the Datanodes.
You should specify this value as <option>max_connections</> of
- coordinator multiplied by the number of coordinators.
+ Coordinator multiplied by the number of Coordinators.
</para>
</listitem>
</varlistentry>
@@ -2462,11 +2462,11 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<para>
<option>max_prepared_transactions</> specifies maximum number
of two-phase commit transactions. Even if you don't use
- explicit two phase commit operation, coordinator may issue
+ explicit two phase commit operation, Coordinator may issue
two-phase commit operation implicitly if a transaction is
- involved with multiple datanodes and/or coordinators. The value
+ involved with multiple Datanodes and/or Coordinators. The value
of this parameter should be at least the value
- of <option>max_connections</> multiplied by the number of coordinators.
+ of <option>max_connections</> multiplied by the number of Coordinators.
</para>
</listitem>
</varlistentry>
@@ -2476,7 +2476,7 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<listitem>
&xconly;
<para>
- Specify the port number listened to by the datanode.
+ Specify the port number listened to by the Datanode.
</para>
</listitem>
</varlistentry>
@@ -2487,7 +2487,7 @@ postgres$ <userinput>/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data</>
<para>
Specify the port number of gtm you're connecting to. This is
local to the server and you should specify the port assigned to
- the GTM-Proxy local to the datanode.
+ the GTM-Proxy local to the Datanode.
</para>
</listitem>
</varlistentry>
@@ -2521,7 +2521,7 @@ gtm -D /usr/local/pgsql/gtm -h localhost -p 20001 -n 1 -x 1000
&xconly;
<para>
Next, you should start GTM-Proxy on each server you're running
- coordinator and/or datanode like:
+ Coordinator and/or Datanode like:
<programlisting>
gtm_proxy -h localhost -p 20002 -s localhost -t 20001 -i 1 -n 2 -D /usr/local/pgsql/gtm_proxy
</programlisting>
@@ -2537,41 +2537,41 @@ gtm_proxy -h localhost -p 20002 -s localhost -t 20001 -i 1 -n 2 -D /usr/local/pg
</para>
<para>
Please note that you should start GTM-Proxy on all the servers
- you run coordinator/datanode.
+ you run Coordinator/Datanode.
</para>
</step>
<step>
&xconly;
<para>
- Now you can start datanode on each server like:
+ Now you can start Datanode on each server like:
<programlisting>
-postgres -X -D /usr/local/pgsql/datanode
+postgres -X -D /usr/local/pgsql/Datanode
</programlisting>
- This will start the datanode. <option>-X</>
+ This will start the Datanode. <option>-X</>
specifies <command>postgres</> to start as a
- datanode. <option>-D</> specifies the data directory of the
+ Datanode. <option>-D</> specifies the data directory of the
data node. You can specify other options of standalone <command>postgres</>.
</para>
<para>
Please note that you should issue <command>postgres</> command at
- all the servers you're running datanode.
+ all the servers you're running Datanode.
</para>
</step>
<para>
- Finally, you can start coordinator like:
+ Finally, you can start Coordinator like:
<programlisting>
-postgres -C -D /usr/local/pgsql/coordinator
+postgres -C -D /usr/local/pgsql/Coordinator
</programlisting>
- This will start the coordinator. <option>-C</>
+ This will start the Coordinator. <option>-C</>
specifies <command>postgres</> to start as a
- coordinator. <option>-D</> specifies the data directory of the
- coordinator. You can specify other options of standalone <command>postgres</>.
+ Coordinator. <option>-D</> specifies the data directory of the
+ Coordinator. You can specify other options of standalone <command>postgres</>.
</para>
<para>
Please note that you should issue <command>postgres</> command at
- all the servers you're running coordinators.
+ all the servers you're running Coordinators.
</para>
<!## end>
@@ -2607,24 +2607,24 @@ kill `cat /usr/local/pgsql/data/postmaster.pid`
start up the whole database cluster. Do so now. The command should look
something like:
<programlisting>
-postgres -X -D /usr/local/pgsql/datanode
+postgres -X -D /usr/local/pgsql/Datanode
</programlisting>
- This will start the datanode in the foreground. To put the datanode
+ This will start the Datanode in the foreground. To put the Datanode
in the background use something like:
<programlisting>
nohup postgres -X -D /usr/local/pgsql/data \
&lt;/dev/null &gt;&gt;server.log 2&gt;&amp;1 &lt;/dev/null &amp;
</programlisting>
You can apply this to all the other components, GTM, GTM-Proxies,
- and coordinators.
+ and Coordinators.
</para>
<para>
- To stop a datanode running in the background you can type:
+ To stop a Datanode running in the background you can type:
<programlisting>
-kill `cat /usr/local/pgsql/datanode/postmaster.pid`
+kill `cat /usr/local/pgsql/Datanode/postmaster.pid`
</programlisting>
- You can apply this to stop a coordinator too.
+ You can apply this to stop a Coordinator too.
</para>
<para>
To stop the GTM running in the background you can type
@@ -2662,7 +2662,7 @@ kill `cat /usr/local/pgsql/gtm-proxy/gtm_proxy.pid
<!## end>
<!## XC>
Please do not forget to give the port number of one of the
- coordinators. Then you are connected to a coordinator listening
+ Coordinators. Then you are connected to a Coordinator listening
     on the port you specified.
<!## end>
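<para>
 For example, assuming a Coordinator listens on port 5432 (an example value),
 connecting to it is the same as connecting to an ordinary
 <productname>PostgreSQL</> server:
<programlisting>
psql -h localhost -p 5432 postgres
</programlisting>
</para>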
<!## PG>
diff --git a/doc-xc/src/sgml/intro.sgmlin b/doc-xc/src/sgml/intro.sgmlin
index 9a00426557..3df8cebd47 100644
--- a/doc-xc/src/sgml/intro.sgmlin
+++ b/doc-xc/src/sgml/intro.sgmlin
@@ -14,7 +14,6 @@
<productname>PostgreSQL</productname> officially supports.
<!## end>
<!## XC>
-<!## EN>
This book is the official documentation of
<productname>Postgres-XC</productname>. It has been written by the
<productname>Postgres-XC</productname> developers and other
@@ -26,24 +25,14 @@
<para>
<productname>Postgres-XC</> is essentially a collection of multiple
   <productname>PostgreSQL</> databases to provide both read and write
- performance scalability. It also provides full-featured transaction
- consistency as <productname>PostgreSQL</> provides.
+ performance scalability. It also provides full-featured transaction
+   consistency as <productname>PostgreSQL</> provides, with the exception
+   of SSI, whose support is incomplete.
</para>
<para>
- <productname>Postgres-XC</> inherits almost all features from <productname>PostgreSQL</>.
+ <productname>Postgres-XC</> inherits almost all major features from <productname>PostgreSQL</>.
This document is also based upon <productname>PostgreSQL</> reference manual.
<!## end>
-<!## JP>
- 本マニュアルは
- <productname>Postgres-XC</productname> の公式文書です。
- 本マニュアルは
- <productname>Postgres-XC</productname> 開発者及びその他のボランティアが
- <productname>Postgres-XC</productname> ソフトウェア開発と平行して書いてきたものです。
- 本書では、
- the functionality that the current version of
- <productname>Postgres-XC</productname> の現行バージョンが正式にサポートしている機能を記述しています。
-<!## end>
-<!## end>
</para>
<para>
@@ -156,7 +145,7 @@
<para>
Postgres-XC is an open source project to provide write-scalable,
- synchronous multi-master, transparent PostgreSQL cluster
+ synchronous symmetric, transparent PostgreSQL cluster
    solution. It is a collection of tightly coupled database
    components which can be installed on more than one hardware
    server or virtual machine.
@@ -166,7 +155,7 @@
Write-scalable means Postgres-XC can be configured with as many
database servers as you want and handle much more writes (updating
SQL statements) which single database server cannot
- do. Multi-master means you can have more than one data base
+   do. Symmetric means you can have more than one database
   server, all of which provide a single database view. Synchronous means any
database update from any database server is immediately visible to
any other transactions running in different masters. Transparent
@@ -224,7 +213,7 @@
<listitem>
<para>
Postgres-XC should provide multiple servers to accept transactions and statements
-from applications, which is known as "master" server in general in general. In Postgres-XC, this is called "coordinator".
+from applications, which are known as "master" servers in general. In Postgres-XC, such a server is called a "Coordinator".
</para>
</listitem>
<listitem>
@@ -305,16 +294,15 @@ from applications, which is known as "master" server in general in general. In P
<para>
Coordinator is an interface to applications. It acts like
- conventional PostgreSQL backend process. However, because tables
- may be replicated or distributed, coordinator does not store any
- actual data. Actual data is stored by datanode as described
- below. Coordinator receives SQL statements, get Global
+ conventional PostgreSQL backend process. However, Coordinator
+ does not store any actual data. Actual data is stored by Datanode
+    as described below.  The Coordinator receives SQL statements, gets a Global
     Transaction Id and Global Snapshot as needed, determines which
- datanode is involved and ask them to execute (a part of)
- statement. When issuing statement to datanodes, it is
- associated with GXID and Global Snapshot so that datanode is not
+    Datanodes are involved and asks them to execute (a part of)
+    the statement.  When issuing a statement to Datanodes, it is
+ associated with GXID and Global Snapshot so that Datanode is not
confused if it receives another statement from another
- transaction originated by another coordinator.
+ transaction originated by another Coordinator.
</para>
</sect3>
@@ -323,17 +311,16 @@ from applications, which is known as "master" server in general in general. In P
<para>
Datanode actually stores your data. Tables may be distributed
- among datanodes, or replicated to all the datanodes. Because
- datanode does not have global view of the whole database, it
+ among Datanodes, or replicated to all the Datanodes.
+    Because a Datanode does not have a global view of the whole database, it
just takes care of locally stored data. Incoming statement is
- examined by the coordinator as described next, and rebuilt to
- execute at each datanode involved. It is then transferred to
- each datanodes involved together with GXID and Global Snapshot
+ examined by the Coordinator as described next, and rebuilt to
+ execute at each Datanode involved. It is then transferred to
+    each Datanode involved together with GXID and Global Snapshot
as needed. Datanode may receive request from various
- coordinators. However, because each the transaction is identified
+    Coordinators.  However, because each transaction is identified
uniquely and associated with consistent (global) snapshot, data
- node doesn't have to worry what coordinator each transaction or
-
+    node doesn't have to worry which Coordinator each transaction or
statement came from.
</para>
@@ -383,9 +370,16 @@ from applications, which is known as "master" server in general in general. In P
<listitem>
<simpara>views</simpara>
</listitem>
+<!## PG>
<listitem>
<simpara>transactional integrity</simpara>
</listitem>
+<!## end>
+<!## XC>
+ <listitem>
+    <simpara>transactional integrity, with the exception of SSI, whose support is incomplete</simpara>
+ </listitem>
+<!## end>
<listitem>
<simpara>multiversion concurrency control</simpara>
</listitem>
diff --git a/doc-xc/src/sgml/maintenance.sgmlin b/doc-xc/src/sgml/maintenance.sgmlin
index c09f207d95..19e682a033 100644
--- a/doc-xc/src/sgml/maintenance.sgmlin
+++ b/doc-xc/src/sgml/maintenance.sgmlin
@@ -15,7 +15,7 @@
&xconly;
<para>
Please note that this chapter describes database maintenance task
- of individual coordinator and datanode. Please remember that these
+ of individual Coordinator and Datanode. Please remember that these
tasks should be done for each of them.
</para>
<!## end>
@@ -108,7 +108,7 @@
&xconly;
<para>
Please note that this section describes the task of individual
- coordinator and datanode. It should be done for each of them.
+ Coordinator and Datanode. It should be done for each of them.
</para>
<!## end>
@@ -141,7 +141,7 @@
&xconly;
<para>
Please note that this section describes the task of individual
- coordinator and datanode. It should be done for each of them.
+ Coordinator and Datanode. It should be done for each of them.
</para>
<!## end>
@@ -219,7 +219,7 @@
&xconly;
<para>
Please note that this section describes the task of individual
- coordinator and datanode. It should be done for each of them.
+ Coordinator and Datanode. It should be done for each of them.
</para>
<!## end>
@@ -347,7 +347,7 @@
&xconly;
<para>
Please note that this section describes the task of individual
- coordinator and datanode. It should be done for each of them.
+ Coordinator and Datanode. It should be done for each of them.
</para>
<!## end>
@@ -440,7 +440,7 @@
&xconly;
<para>
Please note that this section describes the task of individual
- coordinator and datanode. It should be done for each of them.
+ Coordinator and Datanode. It should be done for each of them.
</para>
<!## end>
@@ -694,7 +694,7 @@ HINT: Stop the postmaster and use a standalone backend to VACUUM in "mydb".
&xconly;
<para>
Please note that this section describes the task of individual
- coordinator and datanode. It should be done for each of them.
+ Coordinator and Datanode. It should be done for each of them.
</para>
<!## end>
@@ -846,7 +846,7 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu
&xconly;
<para>
Please note that this section describes the task of individual
- coordinator and datanode. It should be done for each of them.
+ Coordinator and Datanode. It should be done for each of them.
</para>
<!## end>
@@ -895,7 +895,7 @@ analyze threshold = analyze base threshold + analyze scale factor * number of tu
&xconly;
<para>
Please note that this section describes the task of individual
- coordinator and datanode. It should be done for each of them.
+ Coordinator and Datanode. It should be done for each of them.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/manage-ag.sgmlin b/doc-xc/src/sgml/manage-ag.sgmlin
index 9f5000a7fa..7f848762bb 100644
--- a/doc-xc/src/sgml/manage-ag.sgmlin
+++ b/doc-xc/src/sgml/manage-ag.sgmlin
@@ -304,7 +304,7 @@ createdb -T template0 <replaceable>dbname</>
</para>
<para>
In <productname>Postgres-XC</>, template databases are hold in
- each coordinator and datanode. They are locally copied when new
+ each Coordinator and Datanode. They are locally copied when new
database is created.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/mvcc.sgmlin b/doc-xc/src/sgml/mvcc.sgmlin
index fd726fe636..755ea48715 100644
--- a/doc-xc/src/sgml/mvcc.sgmlin
+++ b/doc-xc/src/sgml/mvcc.sgmlin
@@ -32,8 +32,8 @@
<para>
<productname>Postgres-XC</> inherited concurrency control
of <productname>PostgreSQL</> and extended it globally to whole
- coordinators and datanodes involved. Regardless of what
- coordinator to connect to, all the transactions
+    Coordinators and Datanodes involved.  Regardless of which
+    Coordinator you connect to, all the transactions
     in a <productname>Postgres-XC</> database cluster behave in a
     consistent way as if they were running in a single database.
</para>
@@ -1260,8 +1260,8 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
&xconly;
<para>
Please note that <productname>Postgres-XC</> does not detect
- deadlocks which spreads among multiple coordinators and/or
- datanodes, know as global deadlocks. There are many discussions
+    deadlocks which spread among multiple Coordinators and/or
+    Datanodes, known as global deadlocks.  There are many discussions
whether global deadlocks should be detected or not, mainly
because of the cost of the detection. So
    far, <productname>Postgres-XC</> leaves global deadlock detection
@@ -1352,8 +1352,8 @@ UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222;
&xconly;
<para>
Please note that <productname>Postgres-XC</>'s advisory lock is
- local to coordinator or datanode. If you wish to acquire
- advisory locks on different coordinator, you should do it manually
+ local to Coordinator or Datanode. If you wish to acquire
+ advisory locks on different Coordinator, you should do it manually
using <type>EXECUTE DIRECT</> statement.
</para>
<!## end>
@@ -1639,7 +1639,7 @@ SELECT pg_advisory_lock(q.id) FROM
control and MVCC common to <productname>PostgreSQL</productname>.
This section describes how <productname>Postgres-XC</productname>
implements global concurrency control and MVCC among multiple
- coordinators and datanodes.
+ Coordinators and Datanodes.
</para>
<para>
@@ -1660,25 +1660,25 @@ SELECT pg_advisory_lock(q.id) FROM
<para>
As described
in <xref linkend="intro-whatis">, <productname>Postgres-XC</productname>
- is composed of GTM (Global Transaction Manager), coordinators and
- datanodes.
+ is composed of GTM (Global Transaction Manager), Coordinators and
+ Datanodes.
</para>
<para>
- In <productname>Postgres-XC</productname>, any coordinator can
+ In <productname>Postgres-XC</productname>, any Coordinator can
     accept any transaction, regardless of whether it is read only or
read/write. Transaction integrity is enforced by GTM (global
- transaction manager). Because we have multiple coordinators, each
+ transaction manager). Because we have multiple Coordinators, each
of them can handle incoming transactions and statements in
parallel.
</para>
<para>
Analyzed statements are converted into internal plans, which
- include SQL statements targeted to datanodes. They're proxied to
- each target datanode, handled and the result will be sent back to
- originating coordinator where all the results from target
- datanodes will be combined into the result to be sent back to the
+    include SQL statements targeted to Datanodes.  They're proxied to
+    each target Datanode and handled there, and the results are sent back to
+    the originating Coordinator, where all the results from the target
+    Datanodes are combined into the result to be sent back to the
application.
</para>
@@ -1686,7 +1686,7 @@ SELECT pg_advisory_lock(q.id) FROM
Each table can be distributed or replicated as described
in <xref linkend="intro-whatis">. If you design each table's
distribution carefully, most of the statements may need to target
- to just one datanode. In this way, coordinators and datanodes
+ to just one Datanode. In this way, Coordinators and Datanodes
     run transactions in parallel, which scales out both read and write
operations.
</para>
diff --git a/doc-xc/src/sgml/oid2name.sgmlin b/doc-xc/src/sgml/oid2name.sgmlin
index a37fd5523f..fd5f966ea3 100644
--- a/doc-xc/src/sgml/oid2name.sgmlin
+++ b/doc-xc/src/sgml/oid2name.sgmlin
@@ -29,9 +29,9 @@
&xconly;
<!## XC>
<para>
- Please note that you can issue this command to each datanode or
- coordinator. The result is the information local to datanode or
- coordinator specified.
+ Please note that you can issue this command to each Datanode or
+    Coordinator.  The result is the information local to the Datanode or
+ Coordinator specified.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/pageinspect.sgmlin b/doc-xc/src/sgml/pageinspect.sgmlin
index a961bc45c8..762dd23d07 100644
--- a/doc-xc/src/sgml/pageinspect.sgmlin
+++ b/doc-xc/src/sgml/pageinspect.sgmlin
@@ -19,8 +19,8 @@
<!## XC>
<para>
    Functions of this module return information local to the connecting
- coordinators locally. To get information of a datanode, you can
- connect to the datanode using psql directly. In this case,
+    Coordinator.  To get information about a Datanode, you can
+ connect to the Datanode using psql directly. In this case,
statements will be handled by local transaction control without GTM,
you will have warnings and the visibility could be somewhat
inconsistent.
diff --git a/doc-xc/src/sgml/perform.sgmlin b/doc-xc/src/sgml/perform.sgmlin
index bf624d5e84..9ffe6723dc 100644
--- a/doc-xc/src/sgml/perform.sgmlin
+++ b/doc-xc/src/sgml/perform.sgmlin
@@ -691,13 +691,13 @@ SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id;
<para>
<productname>Postgres-XC</> stores table data in a distributed or
replicated fashion. To leverage this, the planner tries to find
- the best way to use as much of datanode power as possible. If the
+ the best way to use as much of Datanode power as possible. If the
equi-join is done by distribution columns and they share the
- distribution method (hash/modulo), the coordinator can tell
- datanode to perform join. If not, the coordinator collects rows to
- join from datanodes and perform the join locally. In this case,
- the coordinator tries to push other predicate as much as possible
- to the datanode to reduce the amount of rows to receive.
+    distribution method (hash/modulo), the Coordinator can tell the
+    Datanodes to perform the join. If not, the Coordinator collects rows to
+    join from the Datanodes and performs the join locally. In this case,
+    the Coordinator tries to push other predicates down to the Datanodes
+    as much as possible to reduce the number of rows it has to receive.
</para>
<!## end>
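<para>
 As an illustrative sketch (the table and column names are invented for this
 example), two tables hash-distributed on the join key allow the equi-join
 below to be shipped to the Datanodes:
<programlisting>
psql -d mydb -c "CREATE TABLE customers (id int, name text) DISTRIBUTE BY HASH (id)"
psql -d mydb -c "CREATE TABLE orders (id int, customer_id int, amount numeric) DISTRIBUTE BY HASH (customer_id)"
# The join key matches the distribution column of both tables,
# so the join itself can be performed on the Datanodes.
psql -d mydb -c "SELECT c.name, sum(o.amount) FROM orders o JOIN customers c ON o.customer_id = c.id GROUP BY c.name"
</programlisting>
</para>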
&common;
@@ -1030,7 +1030,7 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
<para>
Please note that you should tune the configuration variables in
all the nodes involved. Maybe you need to tune this just for
- datanodes. Coordinator's database will be updated almost only
+    Datanodes. The Coordinator's database will be updated almost exclusively
by <acronym>DDL</>s.
</para>
<!## end>
@@ -1122,8 +1122,8 @@ SELECT * FROM x, y, a, b, c WHERE something AND somethingelse;
&xconly;
<para>
In <productname>Postgres-XC</>, manual <command>VACUUM</> will be
- populated to all the datanodes as well. However, you should
- configure autovacuum for each coordinator and datanodes.
+    propagated to all the Datanodes as well. However, you should
+    configure autovacuum for each Coordinator and Datanode.
</para>
<!## end>
</sect2>
diff --git a/doc-xc/src/sgml/pgarchivecleanup.sgmlin b/doc-xc/src/sgml/pgarchivecleanup.sgmlin
index baaed19c70..9c1b65982d 100644
--- a/doc-xc/src/sgml/pgarchivecleanup.sgmlin
+++ b/doc-xc/src/sgml/pgarchivecleanup.sgmlin
@@ -24,8 +24,8 @@
<!## XC>
<para>
- Please note that this command takes care of each datanode or
- coordinator. You should configure each of them manually.
+ Please note that this command takes care of each Datanode or
+ Coordinator. You should configure each of them manually.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/pgbuffercache.sgmlin b/doc-xc/src/sgml/pgbuffercache.sgmlin
index fa4a39bfb7..28a5ccf76c 100644
--- a/doc-xc/src/sgml/pgbuffercache.sgmlin
+++ b/doc-xc/src/sgml/pgbuffercache.sgmlin
@@ -30,7 +30,7 @@
<!## XC>
<para>
<filename>pg_buffercache</filename> returns information local to the
- connecting coordinator. To inquire information local to other node,
+ connecting Coordinator. To inquire information local to other node,
use <command>EXECUTE DIRECT</command>.
</para>
<!## end>
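<para>
 As a rough sketch (the node name <literal>dn1</> is a hypothetical Datanode
 registered with <command>CREATE NODE</>, and the module must be installed on
 that node), inspecting another node's buffer cache from the Coordinator could
 look like:
<programlisting>
psql -c "EXECUTE DIRECT ON (dn1) 'SELECT count(*) FROM pg_buffercache'"
</programlisting>
</para>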
diff --git a/doc-xc/src/sgml/pgfreespacemap.sgmlin b/doc-xc/src/sgml/pgfreespacemap.sgmlin
index b3b1880a1b..e5bf1ac1de 100644
--- a/doc-xc/src/sgml/pgfreespacemap.sgmlin
+++ b/doc-xc/src/sgml/pgfreespacemap.sgmlin
@@ -27,8 +27,8 @@
<!## XC>
<para>
    Functions of this module return information local to the connecting
- coordinators locally. To get information of a datanode, you can
- connect to the datanode using psql directly. In this case,
+    Coordinator.  To get information about a Datanode, you can
+ connect to the Datanode using psql directly. In this case,
statements will be handled by local transaction control without GTM,
you will have warnings and the visibility could be somewhat
inconsistent.
diff --git a/doc-xc/src/sgml/pgrowlocks.sgmlin b/doc-xc/src/sgml/pgrowlocks.sgmlin
index 52f9d81630..57cfb45e54 100644
--- a/doc-xc/src/sgml/pgrowlocks.sgmlin
+++ b/doc-xc/src/sgml/pgrowlocks.sgmlin
@@ -18,8 +18,8 @@
<!## XC>
<para>
    Functions of this module return information local to the connecting
- coordinators locally. To get information of a datanode, you can
- connect to the datanode using psql directly. In this case,
+    Coordinator.  To get information about a Datanode, you can
+ connect to the Datanode using psql directly. In this case,
statements will be handled by local transaction control without GTM,
you will have warnings and the visibility could be somewhat
inconsistent.
diff --git a/doc-xc/src/sgml/pgstattuple.sgmlin b/doc-xc/src/sgml/pgstattuple.sgmlin
index 1b3e9235a0..e930b73ded 100644
--- a/doc-xc/src/sgml/pgstattuple.sgmlin
+++ b/doc-xc/src/sgml/pgstattuple.sgmlin
@@ -18,8 +18,8 @@
<!## XC>
<para>
    Functions of this module return information local to the connecting
- coordinators locally. To get information of a datanode, you can
- connect to the datanode using psql directly. In this case,
+    Coordinator.  To get information about a Datanode, you can
+ connect to the Datanode using psql directly. In this case,
statements will be handled by local transaction control without GTM,
you will have warnings and the visibility could be somewhat
inconsistent.
diff --git a/doc-xc/src/sgml/pltcl.sgmlin b/doc-xc/src/sgml/pltcl.sgmlin
index b3b1a15ae8..73fa7cd6b2 100644
--- a/doc-xc/src/sgml/pltcl.sgmlin
+++ b/doc-xc/src/sgml/pltcl.sgmlin
@@ -489,8 +489,8 @@ $$ LANGUAGE pltcl;
<!## XC>
&xconly;
<para>
- Please note that OID is maintained locally at each datanode
- and coordinator.
+ Please note that OID is maintained locally at each Datanode
+ and Coordinator.
</para>
<!## end>
</listitem>
diff --git a/doc-xc/src/sgml/query.sgmlin b/doc-xc/src/sgml/query.sgmlin
index 027bbf8bc4..5412f9c362 100644
--- a/doc-xc/src/sgml/query.sgmlin
+++ b/doc-xc/src/sgml/query.sgmlin
@@ -137,7 +137,7 @@
</para>
<para>
&xconly;
- In <productname>Postgres-XC</> these databases can be distributed into more than one datanodes.
+   In <productname>Postgres-XC</> these databases can be distributed across more than one Datanode.
   This will not affect how tables and rows are seen from the application's point of view, but will be
important for Database Administrator (DBA).
Table distribution will be described later.
diff --git a/doc-xc/src/sgml/recovery-config.sgmlin b/doc-xc/src/sgml/recovery-config.sgmlin
index f79b72c5b5..f098378524 100644
--- a/doc-xc/src/sgml/recovery-config.sgmlin
+++ b/doc-xc/src/sgml/recovery-config.sgmlin
@@ -67,11 +67,11 @@
<footnote>
<para>
Although <productname>Postgres-XC</productname> does not
- prohibit to use warm-standby, in either coordinators or
- datanodes (see <xref linkend="xc-overview">), there's no
+      prohibit the use of warm standby, on either Coordinators or
+ Datanodes (see <xref linkend="xc-overview">), there's no
guarantee that warm-standbys are synchronized. If you use
warm standby for high availability feature, there's a risk
- that data in coordinators and datanodes are inconsistent.
+      that data in Coordinators and Datanodes become inconsistent.
It is highly recommended to use <command>BARRIER</command>.
</para>
</footnote>
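<para>
 As a hedged sketch of that recommendation (the barrier name is arbitrary),
 a barrier is created from any Coordinator and later referenced when all
 nodes are recovered to the same consistent point:
<programlisting>
# From any Coordinator: create a cluster-wide recovery point
psql -c "CREATE BARRIER 'nightly_backup'"
# In recovery.conf on every Coordinator and Datanode (sketch):
#   recovery_target_barrier = 'nightly_backup'
</programlisting>
</para>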
diff --git a/doc-xc/src/sgml/ref/alter_database.sgmlin b/doc-xc/src/sgml/ref/alter_database.sgmlin
index 40d3b010a6..a51cc30496 100644
--- a/doc-xc/src/sgml/ref/alter_database.sgmlin
+++ b/doc-xc/src/sgml/ref/alter_database.sgmlin
@@ -102,7 +102,7 @@ ALTER DATABASE <replaceable class="PARAMETER">name</replaceable> RESET ALL
&xconly;
<para>
If there's any live connection to any of the template database in
- coordinator or datanode, you will have an error message. In this
+ Coordinator or Datanode, you will have an error message. In this
case, you should clean these connections using <command>CLEAN
    CONNECTION</> statement.
</para>
diff --git a/doc-xc/src/sgml/ref/alter_node.sgmlin b/doc-xc/src/sgml/ref/alter_node.sgmlin
index 042a6d99ff..d52c994d5f 100644
--- a/doc-xc/src/sgml/ref/alter_node.sgmlin
+++ b/doc-xc/src/sgml/ref/alter_node.sgmlin
@@ -105,7 +105,7 @@ ALTER NODE <replaceable class="parameter">nodename</replaceable> WITH
<listitem>
<para>
The node type for given cluster node. Possible values are:
- 'coordinator' for a Coordinator node and 'datanode' for a
+ 'Coordinator' for a Coordinator node and 'Datanode' for a
Datanode node.
</para>
</listitem>
diff --git a/doc-xc/src/sgml/ref/checkpoint.sgmlin b/doc-xc/src/sgml/ref/checkpoint.sgmlin
index 3910561eb9..f5932228ca 100644
--- a/doc-xc/src/sgml/ref/checkpoint.sgmlin
+++ b/doc-xc/src/sgml/ref/checkpoint.sgmlin
@@ -57,7 +57,7 @@ CHECKPOINT
&xconly;
<para>
    In <productname>Postgres-XC</>, <command>CHECKPOINT</> is
- performed at local coordinator and allthe underlying datanodes.
+    performed at the local Coordinator and all the underlying Datanodes.
</para>
<!## end>
</refsect1>
diff --git a/doc-xc/src/sgml/ref/cluster.sgmlin b/doc-xc/src/sgml/ref/cluster.sgmlin
index df198ff16d..7b7c4270ee 100644
--- a/doc-xc/src/sgml/ref/cluster.sgmlin
+++ b/doc-xc/src/sgml/ref/cluster.sgmlin
@@ -91,7 +91,7 @@ CLUSTER [VERBOSE]
&xconly;
<para>
In <productname>Postgres-XC</>, <command>CLUSTER</> will be
- executed on all the datanodes as well.
+ executed on all the Datanodes as well.
</para>
<!## end>
</refsect1>
diff --git a/doc-xc/src/sgml/ref/commit_prepared.sgmlin b/doc-xc/src/sgml/ref/commit_prepared.sgmlin
index 2008b20422..b76831d9bf 100644
--- a/doc-xc/src/sgml/ref/commit_prepared.sgmlin
+++ b/doc-xc/src/sgml/ref/commit_prepared.sgmlin
@@ -72,7 +72,7 @@ COMMIT PREPARED <replaceable class="PARAMETER">transaction_id</replaceable>
<!## XC>
&xconly;
<para>
- If more than one datanode and/or coordinator are involved in the
+ If more than one Datanode and/or Coordinator are involved in the
transaction, <command>COMMIT PREPARED</> command will propagate to
all these nodes.
</para>
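<para>
 For illustration (the table and the transaction identifier are taken as
 examples), the usual two-phase sequence issued through a Coordinator is:
<programlisting>
# Prepare on one Coordinator; max_prepared_transactions must be non-zero
psql -c "BEGIN; UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222; PREPARE TRANSACTION 'tx_example'"
# The commit is then propagated to every node that took part
psql -c "COMMIT PREPARED 'tx_example'"
</programlisting>
</para>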
diff --git a/doc-xc/src/sgml/ref/copy.sgmlin b/doc-xc/src/sgml/ref/copy.sgmlin
index 5f01152ece..f39bbd4625 100644
--- a/doc-xc/src/sgml/ref/copy.sgmlin
+++ b/doc-xc/src/sgml/ref/copy.sgmlin
@@ -193,7 +193,7 @@ COPY { <replaceable class="parameter">table_name</replaceable> [ ( <replaceable
&xconly;
<para>
In <productname>Postgres-XC</>, OIDs are only maintained locally
- in coordinators and datanodes. The value of <literal>OIDs</>
+ in Coordinators and Datanodes. The value of <literal>OIDs</>
may conflict if you copy this value to another table.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/ref/create_aggregate.sgmlin b/doc-xc/src/sgml/ref/create_aggregate.sgmlin
index 1c0c2ea111..e2fa36af14 100644
--- a/doc-xc/src/sgml/ref/create_aggregate.sgmlin
+++ b/doc-xc/src/sgml/ref/create_aggregate.sgmlin
@@ -141,7 +141,7 @@ CREATE AGGREGATE <replaceable class="PARAMETER">name</replaceable> (
<listitem>
<para>
Two phased aggregation - is used when the entire aggregation takes place on
- the coordinator node. In first phase called transition phase,
+     the Coordinator node. In the first phase, called the transition phase,
<productname>Postgres-XC</productname> creates a temporary variable
<!## end>
of data type <replaceable class="PARAMETER">stype</replaceable>
@@ -165,14 +165,14 @@ CREATE AGGREGATE <replaceable class="PARAMETER">name</replaceable> (
<listitem>
<para>
Three phased aggregation - is used when the process of aggregation is divided
- between coordinator and data nodes. In this mode, each
+     between the Coordinator and the data nodes. In this mode, each
<productname>Postgres-XC</productname> data node involved in the query carries
out the first phase named transition phase. This phase is similar to the first
phase in the two phased aggregation mode discussed above, except that, every
data node applies this phase on the rows available at the data node. The
- result of transition phase is then transferred to the coordinator node.
- Second phase called collection phase takes place on the coordinator.
- <productname>Postgres-XC</productname> coordinator node creates a temporary variable
+     result of the transition phase is then transferred to the Coordinator node.
+     The second phase, called the collection phase, takes place on the Coordinator.
+ <productname>Postgres-XC</productname> Coordinator node creates a temporary variable
of data type <replaceable class="PARAMETER">stype</replaceable>
to hold the current internal state of the collection phase. For every input
from the data node (result of transition phase on that node), the collection
diff --git a/doc-xc/src/sgml/ref/create_database.sgmlin b/doc-xc/src/sgml/ref/create_database.sgmlin
index 7cbe631324..dd6d17930b 100644
--- a/doc-xc/src/sgml/ref/create_database.sgmlin
+++ b/doc-xc/src/sgml/ref/create_database.sgmlin
@@ -83,7 +83,7 @@ CREATE DATABASE <replaceable class="PARAMETER">name</replaceable>
&xconly;
<para>
If there's any live connection to any of the template database in
- coordinator or datanode, you will have an error message. In this
+ Coordinator or Datanode, you will have an error message. In this
case, you should clean these connections using <command>CLEAN
CONNECTION</> statement.
</para>
diff --git a/doc-xc/src/sgml/ref/create_function.sgmlin b/doc-xc/src/sgml/ref/create_function.sgmlin
index 5db7c3f4ac..f8a8ab9ff2 100644
--- a/doc-xc/src/sgml/ref/create_function.sgmlin
+++ b/doc-xc/src/sgml/ref/create_function.sgmlin
@@ -493,7 +493,7 @@ CREATE [ OR REPLACE ] FUNCTION
&xconly;
<para>
It is user's responsibility to deploy the file to all the
- servers where coordinator or datanode may run.
+ servers where Coordinator or Datanode may run.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/ref/create_index.sgmlin b/doc-xc/src/sgml/ref/create_index.sgmlin
index 74546d6442..6ee1c39bf8 100644
--- a/doc-xc/src/sgml/ref/create_index.sgmlin
+++ b/doc-xc/src/sgml/ref/create_index.sgmlin
@@ -89,8 +89,8 @@ CREATE [ UNIQUE ] INDEX [ <replaceable class="parameter">name</replaceable> ] ON
&xconly;
<para>
Please note that indexes are maintained only locally in each
- coordinator and/or datanodes. They do not have any information on
- the column value outside coordinator or datanode.
+ Coordinator and/or Datanodes. They do not have any information on
+ the column value outside Coordinator or Datanode.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/ref/create_node.sgmlin b/doc-xc/src/sgml/ref/create_node.sgmlin
index c6e3c97622..75b84e8874 100644
--- a/doc-xc/src/sgml/ref/create_node.sgmlin
+++ b/doc-xc/src/sgml/ref/create_node.sgmlin
@@ -110,7 +110,7 @@ CREATE NODE <replaceable class="parameter">nodename</replaceable> WITH
<listitem>
<para>
The node type for given cluster node. Possible values are:
- 'coordinator' for a Coordinator node and 'datanode' for a
+ 'Coordinator' for a Coordinator node and 'Datanode' for a
Datanode node.
</para>
</listitem>
@@ -155,7 +155,7 @@ CREATE NODE <replaceable class="parameter">nodename</replaceable> WITH
<para>
Create a Coordinator node located on local machine using port 6543
<programlisting>
-CREATE NODE node2 WITH (TYPE = 'coordinator', HOST = 'localhost', PORT = 6543);
+CREATE NODE node2 WITH (TYPE = 'Coordinator', HOST = 'localhost', PORT = 6543);
</programlisting>
</para>
@@ -163,7 +163,7 @@ CREATE NODE node2 WITH (TYPE = 'coordinator', HOST = 'localhost', PORT = 6543);
Create a Datanode which is a preferred and primary node
located on remote machine with IP '192.168.0.3' on port 8888.
<programlisting>
-CREATE NODE node2 WITH (TYPE = 'datanode', HOST = '192.168.0.3', PORT = 8888, PRIMARY, PREFERRED);
+CREATE NODE node2 WITH (TYPE = 'Datanode', HOST = '192.168.0.3', PORT = 8888, PRIMARY, PREFERRED);
</programlisting>
</para>
diff --git a/doc-xc/src/sgml/ref/create_nodegroup.sgmlin b/doc-xc/src/sgml/ref/create_nodegroup.sgmlin
index 00e664831d..dfe46f922a 100644
--- a/doc-xc/src/sgml/ref/create_nodegroup.sgmlin
+++ b/doc-xc/src/sgml/ref/create_nodegroup.sgmlin
@@ -76,9 +76,9 @@ WITH <replaceable class="parameter">nodename</replaceable> [, ... ]
<title>Examples</title>
<para>
- Create a cluster node group made of nodes called datanode1, datanode2.
+ Create a cluster node group made of nodes called Datanode1, Datanode2.
<programlisting>
-CREATE NODE GROUP cluster_group WITH datanode1, datanode2;
+CREATE NODE GROUP cluster_group WITH Datanode1, Datanode2;
</programlisting>
</para>
diff --git a/doc-xc/src/sgml/ref/create_table.sgmlin b/doc-xc/src/sgml/ref/create_table.sgmlin
index 9679874251..4e529f9414 100644
--- a/doc-xc/src/sgml/ref/create_table.sgmlin
+++ b/doc-xc/src/sgml/ref/create_table.sgmlin
@@ -874,8 +874,8 @@ CREATE TABLE <replaceable class="PARAMETER">table_name</replaceable>
&xconly;
<para>
In <productname>Postgres-XC</>, OID is kept locally in each
- datanode and coordinator. The OID value may inconsistent for
- rows stored in different datanodes.
+    Datanode and Coordinator. The OID value may be inconsistent for
+ rows stored in different Datanodes.
</para>
<!## end>
</listitem>
@@ -959,7 +959,7 @@ CREATE TABLE <replaceable class="PARAMETER">table_name</replaceable>
<listitem>
&xconly;
<para>
- This clause specifies how the table is distributed or replicated among datanodes.
+ This clause specifies how the table is distributed or replicated among Datanodes.
</para>
<variablelist>
@@ -969,7 +969,7 @@ CREATE TABLE <replaceable class="PARAMETER">table_name</replaceable>
<listitem>
<para>
Each row of the table will be replicated into all the
- datanode of the <productname>Postgres-XC</> database
+      Datanodes of the <productname>Postgres-XC</> database
cluster.
</para>
</listitem>
@@ -979,9 +979,9 @@ CREATE TABLE <replaceable class="PARAMETER">table_name</replaceable>
<term><literal>ROUND ROBIN</literal></term>
<listitem>
<para>
- Each row of the table will be placed in one of the datanodes
+ Each row of the table will be placed in one of the Datanodes
       in a round-robin manner. The value of the row will not be
- needed to determine what datanode to go.
+      needed to determine which Datanode it goes to.
</para>
</listitem>
</varlistentry>
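<para>
 To make the clause concrete, a small sketch (the table names are invented)
 creating one replicated table and one round-robin table:
<programlisting>
# Every Datanode keeps a full copy of this table
psql -c "CREATE TABLE countries (code char(2), name text) DISTRIBUTE BY REPLICATION"
# Rows of this table are spread over the Datanodes in turn
psql -c "CREATE TABLE event_log (ts timestamptz, msg text) DISTRIBUTE BY ROUND ROBIN"
</programlisting>
</para>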
@@ -1783,8 +1783,8 @@ CREATE TABLE employees OF employee_type (
</para>
<para>
In <productname>Postgres-XC</>, OID is kept locally in each
- datanode and coordinator. The OID value may inconsistent for rows
- stored in different datanodes.
+    Datanode and Coordinator. The OID value may be inconsistent for rows
+ stored in different Datanodes.
</para>
</refsect2>
diff --git a/doc-xc/src/sgml/ref/create_table_as.sgmlin b/doc-xc/src/sgml/ref/create_table_as.sgmlin
index 9437692c8b..ff7fb59b28 100644
--- a/doc-xc/src/sgml/ref/create_table_as.sgmlin
+++ b/doc-xc/src/sgml/ref/create_table_as.sgmlin
@@ -222,7 +222,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE <replaceable
<listitem>
&xconly;
<para>
- This clause specifies how the table is distributed or replicated among datanodes.
+ This clause specifies how the table is distributed or replicated among Datanodes.
</para>
<variablelist>
@@ -232,7 +232,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE <replaceable
<listitem>
<para>
Each row of the table will be replicated into all the
- datanode of the <productname>Postgres-XC</> database
+      Datanodes of the <productname>Postgres-XC</> database
cluster.
</para>
</listitem>
@@ -242,9 +242,9 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE <replaceable
<term><literal>ROUND ROBIN</literal></term>
<listitem>
<para>
- Each row of the table will be placed in one of the datanodes
+ Each row of the table will be placed in one of the Datanodes
       in a round-robin manner. The value of the row will not be
- needed to determine what datanode to go.
+      needed to determine which Datanode it goes to.
</para>
</listitem>
</varlistentry>
diff --git a/doc-xc/src/sgml/ref/create_tablespace.sgmlin b/doc-xc/src/sgml/ref/create_tablespace.sgmlin
index 39be9b32d2..e3f9ff21c9 100644
--- a/doc-xc/src/sgml/ref/create_tablespace.sgmlin
+++ b/doc-xc/src/sgml/ref/create_tablespace.sgmlin
@@ -112,7 +112,7 @@ CREATE TABLESPACE <replaceable class="parameter">tablespace_name</replaceable> [
&xconly;
<para>
<productname>Postgres-XC</> assigns the same path to a tablespace
- for all the coordinators and datanodes. So when creating a tablespace,
+ for all the Coordinators and Datanodes. So when creating a tablespace,
user needs to have permission to the same location path on all the servers
involved in the cluster.
</para>
diff --git a/doc-xc/src/sgml/ref/drop_database.sgmlin b/doc-xc/src/sgml/ref/drop_database.sgmlin
index 1368a70ed9..11ff0446bb 100644
--- a/doc-xc/src/sgml/ref/drop_database.sgmlin
+++ b/doc-xc/src/sgml/ref/drop_database.sgmlin
@@ -47,7 +47,7 @@ DROP DATABASE [ IF EXISTS ] <replaceable class="PARAMETER">name</replaceable>
&xconly;
<para>
If there's any live connection to any of the template database in
- coordinator or datanode, you will have an error message. In this
+ Coordinator or Datanode, you will have an error message. In this
case, you should clean these connections using <command>CLEAN
    CONNECTION</> statement.
</para>
diff --git a/doc-xc/src/sgml/ref/explain.sgmlin b/doc-xc/src/sgml/ref/explain.sgmlin
index 24eae5c167..82a2f5000d 100644
--- a/doc-xc/src/sgml/ref/explain.sgmlin
+++ b/doc-xc/src/sgml/ref/explain.sgmlin
@@ -87,8 +87,8 @@ EXPLAIN [ ANALYZE ] [ VERBOSE ] <replaceable class="parameter">statement</replac
<!## XC>
<para>
Please note that <command>explain</command> explains local plan at
- the coordinator only. If you'd like to see remote plan at
- datanodes, use <filename>auto_explain</filename> package
+    the Coordinator only.  If you'd like to see the remote plans at the
+    Datanodes, use the <filename>auto_explain</filename> package
     described in <xref linkend="auto-explain">.
</para>
<!## end>
@@ -184,7 +184,7 @@ ROLLBACK;
<term><literal>NODES</literal></term>
<listitem>
<para>
- Include information on the datanodes involved in the execution of Data
+ Include information on the Datanodes involved in the execution of Data
Scan Node. This parameter defaults to <literal>TRUE</literal>. This option
is available in <productname>Postgres-XC</productname>.
</para>
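<para>
 For example (the query is arbitrary), the node information can be suppressed
 like this:
<programlisting>
psql -c "EXPLAIN (NODES false) SELECT * FROM accounts"
</programlisting>
</para>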
diff --git a/doc-xc/src/sgml/ref/gtm.sgmlin b/doc-xc/src/sgml/ref/gtm.sgmlin
index 9c9c0e6340..36f0c70ca4 100644
--- a/doc-xc/src/sgml/ref/gtm.sgmlin
+++ b/doc-xc/src/sgml/ref/gtm.sgmlin
@@ -247,7 +247,7 @@ PostgreSQL documentation
       Valid values are <literal>act</literal> or <literal>standby</literal>.
<literal>act</literal> means to start up
this <application>gtm</application> as usual so
- that <application>gtm</application> clients (coordinators, data
+ that <application>gtm</application> clients (Coordinators, data
nodes or gtm-proxies) can connect for transaction
management. <literal>standby</literal> means
this <application>gtm</application> starts up as a backup
@@ -348,7 +348,7 @@ PostgreSQL documentation
To find the precise value to start with, you should
run <application>pg_controldata</application> to
find <literal>Latest checkpoint's NextXID</literal> of all the
- coordinators and datanodes and choose the value larger than or
+ Coordinators and Datanodes and choose the value larger than or
equals to the maximum value found.
</para>
</listitem>
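<para>
 A hedged sketch of that procedure, with example data directories and an
 example starting value:
<programlisting>
# Collect the NextXID of every Coordinator and Datanode ...
pg_controldata /usr/local/pgsql/coordinator | grep "NextXID"
pg_controldata /usr/local/pgsql/datanode | grep "NextXID"
# ... then start gtm with a value at least as large as the maximum found
gtm -x 2500 -D /usr/local/pgsql/gtm -h localhost -p 20001
</programlisting>
</para>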
diff --git a/doc-xc/src/sgml/ref/gtm_proxy.sgmlin b/doc-xc/src/sgml/ref/gtm_proxy.sgmlin
index 85ac3201df..fc09f3da2b 100644
--- a/doc-xc/src/sgml/ref/gtm_proxy.sgmlin
+++ b/doc-xc/src/sgml/ref/gtm_proxy.sgmlin
@@ -34,8 +34,8 @@ PostgreSQL documentation
</title>
&xconly
<para>
- Gtm proxy provides proxy feature from Postgres-XC coordinator and
- datanode to gtm. Gtm proxy groups connections and interactions
+    Gtm proxy provides a proxy feature from Postgres-XC Coordinators and
+    Datanodes to gtm.  Gtm proxy groups connections and interactions
between gtm and other Postgres-XC components to reduce both the
number of interactions and the size of messages.
</para>
diff --git a/doc-xc/src/sgml/ref/load.sgmlin b/doc-xc/src/sgml/ref/load.sgmlin
index f404684a03..a1d6541e6a 100644
--- a/doc-xc/src/sgml/ref/load.sgmlin
+++ b/doc-xc/src/sgml/ref/load.sgmlin
@@ -64,10 +64,10 @@ LOAD '<replaceable class="PARAMETER">filename</replaceable>'
<!## XC>
<para>
Please note that <command>LOAD</command> command loads library only
- locally. You should load library manually in each datanode and
- coordinator (you can use psql directly to datanodes for this
+    locally.  You should load the library manually in each Datanode and
+ Coordinator (you can use psql directly to Datanodes for this
     purpose), or edit <filename>postgresql.conf</filename> for all the
- datanodes and coordinators.
+ Datanodes and Coordinators.
</para>
<!## end>
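<para>
 As an illustration (the ports and the library name are example values),
 loading a library on each node by connecting to it directly:
<programlisting>
psql -p 5432 -c "LOAD 'auto_explain'"    # Coordinator
psql -p 15432 -c "LOAD 'auto_explain'"   # first Datanode
psql -p 15433 -c "LOAD 'auto_explain'"   # second Datanode
</programlisting>
</para>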
diff --git a/doc-xc/src/sgml/ref/pg_controldata.sgmlin b/doc-xc/src/sgml/ref/pg_controldata.sgmlin
index 38a8ad3c67..5ce721aae3 100644
--- a/doc-xc/src/sgml/ref/pg_controldata.sgmlin
+++ b/doc-xc/src/sgml/ref/pg_controldata.sgmlin
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refpurpose>display control information of a <productname>PostgreSQL</productname> database cluster</refpurpose>
<!## end>
<!## XC>
- <refpurpose>display control information of a coordinator or datanode database cluster of <productname>Postgres-XC</productname></refpurpose>
+ <refpurpose>display control information of a Coordinator or Datanode database cluster of <productname>Postgres-XC</productname></refpurpose>
<!## end>
</refnamediv>
@@ -58,7 +58,7 @@ PostgreSQL documentation
<!## XC>
&xconly;
<para>
- To print information of each datanode and coordinator, you should
+ To print information of each Datanode and Coordinator, you should
issue <command>pg_controldata</> against each of them.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/ref/pg_ctl-ref.sgmlin b/doc-xc/src/sgml/ref/pg_ctl-ref.sgmlin
index d4219afcd8..811c18d823 100644
--- a/doc-xc/src/sgml/ref/pg_ctl-ref.sgmlin
+++ b/doc-xc/src/sgml/ref/pg_ctl-ref.sgmlin
@@ -251,7 +251,7 @@ PostgreSQL documentation
<term><option>-C</option></term>
<listitem>
<para>
- Specifies to run as a coordinator. Only for Postgres-XC.
+ Specifies to run as a Coordinator. Only for Postgres-XC.
</para>
</listitem>
</varlistentry>
@@ -358,8 +358,8 @@ PostgreSQL documentation
<term><option>-Z<replaceable class="parameter">nodeopt</replaceable></option></term>
<listitem>
<para>
- Specifies to run as coordinator or as datanode. You should
- specify coordinator or datanode as <replaceable>nodeopt</replaceable>. Only for Postgres-XC.
+       Specifies to run as a Coordinator or as a Datanode. You should
+       specify <literal>coordinator</literal> or <literal>datanode</literal> as <replaceable>nodeopt</replaceable>. Only for Postgres-XC.
</para>
</listitem>
</varlistentry>
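<para>
 For instance (the data directory paths are examples), starting a Coordinator
 and a Datanode with <command>pg_ctl</>:
<programlisting>
pg_ctl start -D /usr/local/pgsql/coordinator -Z coordinator
pg_ctl start -D /usr/local/pgsql/datanode -Z datanode
</programlisting>
</para>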
@@ -415,7 +415,7 @@ PostgreSQL documentation
<term><option>-X</option></term>
<listitem>
<para>
- Specifies to run as a datanode. Only for Postgres-XC.
+ Specifies to run as a Datanode. Only for Postgres-XC.
</para>
</listitem>
</varlistentry>
@@ -504,7 +504,7 @@ PostgreSQL documentation
<!## XC>
<para>
- This command controls individual coordinator or datanode.
+ This command controls individual Coordinator or Datanode.
</para>
<!## end>
</refsect1>
diff --git a/doc-xc/src/sgml/ref/pg_resetxlog.sgmlin b/doc-xc/src/sgml/ref/pg_resetxlog.sgmlin
index c6f31dcbf6..009447b261 100644
--- a/doc-xc/src/sgml/ref/pg_resetxlog.sgmlin
+++ b/doc-xc/src/sgml/ref/pg_resetxlog.sgmlin
@@ -208,8 +208,8 @@ PostgreSQL documentation
<!## XC>
<para>
In <productname>Postgres-XC</>, <command>pg_resetxlog</command>
- will run only for local coordinator or datanode. You should run it
- for each coordinator or datanode manually.
+ will run only for local Coordinator or Datanode. You should run it
+ for each Coordinator or Datanode manually.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/ref/pgxc_clean-ref.sgmlin b/doc-xc/src/sgml/ref/pgxc_clean-ref.sgmlin
index 577633e2d2..18638614e3 100644
--- a/doc-xc/src/sgml/ref/pgxc_clean-ref.sgmlin
+++ b/doc-xc/src/sgml/ref/pgxc_clean-ref.sgmlin
@@ -41,7 +41,7 @@ PostgreSQL documentation
with other nodes. <application>pgxc_clean</application> checks transaction commit status and corrects them.
</para>
<para>
- You should run this utility against one of the available coordinators. THe tool cleans up transaction status
+   You should run this utility against one of the available Coordinators. The tool cleans up the transaction status
of all the nodes automatically.
</para>
</refsect1>
@@ -76,7 +76,7 @@ PostgreSQL documentation
<term><option>--command=<replaceable class="parameter">hostname</replaceable></></term>
<listitem>
<para>
- Hostname of the coordinator to connect to.
+ Hostname of the Coordinator to connect to.
</para>
</listitem>
</varlistentry>
@@ -108,7 +108,7 @@ PostgreSQL documentation
<term><option>--port=<replaceable class="parameter">port_number</replaceable></></term>
<listitem>
<para>
- Specifies the port number of the coordinator.
+ Specifies the port number of the Coordinator.
</para>
</listitem>
</varlistentry>
diff --git a/doc-xc/src/sgml/ref/pgxc_ddl.sgmlin b/doc-xc/src/sgml/ref/pgxc_ddl.sgmlin
index 4397805dee..4a8ea91131 100644
--- a/doc-xc/src/sgml/ref/pgxc_ddl.sgmlin
+++ b/doc-xc/src/sgml/ref/pgxc_ddl.sgmlin
@@ -34,12 +34,12 @@ PostgreSQL documentation
</title>
&xconly
<para>
- pgxc_ddl is used to synchronize all coordinator catalog tables from
+ pgxc_ddl is used to synchronize all Coordinator catalog tables from
one chosen by a user. It is also possible to launch a DDL on one
-coordinator, and then to synchronize all the coordinator catalog
-tables from the catalog table of the coordinator having received the
-DDL. Copy method is cold-based. All the coordinators are stopped,
-catalog files are copied, then all the coordinators are restarted.
+Coordinator, and then to synchronize all the Coordinator catalog
+tables from the catalog table of the Coordinator having received the
+DDL. The copy method is cold-based: all the Coordinators are stopped,
+catalog files are copied, and then all the Coordinators are restarted.
</para>
<para>
@@ -76,7 +76,7 @@ catalog files are copied, then all the coordinators are restarted.
<listitem>
<para>
Specify <filename>pgxc.conf</filename> folder, for
- characteristics of all the coordinators.
+ characteristics of all the Coordinators.
</para>
</listitem>
</varlistentry>
@@ -123,7 +123,7 @@ catalog files are copied, then all the coordinators are restarted.
<listitem>
<para>
Specify temporary folder where to copy the configuration files
- postgresql.conf and pg_hba.conf for each coordinator.
+ postgresql.conf and pg_hba.conf for each Coordinator.
</para>
</listitem>
</varlistentry>
@@ -240,7 +240,7 @@ gtm_ctl status -S gtm -D datafolder
</title>
&xconly
<para>
- Because <command>pgxc_ddl</command> requires access to coordinator
+ Because <command>pgxc_ddl</command> requires access to Coordinator
configuration file and data folders, the following parameters have
to be set in <filename>pgxc.conf</filename>:
</para>
@@ -259,32 +259,32 @@ gtm_ctl status -S gtm -D datafolder
<tbody>
<row>
- <entry><varname>coordinator_ports</varname></entry>
+      <entry><varname>coordinator_ports</varname></entry>
<entry>string</entry>
<entry>
- Specify the port number of all the coordinators. Maintain the
- order of the value same as those in coordinator_hosts. It is
+ Specify the port number of all the Coordinators. Maintain the
+        order of the values the same as in coordinator_hosts.  It is
necessary to specify a number of ports equal to the number of
hosts. A comma separator is also necessary.
</entry>
</row>
<row>
- <entry><varname>coordinator_folders</varname></entry>
+      <entry><varname>coordinator_folders</varname></entry>
<entry>string</entry>
<entry>
- Specify the data folders of all the coordinators. Maintain the
- order of the value same as those in coordinator_hosts. It is
+ Specify the data folders of all the Coordinators. Maintain the
+        order of the values the same as in coordinator_hosts.  It is
necessary to specify a number of data folders equal to the
number of hosts. A comma separator is also necessary.
</entry>
</row>
<row>
- <entry><varname>coordinator_hosts</varname></entry>
+      <entry><varname>coordinator_hosts</varname></entry>
<entry>string</entry>
<entry>
- Specify the host name or IP address of coordinator. Separate
+        Specify the host names or IP addresses of the Coordinators. Separate
         each value with a comma.
</entry>
</row>
@@ -303,9 +303,9 @@ gtm_ctl status -S gtm -D datafolder
<note>
<para>
- Configuration files of coordinators that have their catalog files
+ Configuration files of Coordinators that have their catalog files
changed are defined with an extension name postgresql.conf.number,
- "number" being the number of t coordinator in the order defined
+ "number" being the number of t Coordinator in the order defined
in <filename>pgxc.conf</filename>.
</para>
</note>
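<para>
 Putting the three parameters together, a hedged <filename>pgxc.conf</>
 excerpt for two Coordinators (host names, ports and folders are example
 values only):
<programlisting>
# pgxc.conf (excerpt, example values)
coordinator_hosts   = 'coord1,coord2'
coordinator_ports   = '5432,5432'
coordinator_folders = '/usr/local/pgsql/coordinator,/usr/local/pgsql/coordinator'
</programlisting>
</para>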
diff --git a/doc-xc/src/sgml/ref/postgres-ref.sgmlin b/doc-xc/src/sgml/ref/postgres-ref.sgmlin
index ba155bf1df..b3f66d10f7 100644
--- a/doc-xc/src/sgml/ref/postgres-ref.sgmlin
+++ b/doc-xc/src/sgml/ref/postgres-ref.sgmlin
@@ -153,7 +153,7 @@ PostgreSQL documentation
<listitem>
&xconly;
<para>
- Specifies postgres server should run as a coordinator.
+       Specifies that the postgres server should run as a Coordinator.
</para>
</listitem>
</varlistentry>
@@ -373,7 +373,7 @@ PostgreSQL documentation
<listitem>
&xconly;
<para>
- Specifies postgres server should run as a datanode.
+       Specifies that the postgres server should run as a Datanode.
</para>
</listitem>
</varlistentry>
@@ -833,7 +833,7 @@ PostgreSQL documentation
To start <command>postgres</command> in the background
<!## end>
<!## XC>
- To start <command>postgres</command> as a datanode in the background
+ To start <command>postgres</command> as a Datanode in the background
<!## end>
using default values, type:
@@ -854,7 +854,7 @@ PostgreSQL documentation
To start <command>postgres</command> with a specific
<!## end>
<!## XC>
- To start <command>postgres</command> as a coordinator with a specific
+ To start <command>postgres</command> as a Coordinator with a specific
<!## end>
port, e.g. 1234:
<!## PG>
diff --git a/doc-xc/src/sgml/ref/postmaster.sgmlin b/doc-xc/src/sgml/ref/postmaster.sgmlin
index 498f612cef..f1c83e3d24 100644
--- a/doc-xc/src/sgml/ref/postmaster.sgmlin
+++ b/doc-xc/src/sgml/ref/postmaster.sgmlin
@@ -16,7 +16,7 @@ PostgreSQL documentation
<refpurpose><productname>PostgreSQL</productname> database server</refpurpose>
<!## end>
<!## XC>
- <refpurpose><productname>Postgres-XC</productname> database server for coordinator or datanode</refpurpose>
+ <refpurpose><productname>Postgres-XC</productname> database server for Coordinator or Datanode</refpurpose>
<!## end>
</refnamediv>
diff --git a/doc-xc/src/sgml/ref/prepare_transaction.sgmlin b/doc-xc/src/sgml/ref/prepare_transaction.sgmlin
index bf62e19741..a6a10543dd 100644
--- a/doc-xc/src/sgml/ref/prepare_transaction.sgmlin
+++ b/doc-xc/src/sgml/ref/prepare_transaction.sgmlin
@@ -126,8 +126,8 @@ PREPARE TRANSACTION <replaceable class="PARAMETER">transaction_id</replaceable>
<!## XC>
&xconly;
<para>
- If the transaction is involved by more than one datanode and/or
- coordinator, <command>COMMIT PREPARED</> will be propagated to
+ If the transaction is involved by more than one Datanode and/or
+ Coordinator, <command>COMMIT PREPARED</> will be propagated to
these nodes.
</para>
diff --git a/doc-xc/src/sgml/ref/rollback_prepared.sgmlin b/doc-xc/src/sgml/ref/rollback_prepared.sgmlin
index f9b091fda6..97b0ecf50f 100644
--- a/doc-xc/src/sgml/ref/rollback_prepared.sgmlin
+++ b/doc-xc/src/sgml/ref/rollback_prepared.sgmlin
@@ -78,7 +78,7 @@ ROLLBACK PREPARED <replaceable class="PARAMETER">transaction_id</replaceable>
<!## XC>
&xconly;
<para>
- If more than one datanode and/or coordinator are involved in the
+ If more than one Datanode and/or Coordinator are involved in the
transaction, <command>ROLLBACK PREPARED</> command will propagate to
all these nodes.
</para>
diff --git a/doc-xc/src/sgml/ref/vacuum.sgmlin b/doc-xc/src/sgml/ref/vacuum.sgmlin
index 48b2c03c79..74f6996ab3 100644
--- a/doc-xc/src/sgml/ref/vacuum.sgmlin
+++ b/doc-xc/src/sgml/ref/vacuum.sgmlin
@@ -86,7 +86,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] ANALYZE [ <replaceable class="PARAMETER">
&xconly;
<para>
In <productname>Postgres-XC</>, <command>VACUUM</> will be performed
- on all the datanodes as well.
+ on all the Datanodes as well.
</para>
<!## end>
</refsect1>
diff --git a/doc-xc/src/sgml/ref/vacuumdb.sgmlin b/doc-xc/src/sgml/ref/vacuumdb.sgmlin
index fd46e3be02..25e620ffa3 100644
--- a/doc-xc/src/sgml/ref/vacuumdb.sgmlin
+++ b/doc-xc/src/sgml/ref/vacuumdb.sgmlin
@@ -77,7 +77,7 @@ PostgreSQL documentation
&xconly;
<para>
In <productname>Postgres-XC</>, <command>VACUUM</> will be
- performed in all the datanodes as well.
+ performed in all the Datanodes as well.
</para>
<!## end>
diff --git a/doc-xc/src/sgml/release-xc-1.0.sgmlin b/doc-xc/src/sgml/release-xc-1.0.sgmlin
index 6c25cd13a1..1a47519b19 100644
--- a/doc-xc/src/sgml/release-xc-1.0.sgmlin
+++ b/doc-xc/src/sgml/release-xc-1.0.sgmlin
@@ -480,18 +480,18 @@
<para>Maximum number of connections in pool</para>
</listitem>
<listitem>
- <para><varname>persistent_datanode_connections</></para>
+      <para><varname>persistent_datanode_connections</></para>
<para>On/off switch to make sessions keep the same connections when used.
This may be useful in lower concurrency environments with many session
parameters set.
</para>
</listitem>
<listitem>
- <para><varname>max_coordinators</></para>
+      <para><varname>max_coordinators</></para>
<para>Maximum number of Coordinators that can be defined in local node</para>
</listitem>
<listitem>
- <para><varname>max_datanodes</></para>
+      <para><varname>max_datanodes</></para>
<para>Maximum number of Datanodes that can be defined in local node</para>
</listitem>
<listitem>
diff --git a/doc-xc/src/sgml/runtime.sgmlin b/doc-xc/src/sgml/runtime.sgmlin
index b7e63e074f..2621413c02 100644
--- a/doc-xc/src/sgml/runtime.sgmlin
+++ b/doc-xc/src/sgml/runtime.sgmlin
@@ -78,7 +78,7 @@
&xconly;
<para>
You should initialize <firstterm>database cluster</firstterm> for
- each <firstterm>coordinator</> and <firstterm>data node</>.
+ each <firstterm>Coordinator</> and <firstterm>data node</>.
</para>
<!## end>
&common;
@@ -114,7 +114,7 @@
<productname>Postgres-XC</productname> user account, which is
described in the previous section. You should assign
separate <firstterm>data directory</> to
- each <firstterm>coordinator</> and <firstterm>datanode</> if you
+ each <firstterm>Coordinator</> and <firstterm>Datanode</> if you
are configuring them in a same server.
<!## end>
</para>
@@ -127,7 +127,7 @@
</para>
<!## XC>
<para>
- If you configure multiple <firstterm>coordinator</>
+    If you configure multiple <firstterm>Coordinators</>
     and/or <firstterm>data nodes</>, you cannot
     share <envar>PGDATA</envar> among them and you must
     specify <firstterm>data directory</> explicitly.
@@ -301,10 +301,10 @@ postgres$ <userinput>initdb -D /usr/local/pgsql/data</userinput>
     initialize these databases.
Coordinator holds just database catalog and temporary data store.
Datanode holds most of your data.
- First of all, you should determine how many coordinators/datanodes
+ First of all, you should determine how many Coordinators/Datanodes
to run and where they should run.
- It is a good convention that you run a coordinator where you run a
- datanode.
+ It is a good convention that you run a Coordinator where you run a
+ Datanode.
In this case, you should run <filename>GTM-Proxy</> on the same
server too.
     It simplifies <productname>XC</> configuration and helps to make
@@ -312,7 +312,7 @@ postgres$ <userinput>initdb -D /usr/local/pgsql/data</userinput>
</para>
<para>
- Both <filename>coordinator</> and <filename>datanode</> have their
+ Both <filename>Coordinator</> and <filename>Datanode</> have their
own databases, essentially <productname>PostgreSQL</> databases.
They are separate and you should initialize them separately.
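<para>
 For example (the paths are arbitrary), initializing separate data
 directories for a Coordinator and a Datanode on the same server:
<programlisting>
initdb -D /usr/local/pgsql/coordinator
initdb -D /usr/local/pgsql/datanode
</programlisting>
</para>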
</para>
@@ -325,7 +325,7 @@ postgres$ <userinput>initdb -D /usr/local/pgsql/data</userinput>
GTM provides global transaction management feature to all the
other components in <productname>Postgres-XC</> database cluster.
Because <filename>GTM</> handles transaction requirements from all
- the coordinators and datanodes, it is highly advised to run this
+ the Coordinators and Datanodes, it is highly advised to run this
in a separate server.
</para>
@@ -343,8 +343,8 @@ postgres$ <userinput>initdb -D /usr/local/pgsql/data</userinput>
     Because <filename>GTM</> receives all the requests to begin/end
transactions and to refer to sequence values, you should
run <filename>GTM</> in a separate server. If you
- run <filename>GTM</> in the same server as datanode or
- coordinator, it will become harder to make workload reasonably
+ run <filename>GTM</> in the same server as Datanode or
+ Coordinator, it will become harder to make workload reasonably
balanced.
</para>
<para>
@@ -412,7 +412,7 @@ $ <userinput>gtm_ctl -S gtm start -D /usr/local/pgsql/gtm -o "-i 1 -h gtm -p 200
requirements and deliver responses to them, <filename>GTM</>'s
workload becomes an issue.
      Because <filename>GTM-Proxy</> groups requirements and responses
- from <filename>coordinators</> and <filename>datanode</>, it is
+      from <filename>Coordinators</> and <filename>Datanodes</>, it is
       important to keep the <filename>GTM</> workload at a reasonable level.
</para>
@@ -447,12 +447,12 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
</sect2>
- <sect2 id="datanode-configuration">
+ <sect2 id="Datanode-configuration">
<title>Configuring Datanode</title>
&xconly;
<para>
- Before starting coordinator or datanode, you must configure them.
- You can configure coordinator or datanode by
+      Before starting a Coordinator or Datanode, you must configure it.
+      You can configure a Coordinator or Datanode by
       editing the <filename>postgresql.conf</> file located in its working
       directory, as specified by the <option>-D</> option
       of the <filename>initdb</> command.
@@ -462,7 +462,7 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
Datanode is almost native <productname>PostgreSQL</> with some
extension.
Additional options in <filename>postgresql.conf</> for the
- datanode are as follows:
+ Datanode are as follows:
</para>
<variablelist>
@@ -473,12 +473,12 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
<para>
This value is not just a number of connections you expect to each
- coordinator.
- Each coordinator backend has a chance to connect to all the
- datanode.
- You should specify number of total connections whole coordinator
+ Coordinator.
+       Each Coordinator backend has a chance to connect to every
+       Datanode.
+       You should specify the total number of connections that all the
        Coordinators combined may open to this Datanode.
- For example, if you have five coordinators and each of them may
+ For example, if you have five Coordinators and each of them may
accept forty connections, you should specify 200 as this
parameter value.
</para>
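       To make the arithmetic concrete, the corresponding line in the
       Datanode's <filename>postgresql.conf</> might look like this (the
       figures are just the example numbers above):
<programlisting>
# 5 Coordinators x 40 connections each = 200 connections this Datanode must allow
max_connections = 200
</programlisting>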
@@ -490,8 +490,8 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
<listitem>
<para>
Even though your application does not intend to
- issue <command>PREPARE TRANSACTION</>, coordinator may issue this
- internally when more than one datanode are involved.
+       issue <command>PREPARE TRANSACTION</>, the Coordinator may issue it
+       internally when more than one Datanode is involved.
        You should set this parameter to the same value
as <filename>max_connections</>.
</para>
@@ -502,7 +502,7 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
<term>pgxc_node_name</term>
<listitem>
<para>
- <filename>GTM</> needs to identify each datanode, as specified by
+ <filename>GTM</> needs to identify each Datanode, as specified by
this parameter.
        The value must be unique within the cluster.
</para>
@@ -513,8 +513,8 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
<term>port</term>
<listitem>
<para>
- Because both coordinator and datanode may run on the same server,
- you may want to assign separate port number to the datanode.
+ Because both Coordinator and Datanode may run on the same server,
+       you may want to assign a separate port number to the Datanode.
</para>
</listitem>
</varlistentry>
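      Putting the parameters above together, a Datanode's
      <filename>postgresql.conf</> might contain entries like the following;
      the node name, port, and sizes are illustrative assumptions rather than
      required values:
<programlisting>
max_connections = 200              # total connections all Coordinators may open
max_prepared_transactions = 200    # same value as max_connections
pgxc_node_name = 'datanode1'       # must be unique in the cluster
port = 15432                       # distinct from any Coordinator on the same server
</programlisting>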
@@ -545,11 +545,11 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
</sect2>
- <sect2 id="coordinator-configuration">
+ <sect2 id="Coordinator-configuration">
<title>Configuring Coordinator</title>
&xconly;
<para>
- Although coordinator and datanode shares the same binary, their
+      Although the Coordinator and Datanode share the same binary, their
       configurations differ slightly because of their different roles.
</para>
@@ -559,9 +559,9 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
<term>max_connections</term>
<listitem>
<para>
- You don't have to take other coordinator or datanode into
+       You don't have to take other Coordinators or Datanodes into
account.
- Just specify the number of connections the coordinator accepts
+ Just specify the number of connections the Coordinator accepts
from applications.
</para>
</listitem>
@@ -571,7 +571,7 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
<term>max_prepared_transactions</term>
<listitem>
<para>
- Specify at least total number of coordinators in the cluster.
+       Specify at least the total number of Coordinators in the cluster.
</para>
</listitem>
</varlistentry>
@@ -580,7 +580,7 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
<term>pgxc_node_name</term>
<listitem>
<para>
- <filename>GTM</> needs to identify each datanode, as specified by
+       <filename>GTM</> needs to identify each Coordinator, as specified by
this parameter.
</para>
</listitem>
@@ -590,8 +590,8 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
<term>port</term>
<listitem>
<para>
- Because both coordinator and datanode may run on the same server,
- you may want to assign separate port number to the coordinator.
+ Because both Coordinator and Datanode may run on the same server,
+       you may want to assign a separate port number to the Coordinator.
        It may be convenient to use the default PostgreSQL listen
        port for the Coordinator.
</para>
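       Similarly, a sketch of a Coordinator's <filename>postgresql.conf</>,
       again with illustrative values only:
<programlisting>
max_connections = 100             # connections accepted from applications
max_prepared_transactions = 5     # at least the number of Coordinators in the cluster
pgxc_node_name = 'coord1'         # must be unique in the cluster
port = 5432                       # the PostgreSQL default is convenient here
</programlisting>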
@@ -645,7 +645,7 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
</varlistentry>
<varlistentry>
     <term>max_coordinators</term>
<listitem>
<para>
This is the maximum number of Coordinators that can be configured in the cluster.
@@ -657,7 +657,7 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
</varlistentry>
<varlistentry>
     <term>max_datanodes</term>
<listitem>
<para>
This is the maximum number of Datanodes configured in the cluster.
@@ -683,47 +683,47 @@ $ <userinput>gtm_ctl start -S gtm_proxy -i 1 -D /usr/local/pgsql/gtm_proxy -i 1
</sect2>
- <sect2 id="starting-datanode">
+ <sect2 id="starting-Datanode">
<title>Starting Datanodes</title>
<para>
      Now you can start the central components
- of <productname>Postgres-XC</>, datanode and coordinator.
+     of <productname>Postgres-XC</>: the Datanodes and Coordinators.
      If you're familiar with starting a <productname>PostgreSQL</>
      database server, this step is very similar.
</para>
<para>
- You can start a datanode as follows:
+ You can start a Datanode as follows:
<screen>
$ <userinput>postgres -X -D /usr/local/pgsql/datanode -i</userinput>
</screen>
<option>-X</> specifies <command>postgres</> should run as a
- datanode. <option>-i</> specifies <command>postgres</> to
+     Datanode. <option>-i</> specifies that <command>postgres</> should
      accept TCP/IP connections.
</para>
<para>
- You should start all the datanodes you configured.
+ You should start all the Datanodes you configured.
</para>
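     For instance, if you configured two Datanodes in separate data
     directories (the paths are illustrative), each one is started the same
     way, typically from its own terminal or an init script:
<screen>
$ <userinput>postgres -X -D /usr/local/pgsql/datanode1 -i</userinput>
$ <userinput>postgres -X -D /usr/local/pgsql/datanode2 -i</userinput>
</screen>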
</sect2>
- <sect2 id="starting-coordinator">
+ <sect2 id="starting-Coordinator">
<title>Starting Coordinators</title>
&xconly;
<para>
- You can start a coordinator as follows:
+ You can start a Coordinator as follows:
<screen>
-$ <userinput>postgres -C -D /usr/local/pgsql/datanode -i</userinput>
+$ <userinput>postgres -C -D /usr/local/pgsql/coord -i</userinput>
</screen>
<option>-C</> specifies <command>postgres</> should run as a
- coordinator. <option>-i</> specifies <command>postgres</> to
+     Coordinator. <option>-i</> specifies that <command>postgres</> should
      accept TCP/IP connections.
</para>
<para>
- You should start all the coordinators you configured.
+ You should start all the Coordinators you configured.
</para>
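     Once a Coordinator is running, you can check that it accepts connections
     just as you would with plain <productname>PostgreSQL</>; the port below
     is whatever you set in that Coordinator's <filename>postgresql.conf</>:
<screen>
$ <userinput>psql -p 5432 -c "SELECT 1"</userinput>
</screen>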
</sect2>
@@ -2021,13 +2021,13 @@ echo -17 > /proc/self/oom_adj
&xconly;
<!## XC>
<para>
- This section describes how to shutdown each coordinator and
- datanode.
- Please note that you should shutdown coordinator first and then
- datanodes, GTM-Proxy and GTM.
+    This section describes how to shut down each Coordinator and
+    Datanode.
+    Please note that you should shut down the Coordinators first, then the
+    Datanodes, GTM-Proxy, and GTM.
</para>
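    A sketch of that order, assuming <command>pg_ctl</> and
    <command>gtm_ctl</> are used and the data directories match the earlier
    illustrative examples:
<screen>
$ <userinput>pg_ctl stop -D /usr/local/pgsql/coord -m fast</userinput>
$ <userinput>pg_ctl stop -D /usr/local/pgsql/datanode -m fast</userinput>
$ <userinput>gtm_ctl stop -S gtm_proxy -D /usr/local/pgsql/gtm_proxy</userinput>
$ <userinput>gtm_ctl stop -S gtm -D /usr/local/pgsql/gtm</userinput>
</screen>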
- <sect2 id="coordinator-datanode-shutdown">
+ <sect2 id="Coordinator-Datanode-shutdown">
<title>Shutting Down Coordinators and Datanodes</title>
<!## end>
@@ -2179,7 +2179,7 @@ $ <userinput>kill -INT `head -1 /usr/local/pgsql/data/postmaster.pid`</userinput
&xconly;
<!## XC>
<para>
- Because <productname>Postgres-XC</>'s coordinators and datanodes
+ Because <productname>Postgres-XC</>'s Coordinators and Datanodes
    are essentially <productname>PostgreSQL</> servers, you can follow
the steps described below to upgrade each of them. Please note
that you should do this manually.
@@ -2498,7 +2498,7 @@ pg_dumpall -p 5432 | psql -d postgres -p 5433
While the server is running, it is not possible for a malicious user
<!## end>
<!## XC>
- While coordinators and datanodes are running, it is not possible for a malicious user
+ While Coordinators and Datanodes are running, it is not possible for a malicious user
<!## end>
to take the place of the normal database server. However, when the
server is down, it is possible for a local user to spoof the normal
diff --git a/doc-xc/src/sgml/start.sgmlin b/doc-xc/src/sgml/start.sgmlin
index e3b9a9adb3..3a61a0cf30 100644
--- a/doc-xc/src/sgml/start.sgmlin
+++ b/doc-xc/src/sgml/start.sgmlin
@@ -124,46 +124,46 @@
     Datanode. GTM is responsible for providing the ACID properties of
     transactions. Datanodes store table data and handle SQL statements
     locally. The Coordinator handles each SQL statement from
- applications, determines which datanode to go, and decomposes it
- into local SQL statements for each datanode.
+    applications, determines which Datanodes it should go to, and decomposes it
+    into local SQL statements for each Datanode.
</para>
<para>
You should run GTM in a separate server because GTM has to take
- care of transaction requirements from all the coordinators and
- datanodes. To group multiple requirements and responses from
- coordinator and datanode running on the same server, you can
+    care of transaction requests from all the Coordinators and
+    Datanodes. To group multiple requests and responses from the
+    Coordinator and Datanode running on the same server, you can
     configure GTM-Proxy. GTM-Proxy reduces the number of interactions
     with GTM and the amount of data sent to it. GTM-Proxy also helps to
     handle GTM failure.
</para>
<para>
- It is a good convention to run both coordinator and datanode in a
+    It is a good convention to run both a Coordinator and a Datanode on the
same server because we don't have to worry about workload balance
between the two. You can have any number of servers where these
- two components are running. Because both coordinator and datanode
+ two components are running. Because both Coordinator and Datanode
     are essentially PostgreSQL databases, you should configure them to
     avoid resource conflicts. It is very important to assign them
     different working directories and port numbers.
</para>
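    For example (illustrative paths and ports), the two servers on one
    machine simply must not collide:
<screen>
$ <userinput>postgres -C -D /usr/local/pgsql/coord -p 5432 -i</userinput>
$ <userinput>postgres -X -D /usr/local/pgsql/datanode -p 15432 -i</userinput>
</screen>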
<para>
- Postgres-XC allow multiple coordinators which accept statments
+    Postgres-XC allows multiple Coordinators that accept statements
from applications independently but in an integrated way. Any
- writes from any coordinator is available from any other
- coordinators. They acts as if they are single database.
- Coordinator's role is to accept statments, find what datanodes are
- involved, Ode-compose incoming statements for each datanode if
- needed, proxy statements to target datanode, collect the results
+    writes from any Coordinator are visible from any other
+    Coordinator. They act as if they were a single database.
+    The Coordinator's role is to accept statements, find which Datanodes are
+    involved, decompose incoming statements for each Datanode if
+    needed, proxy statements to the target Datanodes, collect the results,
     and write them back to applications.
</para>
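    As a small illustration, assuming two Coordinators listening on ports
    5432 and 5433 of the same host and an already-created table
    <literal>t</literal>, a row written through one Coordinator is
    immediately visible through the other:
<screen>
$ <userinput>psql -p 5432 -c "INSERT INTO t VALUES (1)"</userinput>
$ <userinput>psql -p 5433 -c "SELECT * FROM t"</userinput>
</screen>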
<para>
Coordinator does not store any user data. It stores only catalog
data to determine how to decompose the statement, where the target
- datanodes are, among others. Therefore, you don't have to worry
- about coordinator failure much. When the coordinator fails, you
+    Datanodes are, and so on. Therefore, you don't have to worry
+    much about Coordinator failure. When a Coordinator fails, you
     can simply switch to another one.
</para>
@@ -182,7 +182,7 @@
(programs):
<!## end>
<!## XC>
- As described above, coordinator and datanode
+    As described above, the Coordinators and Datanodes
of <productname>Postgres-XC</> are
essentially <productname>PostgreSQL</> database servers. In database
jargon, <productname>PostgreSQL</productname> uses a client/server
@@ -287,7 +287,7 @@
<!## XC>
The first test to see whether you can access the database server
is to try to create a database. Running
- <productname>Postgres-XC</productname> servers (coordinators and datanodes) can manage many
+ <productname>Postgres-XC</productname> servers (Coordinators and Datanodes) can manage many
databases. Typically, a separate database is used for each
project or for each user.
<!## end>
@@ -414,7 +414,7 @@ createdb: database creation failed: ERROR: permission denied to create database
createdb: database creation failed: ERROR: source database "template1" is being accessed by other users
DETAIL: There are 1 other session(s) using the database.
</screen>
- This means that at least one of the coordinator pooler still holds a connection to template1 database to one of the datanodes. To release them, you should do the following as the database superuser.
+   This means that at least one Coordinator's connection pooler still holds a connection to the template1 database on one of the Datanodes. To release these connections, do the following as the database superuser.
<screen>
<prompt>$</prompt> <userinput>psql</userinput>
...
diff --git a/doc-xc/src/sgml/wal.sgmlin b/doc-xc/src/sgml/wal.sgmlin
index 274705ede2..06e1f0d1e6 100644
--- a/doc-xc/src/sgml/wal.sgmlin
+++ b/doc-xc/src/sgml/wal.sgmlin
@@ -760,23 +760,23 @@
</para>
<para>
- User must connect to one of the coordinators and issue the
+    The user must connect to one of the Coordinators and issue the
<command>BARRIER</command> command, optionally followed by an
     identifier. If the user does not specify an identifier, a unique
identifier will be generated and returned to the caller. Upon
- receiving the <command>BARRIER</command> command, the coordinator
+ receiving the <command>BARRIER</command> command, the Coordinator
temporarily pauses any new two-phase commits. It also communicates with
- other coordinators to ensure that there are no in-progress two-phase
+ other Coordinators to ensure that there are no in-progress two-phase
commits in the cluster. At that point, a barrier <acronym>WAL</acronym>
record along with the user-given or system-generated BARRIER identifier
     is written to the <acronym>WAL</acronym> stream of all Datanodes and
- the coordinators.
+ the Coordinators.
</para>
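    A minimal sketch of taking a barrier from one Coordinator; the host,
    port, and identifier are illustrative, and the statement is assumed to
    take the <command>CREATE BARRIER</command> form (check the reference
    documentation of your release for the exact syntax):
<screen>
$ <userinput>psql -h coord1 -p 5432 postgres</userinput>
postgres=# <userinput>CREATE BARRIER 'before_upgrade';</userinput>
</screen>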
<para>
     A user can create as many barriers as desired. At the time of
     point-in-time recovery, the same barrier id must be specified in the
- <filename>recovery.conf</filename> files of all the coordinators and
+ <filename>recovery.conf</filename> files of all the Coordinators and
     Datanodes. When every node in the cluster recovers to the same barrier
     id, a cluster-wide consistent state is reached. It is important that
     the recovery be started from a backup taken before the barrier