From: Koichi S. <koi...@us...> - 2010-07-22 02:33:49
Project "website". The branch, master has been updated via 8c2cfec1e2cf6263a2a1bbea33d09643fb6a942a (commit) via 35cd128872b78192c961c85a3982cee3ec2c2ca0 (commit) via d481c92b1778158839d16077325eb3e6052b83d9 (commit) from b318cefa94aafbf4724f619d2a29941cb5e3d615 (commit) - Log ----------------------------------------------------------------- commit 8c2cfec1e2cf6263a2a1bbea33d09643fb6a942a Author: Koichi Suzuki <koichi@willey.(none)> Date: Thu Jul 22 11:35:09 2010 +0900 Modified: roadmap.html Corrected upcoming version numbers and schedule. diff --git a/roadmap.html b/roadmap.html index 65167e8..4f8802e 100755 --- a/roadmap.html +++ b/roadmap.html @@ -62,9 +62,9 @@ Upcoming Releases and Features Current plan of future releases and features are as follows: </p> -<!-- ==== For version 1.0 ==== --> +<!-- ==== For version 0.9.3 ==== --> <h4> -Version 1.0 (Late in September, 2010) +Version 0.9.3 (Late in September, 2010) </h4> <p class="inner"> @@ -82,9 +82,9 @@ Forward Cursor (w/o <code>ORDER BY</code>)<br> subqueries<br> </p> -<!-- ==== Beyond Version 1.0 ==== --> +<!-- ==== For Version 1.0 ==== --> <h4> -Beyond Version 1.0 +Version 1.0 (Late in December, 2010) </h4> <p class="inner"> @@ -106,6 +106,17 @@ Tuple relocation (distrubute key update)<br> Performance improvement <br> Regression tests </p> + +<!-- === Beyond Version 1.0 === ---> +<h4> +Beyond Version 1.0 +</h4> + +<p class="inner"> +HA Capability<br> +GTM-Standby<br> +</p> + </body> </html> commit 35cd128872b78192c961c85a3982cee3ec2c2ca0 Merge: d481c92 b318cef Author: Koichi Suzuki <koichi@willey.(none)> Date: Thu Jul 22 11:25:58 2010 +0900 Merge branch 'master' of ssh://postgres-xc.git.sourceforge.net/gitroot/postgres-xc/pgxcweb commit d481c92b1778158839d16077325eb3e6052b83d9 Author: Koichi Suzuki <koichi@willey.localdomain> Date: Wed Jun 16 11:00:40 2010 +0900 New file: materials/Postgres-XC_20100521.pdf Added a likn to the above file, PGCon2010 presentation materials. diff --git a/download.html b/download.html index cfe22b6..d2b6a44 100755 --- a/download.html +++ b/download.html @@ -159,6 +159,13 @@ Description of the outline of Postgres-XC internals.   </h4> +<!-- PGCon2010 Presentation Materials --> +<h4> +<a href="materials/Postgres-XC_20100521.pdf"> +Presentation material for PGCon2010. +</a> + +<!-- previous versions --> <h4> <a href="prev_vers/version0_9.html" target="main">Previous Versions</a>   diff --git a/download.html b/download.html~ similarity index 100% copy from download.html copy to download.html~ diff --git a/materials/Postgres-XC_20100521.pdf b/materials/Postgres-XC_20100521.pdf new file mode 100644 index 0000000..6915b3f Binary files /dev/null and b/materials/Postgres-XC_20100521.pdf differ ----------------------------------------------------------------------- Summary of changes: download.html | 7 +++ download.html => download.html~ | 78 ++++++++++++++++++------------------ materials/Postgres-XC_20100521.pdf | Bin 0 -> 1087782 bytes roadmap.html | 19 +++++++-- 4 files changed, 61 insertions(+), 43 deletions(-) copy download.html => download.html~ (66%) create mode 100644 materials/Postgres-XC_20100521.pdf hooks/post-receive -- website |
From: Michael P <mic...@us...> - 2010-07-22 02:10:58
Project "website". The branch, master has been updated via b318cefa94aafbf4724f619d2a29941cb5e3d615 (commit) via 84caa6fc61b61ea5db46269f4889615b1fa09f76 (commit) from 95fa0dca242148522ad78a1b0df1e1ebae9397fc (commit) - Log ----------------------------------------------------------------- commit b318cefa94aafbf4724f619d2a29941cb5e3d615 Author: Michael P <mic...@us...> Date: Thu Jul 22 11:09:25 2010 +0900 Events and current release number corrected. Some typing issues with members corrected diff --git a/events.html b/events.html index 52fee74..1cb57b5 100755 --- a/events.html +++ b/events.html @@ -12,24 +12,28 @@ ==== UPCOMING EVENTS ==== --> <h2 class="plain">Events</h2> -<!-- CHAR(10) --> +<p class="plain"> +Upcoming events to be decided soon! +</p> + +<!-- Event title --> +<!-- <p class="plain"> Jul.1 to 3, 2010, -<a href="http://www.char10.org/" target="_blank"> -CHAR(10) +<a href="http link to this event" target="_blank"> +Event title </a> -conference dedicated for PostgreSQL cluster. +Description of this event. </p> +--> <!-- UPDATES --> -<h2 class="plain"> -Updates -</h2> -<!-- Postgres-XC 0.9.1 download --> +<h2 class="plain">Updates</h2> +<!-- Postgres-XC 0.9.2 download --> <p class="plain"> -Postgres-XC 0.9.1 is now available!! Download -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/pgxc_v0_9_1.tar.gz/download" target="_blank"> +Postgres-XC 0.9.2 is now available!! Download +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/pgxc_v0_9_2.tar.gz/download" target="_blank"> here. </a> </p> diff --git a/members.html b/members.html index 17ae2a0..c230c54 100755 --- a/members.html +++ b/members.html @@ -26,31 +26,31 @@ Postgres-XC development team <h4>Koichi Suzuki</h4> <p class="inner"> -Project leader and architect. -His background includes object relational database engine (UniSQL) and +Project leader and architect.<br> +His background includes object relational database engine (UniSQL) and<br> PostgreSQL development. </p> <h4>Mason Sharp</h4> <p class="inner"> -Architect and development leader. -Coordinator developer. -He is also the main architect of GridSQL database cluster. +Architect and development leader.<br> +Coordinator developer.<br> +He is also the main architect of GridSQL database cluster.<br> </p> <h4>Pavan Deolasee</h4> <p class="inner"> -Global Transaction Manager developer. -He is well known as HOT developer in PostgreSQL. -He is also helping in source code review and PostgreSQL internals. +Global Transaction Manager developer.<br> +He is well known as HOT developer in PostgreSQL.<br> +He is also helping in source code review and PostgreSQL internals.<br> </p> <h4>Andrei Martsinchyk</h4> <p class="inner"> -Data Node and connection pooling developer. +Data Node and connection pooling developer.<br> He is also GridSQL developer and is now developping aggregate functions and other cross-node operation. </p> @@ -58,8 +58,9 @@ functions and other cross-node operation. <h4>Michael Paquier</h4> <p class="inner"> -Coordinator feature developer. Currently working on user-defined function and DDLs. -He helped in modifying DBT-1 benchmark for Postgres-XC. +Coordinator feature developer.<br> +Currently working on user-defined function, Sequence handling and Global values.<br> +He helped in modifying DBT-1 benchmark for Postgres-XC.<br> He also contributed to enhance pgbench and 2PC. 
</p> @@ -72,7 +73,7 @@ Test, performance evaluation and analysis, related documents and utilities. <h4>Devrim Gunduz</h4> <p class="inner"> -Binary buiding for releases. He is also developping binary packages of PostgreSQL. +Binary buiding for releases.<br> He is also developping binary packages of PostgreSQL. </p> </body> commit 84caa6fc61b61ea5db46269f4889615b1fa09f76 Author: Michael P <mic...@us...> Date: Thu Jul 22 10:55:52 2010 +0900 Website update with Postggres-XC 0.9.2 release diff --git a/download.html b/download.html index cfe22b6..584eab1 100755 --- a/download.html +++ b/download.html @@ -25,8 +25,8 @@ List of Release Materials </h3> <p> -The current release includes the following materials. -Please note that documentation is not included in the source material. +The current release includes the following materials.<br> +Please note that documentation is not included in the source material.<br> Please download documentation from <a href="https://sourceforge.net/projects/postgres-xc/files/" target="_blank"> the project download page. @@ -36,121 +36,121 @@ the project download page. Please also note tarball files do not include Postgres-XC documents. </p> -<!-- Documents of version 0.9.1 --> +<!-- Documents of version 0.9.2 --> <h4> -Version 0.9.1 +Version 0.9.2 </h4> <p> <ul> -<!-- tarball --> +<!-- tarball of 0.9.2, main download--> <li> -<code>pgxc_v0.9.1.tar.gz</code>:  -Latest version of Postgres-XC available. +<code>pgxc_v0.9.2.tar.gz</code>: <br> +Latest version of Postgres-XC available.<br> Please note that Postgres-XC documentation is not included in this file. ⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/pgxc_v0_9_1.tar.gz/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/pgxc_v0_9_2.tar.gz/download" target="_blank"> (download) </a> </li> <!-- tarball (diff) --> <li> -<code>PGXC_v0_9_1-PG_REL8_4_3.patch.gz</code>:  +<code>PGXC_v0_9_2-PG_REL8_4_3.patch.gz</code>: <br> The same material as above, but this file includes only the patch to apply -to the PostgreSQL 8.4.3 release source code. +to the PostgreSQL 8.4.3 release source code.<br> It is useful if you would like to see just a difference between PostgreSQL -and Postgres-XC. +and Postgres-XC.<br> No Postgres-XC documentation is included in this file either. ⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PGXC_v0_9_1-PG_REL8_4_3.patch.gz/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/PGXC_v0_9_2-PG_REL8_4_3.patch.gz/download" target="_blank"> (download) </a> </li> <!-- License --> <li> -<code>COPYING</code>:  +<code>COPYING</code>: <br> License description. Postgres-XC is distributed under LGPL version 2.1 ⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/COPYING/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/COPYING/download" target="_blank"> (download) </a> </li> <!-- Files --> <li> -<code>FILES</code>:  -Description of files included in Postgres-XC 0.9.1 release. +<code>FILES</code>: <br> +Description of files included in Postgres-XC 0.9.2 release. 
⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/FILES/download" target="_blank"> -(download) -</a> -</li> - -<!-- Readme --> -<li> -<code>README</code>:  -Overview of the release. -⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/README/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/FILES/download" target="_blank"> (download) </a> </li> <!-- Reference Manual --> <li> -<code>PG-XC_ReferenceManual_v0_9_1.pdf</code>:  +<code>PG-XC_ReferenceManual_v0_9_2.pdf</code>: <br> Reference of Postgres-XC extension. ⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PG-XC_ReferenceManual_v0_9_1.pdf/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/PG-XC_ReferenceManual_v0_9_2.pdf/download" target="_blank"> (download) </a> </li> <!-- pgbench Tutorial Manual --> <li> -<code>PG-XC_pgbench_Tutorial_v0_9_1.pdf</code>:  +<code>PG-XC_pgbench_Tutorial_v0_9_2.pdf</code>: <br> Step by step description how to build and configure pgbench to run with Postgres-XC. ⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PG-XC_pgbench_Tutorial_v0_9_1.pdf/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/PG-XC_pgbench_Tutorial_v0_9_2.pdf/download" target="_blank"> (download) </a> </li> <!-- DBT-1 Tutorial Manual --> <li> -<code>PG-XC_DBT1_Tutorial_v0_9_1.pdf</code>:  +<code>PG-XC_DBT1_Tutorial_v0_9_2.pdf</code>: <br> Step by step description how to build and configure DBT-1 to run with Postgres-XC. ⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PG-XC_DBT1_Tutorial_v0_9_1.pdf/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/PG-XC_DBT1_Tutorial_v0_9_2.pdf/download" target="_blank"> (download) </a> </li> <!-- Install Manual --> <li> -<code>PG-XC_InstallManual_v0_9_1.pdf</code>:  +<code>PG-XC_InstallManual_v0_9_2.pdf</code>: <br> Step by step description how to build, install and configure Postgres-XC. ⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PG-XC_InstallManual_v0_9_1.pdf/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/PG-XC_InstallManual_v0_9_2.pdf/download" target="_blank"> +(download) +</a> +</li> + +<!-- SQL limitation manual --> +<li> +<code>PG-XC_SQL_Limitations_v0_9_2.pdf</code>: <br> +SQL restrictions available for Postgres-XC 0.9.2. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/PG-XC_SQL_Limitations_v0_9_2.pdf/download" target="_blank"> (download) </a> </li> <!-- Architecture Document --> <li> -<code>PG-XC_Architecture_v0_9.pdf</code>:  +<code>PG-XC_Architecture_v0_9.pdf</code>: <br> Description of the outline of Postgres-XC internals. 
⇒ -<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/PG-XC_Architecture.pdf/download" target="_blank"> +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/PG-XC_Architecture.pdf/download" target="_blank"> (download) </a> </li> -</ul> +</ul> </p> <!--div align="left" style="font-size:95%;"--> @@ -166,4 +166,4 @@ Description of the outline of Postgres-XC internals. </div> </body> -</html> \ No newline at end of file +</html> diff --git a/prev_vers/version0_9.html b/prev_vers/version0_9.html index 83e008d..4ed4cf1 100644 --- a/prev_vers/version0_9.html +++ b/prev_vers/version0_9.html @@ -32,8 +32,9 @@ Version 0.9.0 <ul> <!-- tarball --> <li> -<code>pgxc_v0.9.tar.gz</code>:  -This is a collection of source materials used to build the binaries. +<code>pgxc_v0.9.tar.gz</code>: <br> +Previous version of Postgres-XC released in April 2010.<br> +This is a collection of source materials used to build the binaries.<br> Please note that Postgres-XC documentation is not included in this file. ⇒ <a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/pgxc_v0.9.tar.gz/download" target="_blank"> @@ -41,11 +42,12 @@ Please note that Postgres-XC documentation is not included in this file. </a> </li> -<code>PGXC-PG_REL8_4_3.patch.gz</code>:  +<li> +<code>PGXC-PG_REL8_4_3.patch.gz</code>: <br> The same material as above, but this file includes only the patch to apply -to the PostgreSQL 8.4.3 release source code. +to the PostgreSQL 8.4.3 release source code.<br> It is useful if you would like to see just a difference between PostgreSQL -and Postgres-XC. +and Postgres-XC.<br> No Postgres-XC documentation is included in this file either. ⇒ <a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/PGXC-PG_REL8_4_3.patch.gz/download" target="_blank"> @@ -55,7 +57,7 @@ No Postgres-XC documentation is included in this file either. <!-- License --> <li> -<code>COPYING</code>:  +<code>COPYING</code>: <br> License description. Postgres-XC is distributed under LGPL version 2.1 ⇒ <a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/COPYING/download" target="_blank"> @@ -65,7 +67,7 @@ License description. Postgres-XC is distributed under LGPL version 2.1 <!-- Readme --> <li> -<code>README</code>:  +<code>README</code>: <br> Overview of the release. ⇒ <a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/README/download" target="_blank"> @@ -75,7 +77,7 @@ Overview of the release. <!-- Reference Manual --> <li> -<code>ReferenceManual.pdf</code>:  +<code>ReferenceManual.pdf</code>: <br> Reference of Postgres-XC extension. ⇒ <a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/ReferenceManual.pdf/download" target="_blank"> @@ -85,7 +87,7 @@ Reference of Postgres-XC extension. <!-- Tutorial Manual --> <li> -<code>PG-XC_TutorialManual.pdf</code>:  +<code>PG-XC_TutorialManual.pdf</code>: <br> Step by step description how to build and configure DBT-1 to run with Postgres-XC. ⇒ @@ -96,7 +98,7 @@ Postgres-XC. <!-- Install Manual --> <li> -<code>PG-XC_InstallManual_Revision1.pdf</code>:  +<code>PG-XC_InstallManual_Revision1.pdf</code>: <br> Step by step description how to build, install and configure Postgres-XC. 
⇒ <a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/PG-XC_InstallManual_Revision1.pdf/download" target="_blank"> @@ -106,14 +108,133 @@ Step by step description how to build, install and configure Postgres-XC. <!-- Architecture Document --> <li> -<code>PG-XC_Architecture.pdf</code>:  +<code>PG-XC_Architecture.pdf</code>: <br> +Description of the outline of Postgres-XC internals. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/PG-XC_Architecture.pdf/download" target="_blank"> +(download) +</a> +</li> +</ul> +</p> + +<!-- Documents of version 0.9.1 --> +<h4> +Version 0.9.1 +</h4> + +<p> +<ul> +<!-- tarball --> +<li> +<code>pgxc_v0.9.1.tar.gz</code>: <br> +Previous version of Postgres-XC released in May 2010.<br> +Please note that Postgres-XC documentation is not included in this file. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/pgxc_v0_9_1.tar.gz/download" target="_blank"> +(download) +</a> +</li> + +<!-- tarball (diff) --> +<li> +<code>PGXC_v0_9_1-PG_REL8_4_3.patch.gz</code>: <br> +The same material as above, but this file includes only the patch to apply +to the PostgreSQL 8.4.3 release source code.<br> +It is useful if you would like to see just a difference between PostgreSQL +and Postgres-XC.<br> +No Postgres-XC documentation is included in this file either. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PGXC_v0_9_1-PG_REL8_4_3.patch.gz/download" target="_blank"> +(download) +</a> +</li> + +<!-- License --> +<li> +<code>COPYING</code>: <br> +License description. Postgres-XC is distributed under LGPL version 2.1 +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/COPYING/download" target="_blank"> +(download) +</a> +</li> + +<!-- Files --> +<li> +<code>FILES</code>: <br> +Description of files included in Postgres-XC 0.9.1 release. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/FILES/download" target="_blank"> +(download) +</a> +</li> + +<!-- Readme --> +<li> +<code>README</code>: <br> +Overview of the release. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/README/download" target="_blank"> +(download) +</a> +</li> + +<!-- Reference Manual --> +<li> +<code>PG-XC_ReferenceManual_v0_9_1.pdf</code>: <br> +Reference of Postgres-XC extension. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PG-XC_ReferenceManual_v0_9_1.pdf/download" target="_blank"> +(download) +</a> +</li> + +<!-- pgbench Tutorial Manual --> +<li> +<code>PG-XC_pgbench_Tutorial_v0_9_1.pdf</code>: <br> +Step by step description how to build and configure pgbench to run with +Postgres-XC. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PG-XC_pgbench_Tutorial_v0_9_1.pdf/download" target="_blank"> +(download) +</a> +</li> + + +<!-- DBT-1 Tutorial Manual --> +<li> +<code>PG-XC_DBT1_Tutorial_v0_9_1.pdf</code>: <br> +Step by step description how to build and configure DBT-1 to run with +Postgres-XC. 
+⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PG-XC_DBT1_Tutorial_v0_9_1.pdf/download" target="_blank"> +(download) +</a> +</li> + +<!-- Install Manual --> +<li> +<code>PG-XC_InstallManual_v0_9_1.pdf</code>: <br> +Step by step description how to build, install and configure Postgres-XC. +⇒ +<a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.1/PG-XC_InstallManual_v0_9_1.pdf/download" target="_blank"> +(download) +</a> +</li> + +<!-- Architecture Document --> +<li> +<code>PG-XC_Architecture_v0_9.pdf</code>: <br> Description of the outline of Postgres-XC internals. ⇒ <a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9/PG-XC_Architecture.pdf/download" target="_blank"> (download) </a> </li> + </ul> </p> + </body> -</html> \ No newline at end of file +</html> diff --git a/roadmap.html b/roadmap.html index 8299fbd..65167e8 100755 --- a/roadmap.html +++ b/roadmap.html @@ -33,11 +33,22 @@ similar to PostgreSQL, except for two phase commit (2PC) and savepoints. (XC uses 2PC for internal use). </p> <p> -On the other hand, Postgres-XC needs to enhance support for general statements. -As of Version 0.9.1, Postgres-XC supports statements which can be executed -on a single data node, or on multiple nodes but as a single step. -It does not support yet complex statements such as -subquery, view, ORDER BY, DISTINCT or +On the other hand, Postgres-XC needs to enhance support for general statements.<br> +As of Version 0.9.2, Postgres-XC supports statements which can be executed +on a single data node, or on multiple nodes but as a single step.<br> +This new version adds support for: +- views<br> +- extra DDLs<br> +- ORDER BY/DISTINCT<br> +- pg_dump, pg_restore<br> +- sequence full support with GTM<br> +- basic stored function support.<br> +- Cold synchronization of Coordinator's Catalog files<br> +However there are some limitations please refer to <a href="https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/PG-XC_SQL_Limitations_v0_9_2.pdf/download" target="_blank"> +SQL Limitations </a> document for further details. +</p> +<p> +There is no support yet for <code>SELECT</code> in <code>FROM</code> clause. Support for <code>CURSOR</code> is a future issue too. 
</p> @@ -53,41 +64,27 @@ Current plan of future releases and features are as follows: <!-- ==== For version 1.0 ==== --> <h4> -Version 1.0 (Late in July, 2010) +Version 1.0 (Late in September, 2010) </h4> -<p class="inner"> -<code>ORDER BY</code><br> -<code>DISTINCT</code><br> -Stored functions<br> -subqueries<br> -Views<br> -Rules<br> -DDLs<br> -Regression tests<br> -<p> - -<!-- ==== For version 1.1 ==== --> -<h4> -Version 1.1 (Late in September, 2010) -</h4> - -<p class="inner"> +<p class="inner"> Cluster-wide installer<br> Cluster-wide operation utilities<br> Regression tests<br> Logical backup/restore (pg_dump, pg_restore)<br> Basic cross-node operation<br> TEMP Table<br> +Cursor support<br> Extended Query Protocol (for JDBC)<br> Global timestamp<br> Driver support (ECPG, JDBC, PHP, etc.)<br> Forward Cursor (w/o <code>ORDER BY</code>)<br> +subqueries<br> </p> -<!-- ==== Beyond Version 1.1 ==== --> +<!-- ==== Beyond Version 1.0 ==== --> <h4> -Beyond Version 1.1 +Beyond Version 1.0 </h4> <p class="inner"> ----------------------------------------------------------------------- Summary of changes: download.html | 78 ++++++++++++------------ events.html | 24 ++++--- members.html | 25 ++++---- prev_vers/version0_9.html | 145 +++++++++++++++++++++++++++++++++++++++++---- roadmap.html | 47 +++++++-------- 5 files changed, 221 insertions(+), 98 deletions(-) hooks/post-receive -- website |
From: Michael P <mic...@us...> - 2010-07-22 00:17:49
Project "Postgres-XC". The tag, v0.9.2 has been created at d7ca431066efe320107581186ab853b28fa5f7a7 (commit) - Log ----------------------------------------------------------------- commit d7ca431066efe320107581186ab853b28fa5f7a7 Author: Michael P <mic...@us...> Date: Thu Jul 22 08:59:07 2010 +0900 Support for cold synchronization of catalog table of coordinator. This cold solution is temporary. Hot synchronization will be introduced in one of Postgres-XC's next release. Cold synchronization method means that once a DDL is launched, all the coordinators are stopped. And then the catalog copy begins from a coordinator chosen by the user. It is also possible to synchronize catalogs without launching a DDL file on one coordinator. Options possible to use for this script -D locate the data folder, necessary to find pgxc.conf, containing the characteristics of all the coordinators -l to locate the folder where applications are -f for a DDL file -d for a Database name -n coordinator number where to launch DDl, number based on the one written in pgxc.conf -t base name of folder where to save configuration files, by default /tmp/pgxc_config, completed by $$ Synchronization uses a new configuration file called pgxc.conf gathering all the coordinator data, such as port number, data folder and host for each one. Please refer to Postgres-XC 0.9.2 reference manual for further details. ----------------------------------------------------------------------- hooks/post-receive -- Postgres-XC |
From: Michael P <mic...@us...> - 2010-07-22 00:17:21
Project "Postgres-XC". The branch, REL0_9_2_STABLE has been created at d7ca431066efe320107581186ab853b28fa5f7a7 (commit) - Log ----------------------------------------------------------------- ----------------------------------------------------------------------- hooks/post-receive -- Postgres-XC |
From: Michael P <mic...@us...> - 2010-07-22 00:07:57
Project "Postgres-XC". The branch, master has been updated via d7ca431066efe320107581186ab853b28fa5f7a7 (commit) from 0fdcc0b44b395df2e546ba90feaa0d656ad58f4d (commit) - Log ----------------------------------------------------------------- commit d7ca431066efe320107581186ab853b28fa5f7a7 Author: Michael P <mic...@us...> Date: Thu Jul 22 08:59:07 2010 +0900 Support for cold synchronization of catalog table of coordinator. This cold solution is temporary. Hot synchronization will be introduced in one of Postgres-XC's next release. Cold synchronization method means that once a DDL is launched, all the coordinators are stopped. And then the catalog copy begins from a coordinator chosen by the user. It is also possible to synchronize catalogs without launching a DDL file on one coordinator. Options possible to use for this script -D locate the data folder, necessary to find pgxc.conf, containing the characteristics of all the coordinators -l to locate the folder where applications are -f for a DDL file -d for a Database name -n coordinator number where to launch DDl, number based on the one written in pgxc.conf -t base name of folder where to save configuration files, by default /tmp/pgxc_config, completed by $$ Synchronization uses a new configuration file called pgxc.conf gathering all the coordinator data, such as port number, data folder and host for each one. Please refer to Postgres-XC 0.9.2 reference manual for further details. diff --git a/src/backend/Makefile b/src/backend/Makefile index df707e7..984c951 100644 --- a/src/backend/Makefile +++ b/src/backend/Makefile @@ -195,6 +195,7 @@ endif $(INSTALL_DATA) $(srcdir)/libpq/pg_hba.conf.sample '$(DESTDIR)$(datadir)/pg_hba.conf.sample' $(INSTALL_DATA) $(srcdir)/libpq/pg_ident.conf.sample '$(DESTDIR)$(datadir)/pg_ident.conf.sample' $(INSTALL_DATA) $(srcdir)/utils/misc/postgresql.conf.sample '$(DESTDIR)$(datadir)/postgresql.conf.sample' + $(INSTALL_DATA) $(srcdir)/utils/misc/pgxc.conf.sample '$(DESTDIR)$(datadir)/pgxc.conf.sample' $(INSTALL_DATA) $(srcdir)/access/transam/recovery.conf.sample '$(DESTDIR)$(datadir)/recovery.conf.sample' install-bin: postgres $(POSTGRES_IMP) installdirs @@ -248,8 +249,9 @@ endif $(MAKE) -C catalog uninstall-data $(MAKE) -C tsearch uninstall-data rm -f '$(DESTDIR)$(datadir)/pg_hba.conf.sample' \ + '$(DESTDIR)$(datadir)/pgxc.conf.sample' \ '$(DESTDIR)$(datadir)/pg_ident.conf.sample' \ - '$(DESTDIR)$(datadir)/postgresql.conf.sample' \ + '$(DESTDIR)$(datadir)/postgresql.conf.sample' \ '$(DESTDIR)$(datadir)/recovery.conf.sample' diff --git a/src/backend/utils/misc/pgxc.conf.sample b/src/backend/utils/misc/pgxc.conf.sample new file mode 100644 index 0000000..9dcc0c7 --- /dev/null +++ b/src/backend/utils/misc/pgxc.conf.sample @@ -0,0 +1,20 @@ +# ----------------------------- +# Postgres-XC configuration file +# ----------------------------- +# +# This file consists of lines of the form: +# +# name = value +# +# It describes the list of coordinators used in the cluster + +#------------------------------------------------------------------------------ +# POSTGRES-XC COORDINATORS +#------------------------------------------------------------------------------ + +#coordinator_hosts = 'localhost' # Host names or addresses of data nodes + # (change requires restart) +#coordinator_ports = '5451,5452' # Port numbers of coordinators + # (change requires restart) +#coordinator_folders = '/pgxc/data' # List of Data folders of coordinators + # (change require restart) \ No newline at end of file diff --git a/src/bin/initdb/initdb.c 
b/src/bin/initdb/initdb.c index 2d0b244..b4dd50b 100644 --- a/src/bin/initdb/initdb.c +++ b/src/bin/initdb/initdb.c @@ -100,6 +100,9 @@ static char *shdesc_file; static char *hba_file; static char *ident_file; static char *conf_file; +#ifdef PGXC +static char *pgxc_conf_file; +#endif static char *conversion_file; static char *dictionary_file; static char *info_schema_file; @@ -1296,6 +1299,19 @@ setup_config(void) free(conflines); +#ifdef PGXC + /* pgxc.conf */ + + conflines = readfile(pgxc_conf_file); + + snprintf(path, sizeof(path), "%s/pgxc.conf", pg_data); + + writefile(path, conflines); + chmod(path, 0600); + + free(conflines); +#endif + check_ok(); } @@ -2810,6 +2826,9 @@ main(int argc, char *argv[]) set_input(&hba_file, "pg_hba.conf.sample"); set_input(&ident_file, "pg_ident.conf.sample"); set_input(&conf_file, "postgresql.conf.sample"); +#ifdef PGXC + set_input(&pgxc_conf_file, "pgxc.conf.sample"); +#endif set_input(&conversion_file, "conversion_create.sql"); set_input(&dictionary_file, "snowball_create.sql"); set_input(&info_schema_file, "information_schema.sql"); @@ -2826,12 +2845,18 @@ main(int argc, char *argv[]) "POSTGRES_SUPERUSERNAME=%s\nPOSTGRES_BKI=%s\n" "POSTGRES_DESCR=%s\nPOSTGRES_SHDESCR=%s\n" "POSTGRESQL_CONF_SAMPLE=%s\n" +#ifdef PGXC + "PGXC_CONF_SAMPLE=%s\n" +#endif "PG_HBA_SAMPLE=%s\nPG_IDENT_SAMPLE=%s\n", PG_VERSION, pg_data, share_path, bin_path, username, bki_file, desc_file, shdesc_file, conf_file, +#ifdef PGXC + pgxc_conf_file, +#endif hba_file, ident_file); if (show_setting) exit(0); @@ -2842,6 +2867,9 @@ main(int argc, char *argv[]) check_input(shdesc_file); check_input(hba_file); check_input(ident_file); +#ifdef PGXC + check_input(pgxc_conf_file); +#endif check_input(conf_file); check_input(conversion_file); check_input(dictionary_file); diff --git a/src/bin/scripts/Makefile b/src/bin/scripts/Makefile index c28a066..48f9c20 100644 --- a/src/bin/scripts/Makefile +++ b/src/bin/scripts/Makefile @@ -52,6 +52,8 @@ install: all installdirs $(INSTALL_PROGRAM) clusterdb$(X) '$(DESTDIR)$(bindir)'/clusterdb$(X) $(INSTALL_PROGRAM) vacuumdb$(X) '$(DESTDIR)$(bindir)'/vacuumdb$(X) $(INSTALL_PROGRAM) reindexdb$(X) '$(DESTDIR)$(bindir)'/reindexdb$(X) + $(INSTALL_PROGRAM) pgxc_ddl$(X) '$(DESTDIR)$(bindir)'/pgxc_ddl$(X) + chmod 555 '$(DESTDIR)$(bindir)'/pgxc_ddl$(X) installdirs: $(mkinstalldirs) '$(DESTDIR)$(bindir)' diff --git a/src/bin/scripts/pgxc_ddl b/src/bin/scripts/pgxc_ddl new file mode 100644 index 0000000..efc2f69 --- /dev/null +++ b/src/bin/scripts/pgxc_ddl @@ -0,0 +1,443 @@ +#!/bin/bash +# Copyright (c) 2010 Nippon Telegraph and Telephone Corporation + +#Scripts to launch DDL in PGXC cluster using a cold_backup method +#Be sure to have set a correct ssl environment in all the servers of the cluster + +#This script uses pgxc.conf as a base to find the settings of all the coordinators + +#Options possible to use for this script +# -D to locate the data folder, necessary to find pgxc.conf, containing the characteristics of all the coordinators +# -l to locate the folder where applications are +# -f for a DDL file +# -d for a Database name +# -n coordinator number where to launch DDl, number based on the one written in pgxc.conf +# -t base name of folder where to save configuration files, by default /tmp/pgxc_config, completed by $$ + +count=0 + +#Default options +#local folder used to save temporary the configuration files of coordinator's data folder being erased +CONFIG_FOLDER=/tmp/pgxc_config_files.$$ +PGXC_BASE= +#options to launch the coordinator +#don't forget 
to add -i as we are in a cluster :) +COORD_OPTIONS="-C -i" + +#----------------------------------------------------------------------- +# Option Management +#----------------------------------------------------------------------- +while getopts 'f:d:D:l:hn:t:' OPTION +do + count=$((count +2)) + case $OPTION in + d) #for a database name + DB_NAME="$OPTARG" + ;; + + D) #for a data folder, to find pgxc.conf + DATA_FOLDER="$OPTARG" + ;; + + f) #for a DDL file + DDL_FILE_NAME="$OPTARG" + ;; + + l) #To define folder where applications are if necessary + PGXC_BASE="$OPTARG"/ + ;; + + n) #for a coordinator number + COORD_NUM_ORIGIN="$OPTARG" + ;; + + h) printf "Usage: %s: [-d dbname] [-l bin folder] [-D data folder] [-n coord number] [-f ddl file] [-t save folder name in /tmp/]\n" $(basename $0) >&2 + exit 0 + ;; + t) #to set the name of the folder where to save conf files. All is mandatory saved in /tmp + CONFIG_FOLDER=/tmp/"$OPTARG" + ;; + + ?) printf "Usage: %s: [-d dbname] [-l bin folder] [-D data folder] [-n coord number] [-f ddl file] [-t save folder name in /tmp/]\n" $(basename $0) >&2 + exit 0 + ;; + esac +done + +if [ $# -lt "1" ] +then + echo "No arguments defined, you should try help -h" + exit 2 +fi + +#A couple of option checks +if [ "$count" -ne "$#" ] +then + echo "Arguments not correctly set, try -h for help" + exit 2 +fi + +if [ -z $COORD_NUM_ORIGIN ] +then + echo "Coordinator number not defined, mandatory -n argument missing" + exit 2 +fi +if [ -z $DATA_FOLDER ] +then + echo "Data folder not defined, mandatory -D argument missing" + exit 2 +fi + +#Check if Argument of -n is an integer +if [ ! $(echo "$COORD_NUM_ORIGIN" | grep -E "^[0-9]+$") ] + then + echo "Argument -n is not a valid integer" + exit 2 +fi + +#Check if DDL file exists +if [ "$DDL_FILE_NAME" != "" ] +then + if [ ! -e $DDL_FILE_NAME ] + then + echo "DDL file not defined" + exit 2 + fi + if [ -z $DB_NAME ] + then + echo "Dbname not defined, mandatory -d argument missing when using a ddl file" + exit 2 + fi +fi + +#----------------------------------------------------------------------- +# Begin to read the pgxc.conf to get coordinator characteristics +#----------------------------------------------------------------------- +PGXC_CONF=$DATA_FOLDER/pgxc.conf + +if [ ! 
-e $PGXC_CONF ] +then + echo "pgxc.conf not defined in the directory defined by -D" + exit 2 +fi + +#Find parameters +hosts=`cat $PGXC_CONF | grep coordinator_hosts | cut -d "'" -f 2` +ports=`cat $PGXC_CONF | grep coordinator_ports | cut -d "'" -f 2` +folders=`cat $PGXC_CONF | grep coordinator_folders | cut -d "'" -f 2` +if [ "hosts" = "" ] +then + echo "coordinator_hosts not defined in pgxc.conf" + exit 2 +fi +if [ "ports" = "" ] +then + echo "coordinator_ports not defined in pgxc.conf" + exit 2 +fi +if [ "folders" = "" ] +then + echo "coordinator_folders not defined in pgxc.conf" + exit 2 +fi + +#Check if the strings are using commas as separators +hosts_sep="${hosts//[^,]/}" +ports_sep="${ports//[^,]/}" +folders_sep="${folders//[^,]/}" +if [ "$hosts_sep" = "" ] +then + echo "coordinator_hosts should use commas as a separator" + exit 2 +fi +if [ "$ports_sep" = "" ] +then + echo "coordinator_ports should use commas as a separator" + exit 2 +fi +if [ "$folders_sep" = "" ] +then + echo "coordinator_folders should use commas as a separator" + exit 2 +fi + + +#----------------------------------------------------------------------- +# Fill in Arrays that are used for the process from pgxc configuration file +#----------------------------------------------------------------------- + +count=1 +#Coordinator list +host_local=`echo $hosts | cut -d "," -f $count` +while [ "$host_local" != "" ] +do + COORD_HOSTNAMES[$((count -1))]=`echo $host_local` + count=$((count +1)) + host_local=`echo $hosts | cut -d "," -f $count` +done +COORD_COUNT=${#COORD_HOSTNAMES[*]} + +#Port list corresponding to the coordinators +#If all the coordinators use the same port on different servers, +#it is possible to define that with a unique element array. +count=1 +port_local=`echo $ports | cut -d "," -f $count` +while [ "$port_local" != "" ] +do + COORD_PORTS[$((count -1))]=$port_local + count=$((count +1)) + port_local=`echo $ports | cut -d "," -f $count` +done +COORD_PORTS_COUNT=${#COORD_PORTS[*]} + +#Data folder list corresponding to the coordinators +#If all the coordinators use the same data folder name on different servers, +#it is possible to define that with a unique element array. +count=1 +folder_local=`echo $folders | cut -d "," -f $count` + +while [ "$folder_local" != "" ] +do + COORD_PGDATA[$((count -1))]=$folder_local + count=$((count +1)) + folder_local=`echo $folders | cut -d "," -f $count` +done +COORD_PGDATA_COUNT=${#COORD_PGDATA[*]} + + +#----------------------------------------------------------------------- +# Start DDL process +#----------------------------------------------------------------------- + +#It is supposed that the same bin folders are used among the servers +#to call postgres processes +#This can be customized by the user with option -l +COORD_SERVER_PROCESS=postgres +PGCTL_SERVER_PROCESS=pg_ctl +PSQL_CLIENT_PROCESS=psql + +COORD_SERVER=$PGXC_BASE$COORD_SERVER_PROCESS +PGCTL_SERVER=$PGXC_BASE$PGCTL_SERVER_PROCESS +PSQL_CLIENT=$PGXC_BASE$PSQL_CLIENT_PROCESS + +#reajust coord number with index number +COORD_NUM_ORIGIN=$((COORD_NUM_ORIGIN -1)) + +#check data validity +#Note: Add other checks here + +if [ $COORD_COUNT -eq "1" ] +then + echo "Are you sure you want to use this utility with one only coordinator??" 
+ exit 2 +fi + +if [ $COORD_PGDATA_COUNT -ne $COORD_COUNT ] +then + echo "Number of pgdata folders must be the same as coordinator server number" + exit 2 +fi + +if [ $COORD_PORTS_COUNT -ne $COORD_COUNT ] +then + echo "Number of coordinator ports defined must be the same as coordinator server number" + exit 2 +fi + +#Check if coordinator number is not outbounds +if [ $COORD_NUM_ORIGIN -gt $((COORD_COUNT -1)) ] +then + echo "coordinator number is out of bounds" + exit 2 +fi +COORD_ORIG_INDEX=$COORD_NUM_ORIGIN + +#Check if the data folders are defined +for index in ${!COORD_HOSTNAMES[*]} +do + targethost=${COORD_HOSTNAMES[$index]} + targetdata=${COORD_PGDATA[$index]} + if [[ `ssh $targethost test -d $targetdata && echo exists` ]] + then + echo "defined directory exists for "$targethost + else + echo "defined directory does not exist for "$targethost + exit 2 + fi +done + +#Origin Coordinator Index has been found? +if [ -z $COORD_ORIG_INDEX ] +then + echo "origin coordinator is not in the coordinator list" + exit 2 +fi + +#Main process begins + +#Check if the database is defined, This could lead to coordinator being stopped uselessly +if [ $DB_NAME != "" ] +then + #Simply launch a fake SQL on the Database wanted + $PSQL_CLIENT -h ${COORD_HOSTNAMES[$COORD_ORIG_INDEX]} -p ${COORD_PORTS[$COORD_ORIG_INDEX]} -c 'select now()' -d $DB_NAME; err=$? + if [ $err -gt "0" ] + then + echo "Database not defined" + exit 2 + fi +fi + +#1) stop all the coordinators +echo "Stopping all the coordinators" +for index in ${!COORD_HOSTNAMES[*]} +do + targethost=${COORD_HOSTNAMES[$index]} + targetdata=${COORD_PGDATA[$index]} + echo ssh $targethost $PGCTL_SERVER stop -D $targetdata + ssh $targethost $PGCTL_SERVER stop -D $targetdata; err=$? + if [ $err -gt "0" ] + then + "pg_ctl couldn't stop server" + exit 2 + fi +done + +#If a DDL file is not set by the user, just synchronize the catalogs with the catalog of the chosen coordinator +if [ "$DDL_FILE_NAME" != "" ] +then + echo "-f activated, DDL being launched" + + #2) restart the one we want to launch DDL to... + echo ssh ${COORD_HOSTNAMES[$COORD_ORIG_INDEX]} $COORD_SERVER $COORD_OPTIONS -p ${COORD_PORTS[$COORD_ORIG_INDEX]} -D ${COORD_PGDATA[$COORD_ORIG_INDEX]} + ssh ${COORD_HOSTNAMES[$COORD_ORIG_INDEX]} $COORD_SERVER $COORD_OPTIONS -p ${COORD_PORTS[$COORD_ORIG_INDEX]} -D ${COORD_PGDATA[$COORD_ORIG_INDEX]} & + + #wait a little bit to be sure it switched on + sleep 3 + + #3) launch the DDL + #This has to be done depending on if the user has defined a file or a command + echo $PSQL_CLIENT -h ${COORD_HOSTNAMES[$COORD_ORIG_INDEX]} -p ${COORD_PORTS[$COORD_ORIG_INDEX]} -f $DDL_FILE_NAME -d $DB_NAME + $PSQL_CLIENT -h ${COORD_HOSTNAMES[$COORD_ORIG_INDEX]} -p ${COORD_PORTS[$COORD_ORIG_INDEX]} -f $DDL_FILE_NAME -d $DB_NAME; err=$? + if [ $err -gt "0" ] + then + echo "psql error, is Database defined?" + exit 2 + fi + + #4) Stop again the origin coordinator as we cannot copy the lock files to other coordinators + echo ssh ${COORD_HOSTNAMES[$COORD_ORIG_INDEX]} $PGCTL_SERVER stop -D ${COORD_PGDATA[$COORD_ORIG_INDEX]} + ssh ${COORD_HOSTNAMES[$COORD_ORIG_INDEX]} $PGCTL_SERVER stop -D ${COORD_PGDATA[$COORD_ORIG_INDEX]}; err=$? 
+ if [ $err -gt "0" ] + then + "pg_ctl couldn't stop server" + exit 2 + fi +fi + +#5) before copying the catalogs, save the configuration files or they are erased by the catalog copy +#make a copy of them in a folder in /tmp/pgxc_conf (default folder) +if [ -d $CONFIG_FOLDER ] +then + rm -rf $CONFIG_FOLDER +fi +mkdir $CONFIG_FOLDER + +for index in ${!COORD_HOSTNAMES[*]} +do + if [ $index -ne $COORD_ORIG_INDEX ] + then + targethost=${COORD_HOSTNAMES[$index]} + targetdata=${COORD_PGDATA[$index]} + echo scp -pr $targethost:$targetdata/postgresql.conf $CONFIG_FOLDER/postgresql.conf.$index + echo scp -pr $targethost:$targetdata/pg_hba.conf $CONFIG_FOLDER/pg_hba.conf.$index + scp -pr $targethost:$targetdata/postgresql.conf $CONFIG_FOLDER/postgresql.conf.$index; err=$? + if [ $err -gt "0" ] + then + echo "deleting saved configuration files" + rm -rf $CONFIG_FOLDER + echo "scp failed with "$targethost + exit 2 + fi + scp -pr $targethost:$targetdata/pg_hba.conf $CONFIG_FOLDER/pg_hba.conf.$index; err=$? + if [ $err -gt "0" ] + then + echo "deleting saved configuration files" + rm -rf $CONFIG_FOLDER + echo "scp failed with "$targethost + exit 2 + fi + fi +done + +#6) copy catalog files to all coordinators but not to the origin one +for index in ${!COORD_HOSTNAMES[*]} +do + if [ $index -ne $COORD_ORIG_INDEX ] + then + srchost=${COORD_HOSTNAMES[$COORD_ORIG_INDEX]} + srcdata=${COORD_PGDATA[$COORD_ORIG_INDEX]} + targethost=${COORD_HOSTNAMES[$index]} + targetdata=${COORD_PGDATA[$index]} + #First erase the data to have a nice cleanup + echo ssh $targethost rm -rf $targetdata + ssh $targethost rm -rf $targetdata + + #Just to be sure that catalog files of origin coordinator are copied well + echo scp -pr $srchost:$srcdata $targethost:$targetdata + scp -pr $srchost:$srcdata $targethost:$targetdata; err=$? + if [ $err -gt "0" ] + then + echo "deleting saved configuration files" + rm -rf $CONFIG_FOLDER + echo "scp failed with "$targethost + exit 2 + fi + fi +done + +#7) copy back the configuration files to the corresponding fresh folders +#but not the configuration files of the origin coordinator +for index in ${!COORD_HOSTNAMES[*]} +do + if [ $index -ne $COORD_ORIG_INDEX ] + then + targethost=${COORD_HOSTNAMES[$index]} + targetdata=${COORD_PGDATA[$index]} + echo scp -pr $CONFIG_FOLDER/postgresql.conf.$index $targethost:$targetdata/postgresql.conf + echo scp -pr $CONFIG_FOLDER/pg_hba.conf.$index $targethost:$targetdata/pg_hba.conf + scp -pr $CONFIG_FOLDER/postgresql.conf.$index $targethost:$targetdata/postgresql.conf; err=$? + if [ $err -gt "0" ] + then + echo "deleting saved configuration files" + rm -rf $CONFIG_FOLDER + echo "scp failed with "$targethost + exit 2 + fi + scp -pr $CONFIG_FOLDER/pg_hba.conf.$index $targethost:$targetdata/pg_hba.conf; err=$? + if [ $err -gt "0" ] + then + echo "deleting saved configuration files" + rm -rf $CONFIG_FOLDER + echo "scp failed with "$targethost + exit 2 + fi + fi +done + +#8) wait a little bit... 
+sleep 1 + +#9) restart all the other coordinators, origin coordinator has been stopped after DDL run +for index in ${!COORD_HOSTNAMES[*]} +do + echo ssh ${COORD_HOSTNAMES[$index]} $COORD_SERVER $COORD_OPTIONS -p ${COORD_PORTS[$index]} -D ${COORD_PGDATA[$index]} & + ssh ${COORD_HOSTNAMES[$index]} $COORD_SERVER $COORD_OPTIONS -p ${COORD_PORTS[$index]} -D ${COORD_PGDATA[$index]} & +done + +sleep 2 + +#Clean also the folder in tmp keeping the configuration files +rm -rf $CONFIG_FOLDER + +#10) finished :p +exit \ No newline at end of file ----------------------------------------------------------------------- Summary of changes: src/backend/Makefile | 4 +- src/backend/utils/misc/pgxc.conf.sample | 20 ++ src/bin/initdb/initdb.c | 28 ++ src/bin/scripts/Makefile | 2 + src/bin/scripts/pgxc_ddl | 443 +++++++++++++++++++++++++++++++ 5 files changed, 496 insertions(+), 1 deletions(-) create mode 100644 src/backend/utils/misc/pgxc.conf.sample create mode 100644 src/bin/scripts/pgxc_ddl hooks/post-receive -- Postgres-XC |
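To make the pgxc.conf format introduced in this commit concrete, a filled-in file for a hypothetical two-coordinator cluster could look like the sketch below (host names, ports and data folders are invented). Note that the pgxc_ddl script above expects comma-separated lists for all three settings:

# Postgres-XC coordinator list used by pgxc_ddl (illustrative values only)
coordinator_hosts = 'coord1.example.com,coord2.example.com'    # hosts of the coordinators
coordinator_ports = '5451,5452'                                # ports of the coordinators
coordinator_folders = '/pgxc/data/coord1,/pgxc/data/coord2'    # data folders of the coordinators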
From: Michael P <mic...@us...> - 2010-07-13 01:23:31
Project "Postgres-XC". The branch, master has been updated via 0fdcc0b44b395df2e546ba90feaa0d656ad58f4d (commit) from d1efc186e0272a095ae14f4230b8da9ba49a24b7 (commit) - Log ----------------------------------------------------------------- commit 0fdcc0b44b395df2e546ba90feaa0d656ad58f4d Author: Michael P <mic...@us...> Date: Tue Jul 13 10:04:01 2010 +0900 Support for RENAME/DROP SCHEMA with sequences Since commit for the support of ALTER SEQUENCE, sequences use a global name in GTM based on: db_name.schema_name.sequence_name This commit permits to rename sequences on GTM if their schema's name is modified. This patch permits also to drop a sequence on GTM in the case that its schema is being dropped in cascade. diff --git a/src/backend/catalog/dependency.c b/src/backend/catalog/dependency.c index 8090e2f..af57e68 100644 --- a/src/backend/catalog/dependency.c +++ b/src/backend/catalog/dependency.c @@ -53,6 +53,10 @@ #include "catalog/pg_user_mapping.h" #ifdef PGXC #include "catalog/pgxc_class.h" +#include "pgxc/pgxc.h" +#include "commands/sequence.h" +#include "gtm/gtm_c.h" +#include "access/gtm.h" #endif #include "commands/comment.h" #include "commands/dbcommands.h" @@ -338,6 +342,89 @@ performMultipleDeletions(const ObjectAddresses *objects, heap_close(depRel, RowExclusiveLock); } +#ifdef PGXC +/* + * Check type and class of the given object and rename it properly on GTM + */ +static void +doRename(const ObjectAddress *object, const char *oldname, const char *newname) +{ + switch (getObjectClass(object)) + { + case OCLASS_CLASS: + { + char relKind = get_rel_relkind(object->objectId); + + /* + * If we are here, a schema is being renamed, a sequence depends on it. + * as sequences' global name use the schema name, this sequence + * has also to be renamed on GTM. + */ + if (relKind == RELKIND_SEQUENCE && IS_PGXC_COORDINATOR) + { + Relation relseq = relation_open(object->objectId, AccessShareLock); + char *seqname = GetGlobalSeqName(relseq, NULL, oldname); + char *newseqname = GetGlobalSeqName(relseq, NULL, newname); + + /* We also need to rename this sequence on GTM, it has a global name ! */ + if (RenameSequenceGTM(seqname, newseqname) < 0) + ereport(ERROR, + (errcode(ERRCODE_CONNECTION_FAILURE), + errmsg("GTM error, could not rename sequence"))); + + pfree(seqname); + pfree(newseqname); + + relation_close(relseq, AccessShareLock); + } + } + default: + /* Nothing to do, this object has not to be renamed, end of the story... */ + break; + } +} + +/* + * performRename: used to rename objects + * on GTM depending on another object(s) + */ +void +performRename(const ObjectAddress *object, const char *oldname, const char *newname) +{ + Relation depRel; + ObjectAddresses *targetObjects; + int i; + + /* + * Check the dependencies on this object + * And rename object dependent if necessary + */ + + depRel = heap_open(DependRelationId, RowExclusiveLock); + + targetObjects = new_object_addresses(); + + findDependentObjects(object, + DEPFLAG_ORIGINAL, + NULL, /* empty stack */ + targetObjects, + NULL, + depRel); + + /* Check Objects one by one to see if some of them have to be renamed on GTM */ + for (i = 0; i < targetObjects->numrefs; i++) + { + ObjectAddress *thisobj = targetObjects->refs + i; + doRename(thisobj, oldname, newname); + } + + /* And clean up */ + free_object_addresses(targetObjects); + + heap_close(depRel, RowExclusiveLock); +} +#endif + /* * deleteWhatDependsOn: attempt to drop everything that depends on the * specified object, though not the object itself. 
Behavior is always @@ -1047,6 +1134,33 @@ doDeletion(const ObjectAddress *object) else heap_drop_with_catalog(object->objectId); } + +#ifdef PGXC + /* Drop the sequence on GTM */ + if (relKind == RELKIND_SEQUENCE && IS_PGXC_COORDINATOR) + { + /* + * The sequence has already been removed from coordinator, + * finish the stuff on GTM too + */ + /* PGXCTODO: allow the ability to rollback or abort dropping sequences. */ + + Relation relseq; + char *seqname; + /* + * A relation is opened to get the schema and database name as + * such data is not available before when dropping a function. + */ + relseq = relation_open(object->objectId, AccessShareLock); + seqname = GetGlobalSeqName(relseq, NULL, NULL); + + DropSequenceGTM(seqname); + pfree(seqname); + + /* Then close the relation opened previously */ + relation_close(relseq, AccessShareLock); + } +#endif /* PGXC */ break; } diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c index 0d047cf..5704c99 100644 --- a/src/backend/commands/schemacmds.c +++ b/src/backend/commands/schemacmds.c @@ -31,6 +31,9 @@ #include "utils/lsyscache.h" #include "utils/syscache.h" +#ifdef PGXC +#include "pgxc/pgxc.h" +#endif static void AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId); @@ -298,6 +301,26 @@ RenameSchema(const char *oldname, const char *newname) simple_heap_update(rel, &tup->t_self, tup); CatalogUpdateIndexes(rel, tup); +#ifdef PGXC + if (IS_PGXC_COORDINATOR) + { + ObjectAddress object; + Oid namespaceId; + + /* Check object dependency and see if there is a sequence. If yes rename it */ + namespaceId = GetSysCacheOid(NAMESPACENAME, + CStringGetDatum(oldname), + 0, 0, 0); + /* Create the object that will be checked for the dependencies */ + object.classId = NamespaceRelationId; + object.objectId = namespaceId; + object.objectSubId = 0; + + /* Rename all the objects depending on this schema */ + performRename(&object, oldname, newname); + } +#endif + heap_close(rel, NoLock); heap_freetuple(tup); } diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c index ba30206..83ddbab 100644 --- a/src/backend/commands/sequence.c +++ b/src/backend/commands/sequence.c @@ -352,7 +352,7 @@ DefineSequence(CreateSeqStmt *seq) #ifdef PGXC /* PGXC_COORD */ if (IS_PGXC_COORDINATOR) { - char *seqname = GetGlobalSeqName(rel, NULL); + char *seqname = GetGlobalSeqName(rel, NULL, NULL); /* We also need to create it on the GTM */ if (CreateSequenceGTM(seqname, @@ -494,7 +494,7 @@ AlterSequenceInternal(Oid relid, List *options) #ifdef PGXC if (IS_PGXC_COORDINATOR) { - char *seqname = GetGlobalSeqName(seqrel, NULL); + char *seqname = GetGlobalSeqName(seqrel, NULL, NULL); /* We also need to create it on the GTM */ if (AlterSequenceGTM(seqname, @@ -587,7 +587,7 @@ nextval_internal(Oid relid) #ifdef PGXC /* PGXC_COORD */ if (IS_PGXC_COORDINATOR) { - char *seqname = GetGlobalSeqName(seqrel, NULL); + char *seqname = GetGlobalSeqName(seqrel, NULL, NULL); /* * Above, we still use the page as a locking mechanism to handle @@ -785,7 +785,7 @@ currval_oid(PG_FUNCTION_ARGS) #ifdef PGXC if (IS_PGXC_COORDINATOR) { - char *seqname = GetGlobalSeqName(seqrel, NULL); + char *seqname = GetGlobalSeqName(seqrel, NULL, NULL); result = (int64) GetCurrentValGTM(seqname); if (result < 0) @@ -911,7 +911,7 @@ do_setval(Oid relid, int64 next, bool iscalled) #ifdef PGXC if (IS_PGXC_COORDINATOR) { - char *seqname = GetGlobalSeqName(seqrel, NULL); + char *seqname = GetGlobalSeqName(seqrel, NULL, NULL); if (SetValGTM(seqname, next, 
iscalled) < 0) ereport(ERROR, @@ -1423,20 +1423,24 @@ init_params(List *options, bool isInit, */ char * -GetGlobalSeqName(Relation seqrel, const char *new_seqname) +GetGlobalSeqName(Relation seqrel, const char *new_seqname, const char *new_schemaname) { char *seqname, *dbname, *schemaname, *relname; int charlen; /* Get all the necessary relation names */ dbname = get_database_name(seqrel->rd_node.dbNode); - schemaname = get_namespace_name(RelationGetNamespace(seqrel)); if (new_seqname) - relname = new_seqname; + relname = (char *) new_seqname; else relname = RelationGetRelationName(seqrel); + if (new_schemaname) + schemaname = (char *) new_schemaname; + else + schemaname = get_namespace_name(RelationGetNamespace(seqrel)); + /* Calculate the global name size including the dots and \0 */ charlen = strlen(dbname) + strlen(schemaname) + strlen(relname) + 3; seqname = (char *) palloc(charlen); diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c index 33782c4..22fd416 100644 --- a/src/backend/commands/tablecmds.c +++ b/src/backend/commands/tablecmds.c @@ -768,29 +768,6 @@ RemoveRelations(DropStmt *drop) add_exact_object_address(&obj, objects); -#ifdef PGXC /* PGXC_COORD */ - /* PGXCTODO: allow the ability to rollback dropping sequences. */ - - /* Drop the sequence */ - if (IS_PGXC_COORDINATOR && classform->relkind == RELKIND_SEQUENCE) - { - Relation relseq; - char *seqname; - - /* - * A relation is opened to get the schema and database name as - * such data is not available before when dropping a function. - */ - relseq = relation_open(obj.objectId, AccessShareLock); - seqname = GetGlobalSeqName(relseq, NULL); - - DropSequenceGTM(seqname); - pfree(seqname); - - /* Then close the relation opened previously */ - relation_close(relseq, AccessShareLock); - } -#endif ReleaseSysCache(tuple); } @@ -2120,14 +2097,17 @@ RenameRelation(Oid myrelid, const char *newrelname, ObjectType reltype) if (IS_PGXC_COORDINATOR && (reltype == OBJECT_SEQUENCE || relkind == RELKIND_SEQUENCE)) /* It is possible to rename a sequence with ALTER TABLE */ { - char *seqname = GetGlobalSeqName(targetrelation, NULL); - char *newseqname = GetGlobalSeqName(targetrelation, newrelname); + char *seqname = GetGlobalSeqName(targetrelation, NULL, NULL); + char *newseqname = GetGlobalSeqName(targetrelation, newrelname, NULL); /* We also need to rename it on the GTM */ if (RenameSequenceGTM(seqname, newseqname) < 0) ereport(ERROR, (errcode(ERRCODE_CONNECTION_FAILURE), errmsg("GTM error, could not rename sequence"))); + + pfree(seqname); + pfree(newseqname); } #endif diff --git a/src/backend/tcop/utility.c b/src/backend/tcop/utility.c index 6965e2e..7dd2a7e 100644 --- a/src/backend/tcop/utility.c +++ b/src/backend/tcop/utility.c @@ -601,7 +601,6 @@ ProcessUtility(Node *parsetree, { uint64 processed; #ifdef PGXC - bool done; processed = DoCopy((CopyStmt *) parsetree, queryString, true); #else processed = DoCopy((CopyStmt *) parsetree, queryString): diff --git a/src/include/catalog/dependency.h b/src/include/catalog/dependency.h index a4049c3..74c6d15 100644 --- a/src/include/catalog/dependency.h +++ b/src/include/catalog/dependency.h @@ -162,6 +162,12 @@ extern void performDeletion(const ObjectAddress *object, extern void performMultipleDeletions(const ObjectAddresses *objects, DropBehavior behavior); +#ifdef PGXC +extern void performRename(const ObjectAddress *object, + const char *oldname, + const char *newname); +#endif + extern void deleteWhatDependsOn(const ObjectAddress *object, bool showNotices); diff 
--git a/src/include/commands/sequence.h b/src/include/commands/sequence.h index f54f74f..adb70ec 100644 --- a/src/include/commands/sequence.h +++ b/src/include/commands/sequence.h @@ -103,7 +103,7 @@ extern void seq_redo(XLogRecPtr lsn, XLogRecord *rptr); extern void seq_desc(StringInfo buf, uint8 xl_info, char *rec); #ifdef PGXC -extern char *GetGlobalSeqName(Relation rel, const char *new_seqname); +extern char *GetGlobalSeqName(Relation rel, const char *new_seqname, const char *new_schemaname); #endif #endif /* SEQUENCE_H */ ----------------------------------------------------------------------- Summary of changes: src/backend/catalog/dependency.c | 114 +++++++++++++++++++++++++++++++++++++ src/backend/commands/schemacmds.c | 23 ++++++++ src/backend/commands/sequence.c | 20 ++++--- src/backend/commands/tablecmds.c | 30 ++-------- src/backend/tcop/utility.c | 1 - src/include/catalog/dependency.h | 6 ++ src/include/commands/sequence.h | 2 +- 7 files changed, 161 insertions(+), 35 deletions(-) hooks/post-receive -- Postgres-XC |
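The patch above extends GetGlobalSeqName() with a schema-name override so callers such as RenameSchema() can rebuild the GTM-wide sequence identifier. A minimal sketch of the naming scheme follows, assuming the identifier is assembled as dbname.schemaname.seqname — the hunk only shows the length calculation "including the dots and \0" — and build_global_seq_name is a hypothetical stand-in written as plain C so it compiles outside the backend:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Sketch only: assemble a GTM-wide sequence identifier from the parts
 * GetGlobalSeqName() collects.  The "db.schema.seq" layout is an
 * assumption based on the length calculation in the hunk above
 * (three name parts, two dots, one terminating '\0').
 */
static char *
build_global_seq_name(const char *dbname, const char *schemaname,
					  const char *relname)
{
	size_t	len = strlen(dbname) + strlen(schemaname) + strlen(relname) + 3;
	char   *seqname = malloc(len);

	if (seqname == NULL)
		return NULL;
	snprintf(seqname, len, "%s.%s.%s", dbname, schemaname, relname);
	return seqname;
}

int
main(void)
{
	char   *name = build_global_seq_name("postgres", "public", "my_seq");

	if (name != NULL)
	{
		printf("%s\n", name);	/* postgres.public.my_seq */
		free(name);
	}
	return 0;
}

With the extra parameters from the patch, a caller would simply pass the new schema or sequence name in place of the one read from the relation before the string is assembled.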
From: Pavan D. <pa...@us...> - 2010-07-12 07:45:51
|
Project "Postgres-XC". The branch, master has been updated via d1efc186e0272a095ae14f4230b8da9ba49a24b7 (commit) from 8ce8906c2d45e0aa1164c9beaedb2637853a2e03 (commit) - Log ----------------------------------------------------------------- commit d1efc186e0272a095ae14f4230b8da9ba49a24b7 Author: Pavan Deolasee <pav...@gm...> Date: Mon Jul 12 13:12:57 2010 +0530 Handling ALTER SEQUENCE at the GTM proxy as well. Michael Paquier. diff --git a/src/gtm/proxy/proxy_main.c b/src/gtm/proxy/proxy_main.c index 75c7baf..f5f6e65 100644 --- a/src/gtm/proxy/proxy_main.c +++ b/src/gtm/proxy/proxy_main.c @@ -86,7 +86,7 @@ static void ProcessTransactionCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, GTM_MessageType mtype, StringInfo message); static void ProcessSnapshotCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, GTM_MessageType mtype, StringInfo message); -static void ProcessSeqeunceCommand(GTMProxy_ConnectionInfo *conninfo, +static void ProcessSequenceCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, GTM_MessageType mtype, StringInfo message); static void GTMProxy_RegisterCoordinator(GTMProxy_ConnectionInfo *conninfo, @@ -579,7 +579,6 @@ GTMProxy_ThreadMain(void *argp) char gtm_connect_string[1024]; elog(DEBUG3, "Starting the connection helper thread"); - /* * Create the memory context we will use in the main loop. @@ -595,7 +594,7 @@ GTMProxy_ThreadMain(void *argp) ALLOCSET_DEFAULT_INITSIZE, ALLOCSET_DEFAULT_MAXSIZE, false); - + /* * Set up connection with the GTM server */ @@ -808,7 +807,7 @@ GTMProxy_ThreadMain(void *argp) ProcessCommand(thrinfo->thr_conn, thrinfo->thr_gtm_conn, &input_message); break; - + case 'X': case EOF: /* @@ -917,7 +916,7 @@ GTMProxyAddConnection(Port *port) { ereport(ERROR, (ENOMEM, - errmsg("Out of memory"))); + errmsg("Out of memory"))); return STATUS_ERROR; } @@ -942,31 +941,35 @@ ProcessCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, switch (mtype) { - case MSG_UNREGISTER_COORD: + case MSG_UNREGISTER_COORD: ProcessCoordinatorCommand(conninfo, gtm_conn, mtype, input_message); break; - case MSG_TXN_BEGIN: - case MSG_TXN_BEGIN_GETGXID: + case MSG_TXN_BEGIN: + case MSG_TXN_BEGIN_GETGXID: case MSG_TXN_BEGIN_GETGXID_AUTOVACUUM: - case MSG_TXN_PREPARE: - case MSG_TXN_COMMIT: - case MSG_TXN_ROLLBACK: + case MSG_TXN_PREPARE: + case MSG_TXN_COMMIT: + case MSG_TXN_ROLLBACK: case MSG_TXN_GET_GXID: ProcessTransactionCommand(conninfo, gtm_conn, mtype, input_message); break; - case MSG_SNAPSHOT_GET: + case MSG_SNAPSHOT_GET: case MSG_SNAPSHOT_GXID_GET: ProcessSnapshotCommand(conninfo, gtm_conn, mtype, input_message); break; - case MSG_SEQUENCE_INIT: + case MSG_SEQUENCE_INIT: case MSG_SEQUENCE_GET_CURRENT: case MSG_SEQUENCE_GET_NEXT: + case MSG_SEQUENCE_GET_LAST: + case MSG_SEQUENCE_SET_VAL: case MSG_SEQUENCE_RESET: case MSG_SEQUENCE_CLOSE: - ProcessSeqeunceCommand(conninfo, gtm_conn, mtype, input_message); + case MSG_SEQUENCE_RENAME: + case MSG_SEQUENCE_ALTER: + ProcessSequenceCommand(conninfo, gtm_conn, mtype, input_message); break; default: @@ -1104,16 +1107,20 @@ ProcessResponse(GTMProxy_ThreadInfo *thrinfo, GTMProxy_CommandInfo *cmdinfo, cmdinfo->ci_conn->con_pending_msg = MSG_TYPE_INVALID; break; - case MSG_TXN_BEGIN: + case MSG_TXN_BEGIN: case MSG_TXN_BEGIN_GETGXID_AUTOVACUUM: - case MSG_TXN_PREPARE: + case MSG_TXN_PREPARE: case MSG_TXN_GET_GXID: case MSG_SNAPSHOT_GXID_GET: - case MSG_SEQUENCE_INIT: + case MSG_SEQUENCE_INIT: case MSG_SEQUENCE_GET_CURRENT: case MSG_SEQUENCE_GET_NEXT: + case MSG_SEQUENCE_GET_LAST: + case 
MSG_SEQUENCE_SET_VAL: case MSG_SEQUENCE_RESET: case MSG_SEQUENCE_CLOSE: + case MSG_SEQUENCE_RENAME: + case MSG_SEQUENCE_ALTER: if ((res->gr_proxyhdr.ph_conid == InvalidGTMProxyConnID) || (res->gr_proxyhdr.ph_conid >= GTM_PROXY_MAX_CONNECTIONS) || (thrinfo->thr_all_conns[res->gr_proxyhdr.ph_conid] != cmdinfo->ci_conn)) @@ -1251,13 +1258,13 @@ ProcessTransactionCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, switch (mtype) { - case MSG_TXN_BEGIN_GETGXID: + case MSG_TXN_BEGIN_GETGXID: cmd_data.cd_beg.iso_level = pq_getmsgint(message, sizeof (GTM_IsolationLevel)); cmd_data.cd_beg.rdonly = pq_getmsgbyte(message); - GTMProxy_CommandPending(conninfo, mtype, cmd_data); + GTMProxy_CommandPending(conninfo, mtype, cmd_data); break; - case MSG_TXN_COMMIT: + case MSG_TXN_COMMIT: case MSG_TXN_ROLLBACK: cmd_data.cd_rc.isgxid = pq_getmsgbyte(message); if (cmd_data.cd_rc.isgxid) @@ -1281,7 +1288,7 @@ ProcessTransactionCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, memcpy(&cmd_data.cd_rc.handle, data, sizeof (GTM_TransactionHandle)); } pq_getmsgend(message); - GTMProxy_CommandPending(conninfo, mtype, cmd_data); + GTMProxy_CommandPending(conninfo, mtype, cmd_data); break; case MSG_TXN_BEGIN: @@ -1291,7 +1298,7 @@ ProcessTransactionCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, case MSG_TXN_BEGIN_GETGXID_AUTOVACUUM: case MSG_TXN_PREPARE: - GTMProxy_ProxyCommand(conninfo, gtm_conn, mtype, message); + GTMProxy_ProxyCommand(conninfo, gtm_conn, mtype, message); break; default: @@ -1336,7 +1343,7 @@ ProcessSnapshotCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, memcpy(&cmd_data.cd_snap.handle, data, sizeof (GTM_TransactionHandle)); } pq_getmsgend(message); - GTMProxy_CommandPending(conninfo, mtype, cmd_data); + GTMProxy_CommandPending(conninfo, mtype, cmd_data); } break; @@ -1351,7 +1358,7 @@ ProcessSnapshotCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, } static void -ProcessSeqeunceCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, +ProcessSequenceCommand(GTMProxy_ConnectionInfo *conninfo, GTM_Conn *gtm_conn, GTM_MessageType mtype, StringInfo message) { /* ----------------------------------------------------------------------- Summary of changes: src/gtm/proxy/proxy_main.c | 55 ++++++++++++++++++++++++------------------- 1 files changed, 31 insertions(+), 24 deletions(-) hooks/post-receive -- Postgres-XC |
From: mason_s <ma...@us...> - 2010-07-07 13:34:41
|
Project "Postgres-XC". The branch, master has been updated via 8ce8906c2d45e0aa1164c9beaedb2637853a2e03 (commit) from 5800b1b7b84dac3759f25a4a37afcb2ed26a1a63 (commit) - Log ----------------------------------------------------------------- commit 8ce8906c2d45e0aa1164c9beaedb2637853a2e03 Author: Mason S <masonsharp@mason-sharps-macbook.local> Date: Wed Jul 7 15:33:45 2010 +0200 Fix a crash that may occur within the pooler when a data node crashes. Written by Andrei Matsinchyk diff --git a/src/backend/pgxc/pool/poolmgr.c b/src/backend/pgxc/pool/poolmgr.c index 79106b5..6427da3 100644 --- a/src/backend/pgxc/pool/poolmgr.c +++ b/src/backend/pgxc/pool/poolmgr.c @@ -4,15 +4,15 @@ * * Connection pool manager handles connections to DataNodes * - * The pooler runs as a separate process and is forked off from a - * coordinator postmaster. If the coordinator needs a connection from a + * The pooler runs as a separate process and is forked off from a + * coordinator postmaster. If the coordinator needs a connection from a * data node, it asks for one from the pooler, which maintains separate * pools for each data node. A group of connections can be requested in - * a single request, and the pooler returns a list of file descriptors + * a single request, and the pooler returns a list of file descriptors * to use for the connections. * * Note the current implementation does not yet shrink the pool over time - * as connections are idle. Also, it does not queue requests; if a + * as connections are idle. Also, it does not queue requests; if a * connection is unavailable, it will simply fail. This should be implemented * one day, although there is a chance for deadlocks. For now, limiting * connections should be done between the application and coordinator. @@ -113,8 +113,8 @@ extern int pqReadReady(PGconn *conn); static volatile sig_atomic_t shutdown_requested = false; -/* - * Initialize internal structures +/* + * Initialize internal structures */ int PoolManagerInit() @@ -433,8 +433,8 @@ PoolManagerInit() } -/* - * Destroy internal structures +/* + * Destroy internal structures */ int PoolManagerDestroy(void) @@ -575,8 +575,8 @@ PoolManagerConnect(PoolHandle *handle, const char *database) } -/* - * Init PoolAgent +/* + * Init PoolAgent */ static void agent_init(PoolAgent *agent, const char *database, List *nodes) @@ -598,8 +598,8 @@ agent_init(PoolAgent *agent, const char *database, List *nodes) } -/* - * Destroy PoolAgent +/* + * Destroy PoolAgent */ static void agent_destroy(PoolAgent *agent) @@ -636,8 +636,8 @@ agent_destroy(PoolAgent *agent) } -/* - * Release handle to pool manager +/* + * Release handle to pool manager */ void PoolManagerDisconnect(PoolHandle *handle) @@ -653,8 +653,8 @@ PoolManagerDisconnect(PoolHandle *handle) } -/* - * Get pooled connections +/* + * Get pooled connections */ int * PoolManagerGetConnections(List *nodelist) @@ -759,7 +759,7 @@ agent_handle_input(PoolAgent * agent, StringInfo s) } -/* +/* * acquire connection */ static int * @@ -827,8 +827,8 @@ agent_acquire_connections(PoolAgent *agent, List *nodelist) } -/* - * Retun connections back to the pool +/* + * Retun connections back to the pool */ void PoolManagerReleaseConnections(void) @@ -972,8 +972,8 @@ destroy_database_pool(const char *database) } -/* - * Insert new database pool to the list +/* + * Insert new database pool to the list */ static void insert_database_pool(DatabasePool *databasePool) @@ -991,8 +991,8 @@ insert_database_pool(DatabasePool *databasePool) } -/* - * Find pool for specified database in the 
list +/* + * Find pool for specified database in the list */ static DatabasePool * @@ -1015,8 +1015,8 @@ find_database_pool(const char *database) } -/* - * Remove pool for specified database from the list +/* + * Remove pool for specified database from the list */ static DatabasePool * remove_database_pool(const char *database) @@ -1075,41 +1075,40 @@ acquire_connection(DatabasePool *dbPool, int node) } /* Check available connections */ - if (nodePool && nodePool->freeSize > 0) + while (nodePool && nodePool->freeSize > 0) { int poll_result; - while (nodePool->freeSize > 0) - { - slot = nodePool->slot[--(nodePool->freeSize)]; + slot = nodePool->slot[--(nodePool->freeSize)]; retry: - /* Make sure connection is ok */ - poll_result = pqReadReady(slot->conn); - - if (poll_result == 0) - break; /* ok, no data */ - else if (poll_result < 0) - { - if (errno == EAGAIN || errno == EINTR) - goto retry; + /* Make sure connection is ok */ + poll_result = pqReadReady(slot->conn); - elog(WARNING, "Error in checking connection, errno = %d", errno); - } - else - elog(WARNING, "Unexpected data on connection, cleaning."); + if (poll_result == 0) + break; /* ok, no data */ + else if (poll_result < 0) + { + if (errno == EAGAIN || errno == EINTR) + goto retry; - destroy_slot(slot); - /* Decrement current max pool size */ - (nodePool->size)--; - /* Ensure we are not below minimum size */ - grow_pool(dbPool, node - 1); + elog(WARNING, "Error in checking connection, errno = %d", errno); } + else + elog(WARNING, "Unexpected data on connection, cleaning."); + + destroy_slot(slot); + slot = NULL; + + /* Decrement current max pool size */ + (nodePool->size)--; + /* Ensure we are not below minimum size */ + grow_pool(dbPool, node - 1); } - else - ereport(LOG, - (errcode(ERRCODE_INSUFFICIENT_RESOURCES), - errmsg("connection pool is empty"))); + + if (slot == NULL) + elog(WARNING, "can not connect to data node %d", node); + return slot; } ----------------------------------------------------------------------- Summary of changes: src/backend/pgxc/pool/poolmgr.c | 101 +++++++++++++++++++-------------------- 1 files changed, 50 insertions(+), 51 deletions(-) hooks/post-receive -- Postgres-XC |
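The poolmgr.c hunk above is mostly re-indentation, so the shape of the fixed loop is easier to see in isolation: pop idle slots one by one, probe each before handing it out, destroy the ones that went bad, and return NULL (with a warning) when nothing healthy is left. The standalone sketch below shows that validate-before-handout pattern; NodePool, connection_is_healthy, destroy_slot and grow_pool are simplified stand-ins for the pooler's real structures, and connection_is_healthy replaces the pqReadReady()/errno handling, so this illustrates the pattern rather than the actual pooler code.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the pooler's real types and helpers. */
typedef struct Slot { int fd; } Slot;

typedef struct NodePool
{
	Slot   *slot[16];
	int		freeSize;	/* number of idle slots */
	int		size;		/* current pool size */
} NodePool;

static bool connection_is_healthy(Slot *s) { return s && s->fd >= 0; }
static void destroy_slot(Slot *s)          { (void) s; }
static void grow_pool(NodePool *p)         { (void) p; }

/*
 * Sketch of the fixed acquire path: keep popping idle slots until one
 * passes the health check; broken ones are discarded instead of being
 * handed to the caller.  Returns NULL when no healthy connection is
 * available, mirroring the new "can not connect to data node" warning.
 */
static Slot *
acquire_healthy_slot(NodePool *pool)
{
	Slot   *slot = NULL;

	while (pool->freeSize > 0)
	{
		slot = pool->slot[--pool->freeSize];

		if (connection_is_healthy(slot))
			break;				/* ok, hand this one out */

		/* connection went bad while idle: drop it and keep looking */
		destroy_slot(slot);
		slot = NULL;
		pool->size--;
		grow_pool(pool);		/* keep the pool at its minimum size */
	}

	if (slot == NULL)
		fprintf(stderr, "no healthy connection available\n");
	return slot;
}

int
main(void)
{
	NodePool	pool = { {0}, 0, 0 };
	Slot		good = { 3 };
	Slot		bad = { -1 };

	pool.slot[0] = &good;		/* popped second */
	pool.slot[1] = &bad;		/* popped first and discarded */
	pool.freeSize = 2;
	pool.size = 2;

	printf("%s\n", acquire_healthy_slot(&pool) == &good
		   ? "got healthy slot" : "no slot");
	return 0;
}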
From: mason_s <ma...@us...> - 2010-07-07 13:33:07
|
Project "Postgres-XC". The branch, master has been updated via 5800b1b7b84dac3759f25a4a37afcb2ed26a1a63 (commit) via 73249fbb42f4c05de85181428a7ae143c0c8d254 (commit) from 47f4b06f6e25426bb775d7a7372309b68c7e1f47 (commit) - Log ----------------------------------------------------------------- commit 5800b1b7b84dac3759f25a4a37afcb2ed26a1a63 Author: Mason S <masonsharp@mason-sharps-macbook.local> Date: Wed Jul 7 15:31:15 2010 +0200 In Postgres-XC, the error stack may overflow because AbortTransaction may be called multiple times, each time calling DataNodeRollback, which may fail again if a data node is down. Instead, if we are already in an abort state, we do not bother repeating abort actions. diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c index 757f99d..491d0d5 100644 --- a/src/backend/access/transam/xact.c +++ b/src/backend/access/transam/xact.c @@ -2697,6 +2697,20 @@ AbortCurrentTransaction(void) } } +#ifdef PGXC +/* + * AbortCurrentTransactionOnce + * + * Abort transaction, but only if we have not already. + */ +void +AbortCurrentTransactionOnce(void) +{ + if (CurrentTransactionState->state != TRANS_ABORT) + AbortCurrentTransaction(); +} +#endif + /* * PreventTransactionChain * diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c index 4cb0b27..553a682 100644 --- a/src/backend/tcop/postgres.c +++ b/src/backend/tcop/postgres.c @@ -3768,7 +3768,15 @@ PostgresMain(int argc, char *argv[], const char *username) /* * Abort the current transaction in order to recover. */ +#ifdef PGXC + /* + * Temporarily do not abort if we are already in an abort state. + * This change tries to handle the case where the error data stack fills up. + */ + AbortCurrentTransactionOnce(); +#else AbortCurrentTransaction(); +#endif /* * Now return to normal top-level context and clear ErrorContext for diff --git a/src/include/access/xact.h b/src/include/access/xact.h index fe69611..5bd157b 100644 --- a/src/include/access/xact.h +++ b/src/include/access/xact.h @@ -163,6 +163,9 @@ extern void CommandCounterIncrement(void); extern void ForceSyncCommit(void); extern void StartTransactionCommand(void); extern void CommitTransactionCommand(void); +#ifdef PGXC +extern void AbortCurrentTransactionOnce(void); +#endif extern void AbortCurrentTransaction(void); extern void BeginTransactionBlock(void); extern bool EndTransactionBlock(void); commit 73249fbb42f4c05de85181428a7ae143c0c8d254 Author: Mason S <masonsharp@mason-sharps-macbook.local> Date: Wed Jul 7 15:30:16 2010 +0200 Changed some error messages so that they will not be duplicates to better pinpoint some issues. 
diff --git a/src/backend/pgxc/pool/execRemote.c b/src/backend/pgxc/pool/execRemote.c index 1ee1d59..c6f9042 100644 --- a/src/backend/pgxc/pool/execRemote.c +++ b/src/backend/pgxc/pool/execRemote.c @@ -475,7 +475,7 @@ HandleCopyOutComplete(RemoteQueryState *combiner) /* Inconsistent responses */ ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes for 'c' message, current request type %d", combiner->request_type))); /* Just do nothing, close message is managed by the coordinator */ combiner->copy_out_count++; } @@ -559,7 +559,7 @@ HandleRowDescription(RemoteQueryState *combiner, char *msg_body, size_t len) /* Inconsistent responses */ ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes for 'T' message, current request type %d", combiner->request_type))); } /* Increment counter and check if it was first */ if (combiner->description_count++ == 0) @@ -583,7 +583,7 @@ HandleParameterStatus(RemoteQueryState *combiner, char *msg_body, size_t len) /* Inconsistent responses */ ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes for 'S' message, current request type %d", combiner->request_type))); } /* Proxy last */ if (++combiner->description_count == combiner->node_count) @@ -605,7 +605,7 @@ HandleCopyIn(RemoteQueryState *combiner) /* Inconsistent responses */ ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes for 'G' message, current request type %d", combiner->request_type))); } /* * The normal PG code will output an G message when it runs in the @@ -627,7 +627,7 @@ HandleCopyOut(RemoteQueryState *combiner) /* Inconsistent responses */ ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes for 'H' message, current request type %d", combiner->request_type))); } /* * The normal PG code will output an H message when it runs in the @@ -649,7 +649,7 @@ HandleCopyDataRow(RemoteQueryState *combiner, char *msg_body, size_t len) if (combiner->request_type != REQUEST_TYPE_COPY_OUT) ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes for 'd' message, current request type %d", combiner->request_type))); /* If there is a copy file, data has to be sent to the local file */ if (combiner->copy_file) @@ -675,7 +675,7 @@ HandleDataRow(RemoteQueryState *combiner, char *msg_body, size_t len) /* Inconsistent responses */ ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes for 'D' message, current request type %d", combiner->request_type))); } /* @@ -943,7 +943,8 @@ data_node_receive_responses(const int conn_count, DataNodeHandle ** connections, data_node_receive(count, to_receive, timeout); while (i < count) { - switch (handle_response(to_receive[i], combiner)) + int result = handle_response(to_receive[i], combiner); + switch (result) { case RESPONSE_EOF: /* have something to read, keep receiving */ i++; @@ -960,7 +961,7 @@ data_node_receive_responses(const int conn_count, DataNodeHandle ** connections, /* Inconsistent 
responses */ ereport(ERROR, (errcode(ERRCODE_INTERNAL_ERROR), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes, result = %d, request type %d", result, combiner->request_type))); } } } @@ -1679,7 +1680,7 @@ DataNodeCopyOut(Exec_Nodes *exec_nodes, DataNodeHandle** copy_connections, FILE* pfree(copy_connections); ereport(ERROR, (errcode(ERRCODE_DATA_CORRUPTED), - errmsg("Unexpected response from the data nodes"))); + errmsg("Unexpected response from the data nodes when combining, request type %d", combiner->request_type))); } return processed; ----------------------------------------------------------------------- Summary of changes: src/backend/access/transam/xact.c | 14 ++++++++++++++ src/backend/pgxc/pool/execRemote.c | 21 +++++++++++---------- src/backend/tcop/postgres.c | 8 ++++++++ src/include/access/xact.h | 3 +++ 4 files changed, 36 insertions(+), 10 deletions(-) hooks/post-receive -- Postgres-XC |
From: andrei_mart <and...@us...> - 2010-07-05 06:12:31
|
Project "Postgres-XC". The branch, master has been updated via 47f4b06f6e25426bb775d7a7372309b68c7e1f47 (commit) from c61f6b7e606131d3963ed83bcfa40c000d2e0aab (commit) - Log ----------------------------------------------------------------- commit 47f4b06f6e25426bb775d7a7372309b68c7e1f47 Author: Andrei Martsinchyk <And...@en...> Date: Mon Jul 5 09:08:20 2010 +0300 Fixed a bug when searching terminating semicolon. Initial position was at \0, which is not considered as a witespace. Start from the character immediately before \0. diff --git a/src/backend/pgxc/plan/planner.c b/src/backend/pgxc/plan/planner.c index 461f96a..002e710 100644 --- a/src/backend/pgxc/plan/planner.c +++ b/src/backend/pgxc/plan/planner.c @@ -1675,11 +1675,11 @@ reconstruct_step_query(List *rtable, bool has_order_by, List *extra_sort, /* the same offset in the original string */ int offset = sql_from - sql; /* - * Remove terminating semicolon to be able to append extra - * order by entries. If query is submitted from client other than psql - * the terminator may not present. + * Truncate query at the position of terminating semicolon to be able + * to append extra order by entries. If query is submitted from client + * other than psql the terminator may not present. */ - char *end = step->sql_statement + strlen(step->sql_statement); + char *end = step->sql_statement + strlen(step->sql_statement) - 1; while(isspace((unsigned char) *end) && end > step->sql_statement) end--; if (*end == ';') ----------------------------------------------------------------------- Summary of changes: src/backend/pgxc/plan/planner.c | 8 ++++---- 1 files changed, 4 insertions(+), 4 deletions(-) hooks/post-receive -- Postgres-XC |
From: andrei_mart <and...@us...> - 2010-07-02 16:03:41
|
Project "Postgres-XC". The branch, master has been updated via c61f6b7e606131d3963ed83bcfa40c000d2e0aab (commit) from 5d83e22e3cabc3d1e5dc425f492e4459b30a67a0 (commit) - Log ----------------------------------------------------------------- commit c61f6b7e606131d3963ed83bcfa40c000d2e0aab Author: Andrei Martsinchyk <And...@en...> Date: Fri Jul 2 18:51:15 2010 +0300 If expressions should be added to ORDER BY clause of the step query we search for terminating semicolon to determine position where expressions should be added. Added handling for the case if query is not terminated with a semicolon. Also, small optimization - use sorting on coordinator only if step is going to be executed on two or more nodes. diff --git a/src/backend/pgxc/plan/planner.c b/src/backend/pgxc/plan/planner.c index 2cf488c..461f96a 100644 --- a/src/backend/pgxc/plan/planner.c +++ b/src/backend/pgxc/plan/planner.c @@ -51,7 +51,7 @@ typedef struct long constant; /* assume long PGXCTODO - should be Datum */ } Literal_Comparison; -/* Parent-Child joins for relations being joined on +/* Parent-Child joins for relations being joined on * their respective hash distribuion columns */ typedef struct @@ -114,7 +114,7 @@ typedef struct ColumnBase * the rtable for the particular query. This way we can use * varlevelsup to resolve Vars in nested queries */ -typedef struct XCWalkerContext +typedef struct XCWalkerContext { Query *query; bool isRead; @@ -325,7 +325,7 @@ get_numeric_constant(Expr *expr) * This is required because a RangeTblEntry may actually be another * type, like a join, and we need to then look at the joinaliasvars * to determine what the base table and column really is. - * + * * rtables is a List of rtable Lists. */ static ColumnBase* @@ -338,8 +338,8 @@ get_base_var(Var *var, XCWalkerContext *context) if (!AttrNumberIsForUserDefinedAttr(var->varattno)) return NULL; - /* - * Get the RangeTableEntry + /* + * Get the RangeTableEntry * We take nested subqueries into account first, * we may need to look further up the query tree. * The most recent rtable is at the end of the list; top most one is first. @@ -514,8 +514,8 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context) *rel_loc_info2; Const *constant; Expr *checkexpr; - bool result = false; - bool is_and = false; + bool result = false; + bool is_and = false; Assert(context); @@ -534,7 +534,7 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context) /* If we get here, that meant the previous call before recursing down did not * find the condition safe yet. * Since we pass down our context, this is the bit of code that will detect - * that we are using more than one relation in a condition which has not + * that we are using more than one relation in a condition which has not * already been deemed safe. 
*/ Var *var_node = (Var *) expr_node; @@ -591,7 +591,7 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context) } } - /* + /* * Look for equality conditions on partiioned columns, but only do so * if we are not in an OR or NOT expression */ @@ -743,7 +743,7 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context) && IsHashColumn(rel_loc_info2, column_base2->colname)) { /* We found a partitioned join */ - Parent_Child_Join *parent_child = (Parent_Child_Join *) + Parent_Child_Join *parent_child = (Parent_Child_Join *) palloc0(sizeof(Parent_Child_Join)); parent_child->rel_loc_info1 = rel_loc_info1; @@ -762,7 +762,7 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context) /* * At this point, there is some other type of join that * can probably not be executed on only a single node. - * Just return, as it may be updated later. + * Just return, as it may be updated later. * Important: We preserve previous * pgxc_join->join_type value, there may be multiple * columns joining two tables, and we want to make sure at @@ -787,7 +787,7 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context) /* save parent-child count */ if (context->exec_nodes) - save_parent_child_count = list_length(context->conditions->partitioned_parent_child); + save_parent_child_count = list_length(context->conditions->partitioned_parent_child); context->exec_nodes = NULL; context->multilevel_join = false; @@ -824,14 +824,14 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context) if (same_single_node (context->exec_nodes->nodelist, save_exec_nodes->nodelist)) return false; } - else + else /* use old value */ context->exec_nodes = save_exec_nodes; } - } else + } else { if (context->exec_nodes->tableusagetype == TABLE_USAGE_TYPE_USER_REPLICATED) - return false; + return false; /* See if subquery safely joins with parent */ if (!is_multilevel) return true; @@ -993,8 +993,8 @@ get_plan_nodes_walker(Node *query_node, XCWalkerContext *context) from_subquery_count++; - /* - * Recursively call for subqueries. + /* + * Recursively call for subqueries. * Note this also works for views, which are rewritten as subqueries. */ context->rtables = lappend(context->rtables, current_rtable); @@ -1012,7 +1012,7 @@ get_plan_nodes_walker(Node *query_node, XCWalkerContext *context) if (current_nodes) current_usage_type = current_nodes->tableusagetype; - else + else /* could be complicated */ return true; @@ -1088,7 +1088,7 @@ get_plan_nodes_walker(Node *query_node, XCWalkerContext *context) context->exec_nodes = (Exec_Nodes *) palloc0(sizeof(Exec_Nodes)); context->exec_nodes->tableusagetype = TABLE_USAGE_TYPE_PGCATALOG; return false; - } + } /* Examine the WHERE clause, too */ if (examine_conditions_walker(query->jointree->quals, context)) @@ -1129,18 +1129,18 @@ get_plan_nodes_walker(Node *query_node, XCWalkerContext *context) { rte = (RangeTblEntry *) lfirst(lc); - /* - * If the query is rewritten (which can be due to rules or views), - * ignore extra stuff. Also ignore subqueries we have processed + /* + * If the query is rewritten (which can be due to rules or views), + * ignore extra stuff. 
Also ignore subqueries we have processed */ if ((!rte->inFromCl && query->commandType == CMD_SELECT) || rte->rtekind != RTE_RELATION) continue; /* PGXCTODO - handle RTEs that are functions */ if (rtesave) - /* - * Too complicated, we have multiple relations that still - * cannot be joined safely + /* + * Too complicated, we have multiple relations that still + * cannot be joined safely */ return true; @@ -1209,7 +1209,7 @@ get_plan_nodes_walker(Node *query_node, XCWalkerContext *context) */ Parent_Child_Join *parent_child; - parent_child = (Parent_Child_Join *) + parent_child = (Parent_Child_Join *) linitial(context->conditions->partitioned_parent_child); context->exec_nodes = GetRelationNodes(parent_child->rel_loc_info1, NULL, context->isRead); @@ -1218,7 +1218,7 @@ get_plan_nodes_walker(Node *query_node, XCWalkerContext *context) if (from_query_nodes) { - if (!context->exec_nodes) + if (!context->exec_nodes) { context->exec_nodes = from_query_nodes; return false; @@ -1229,9 +1229,9 @@ get_plan_nodes_walker(Node *query_node, XCWalkerContext *context) else if (from_query_nodes->tableusagetype == TABLE_USAGE_TYPE_USER_REPLICATED || (same_single_node(from_query_nodes->nodelist, context->exec_nodes->nodelist))) return false; - else + else { - /* We allow views, where the (rewritten) subquery may be on all nodes, + /* We allow views, where the (rewritten) subquery may be on all nodes, * but the parent query applies a condition on the from subquery. */ if (list_length(query->jointree->fromlist) == from_subquery_count @@ -1674,9 +1674,16 @@ reconstruct_step_query(List *rtable, bool has_order_by, List *extra_sort, { /* the same offset in the original string */ int offset = sql_from - sql; - /* remove terminating semicolon */ - char *end = strrchr(step->sql_statement, ';'); - *end = '\0'; + /* + * Remove terminating semicolon to be able to append extra + * order by entries. If query is submitted from client other than psql + * the terminator may not present. + */ + char *end = step->sql_statement + strlen(step->sql_statement); + while(isspace((unsigned char) *end) && end > step->sql_statement) + end--; + if (*end == ';') + *end = '\0'; appendStringInfoString(buf, step->sql_statement + offset); } @@ -2069,7 +2076,9 @@ GetQueryPlan(Node *parsetree, const char *sql_statement, List *querytree_list) /* * Add sortring to the step */ - if (query->sortClause || query->distinctClause) + if (query_plan->exec_loc_type == EXEC_ON_DATA_NODES && + list_length(query_step->exec_nodes->nodelist) > 1 && + (query->sortClause || query->distinctClause)) make_simple_sort_from_sortclauses(query, query_step); /* ----------------------------------------------------------------------- Summary of changes: src/backend/pgxc/plan/planner.c | 75 ++++++++++++++++++++++----------------- 1 files changed, 42 insertions(+), 33 deletions(-) hooks/post-receive -- Postgres-XC |