From: Theodotos A. <th...@ub...> - 2013-03-30 12:40:07

Guys, I am using pgxc_ctl to build an HA Postgres-XC cluster with 4 coordinator/datanode hosts and 2 GTM nodes. I use the postgres-xc user and I have passwordless ssh between all nodes and the separate pgxc_ctl machine. The dirs ~/bin and ~/pgxc/bin are in the $PATH of the postgres-xc user. These are the steps I followed:

1) Compile pgxc and tools:

postgres-xc@node-pgxc-ctl:~/pgxc-v1.0.2$ ./configure --prefix=/var/lib/postgres-xc/pgxc
postgres-xc@node-pgxc-ctl:~/pgxc-v1.0.2$ make
postgres-xc@node-pgxc-ctl:~/pgxc-v1.0.2$ make install
postgres-xc@node-pgxc-ctl:~/pgxc-v1.0.2$ cd contrib/
postgres-xc@node-pgxc-ctl:~/pgxc-v1.0.2/contrib$ make install
postgres-xc@node-pgxc-ctl:~/pgxc-v1.0.2/contrib$ cd pgxc_monitor/
# pgxc_monitor is not included in the latest pgxc. I had to download it separately.
postgres-xc@node-pgxc-ctl:~/pgxc-v1.0.2/contrib/pgxc_monitor$ make install

2) Download the script:

I downloaded pgxc_ctl into ~/bin and patched line 4566 (put a space in front of ']').

3) Configuration of pgxc_ctl:

Created the config file:

postgres-xc@node-pgxc-ctl:~$ pgxc_ctl prepare config

I edited ~/pgxc/pgxcConf as shown here: http://pastebin.com/DHwpwdr1

4) Deployed:

postgres-xc@node-pgxc-ctl:~$ pgxc_ctl deploy all
wk.tgz 100% 7365KB 7.2MB/s 00:00
wk.tgz 100% 7365KB 7.2MB/s 00:00
wk.tgz 100% 7365KB 7.2MB/s 00:00
wk.tgz 100% 7365KB 7.2MB/s 00:00
wk.tgz 100% 7365KB 7.2MB/s 00:00
wk.tgz 100% 7365KB 7.2MB/s 00:01

deploy log: http://pastebin.com/eETjNmmV

5) Initialize all:

postgres-xc@node-pgxc-ctl:~$ pgxc_ctl init

This gives a lot of errors. Init log: http://pastebin.com/ML5HLJ2i

6) Monitor all:

postgres-xc@node-pgxc-ctl:~$ pgxc_ctl monitor all
GTM master (gtm): running. host: node-pgxcgtm01, port: 20001, dir: /var/lib/postgres-xc/pgxc/nodes/gtm
GTM slave (gtm): running. host: node-pgxcgtm02, port: 20001, dir: /var/lib/postgres-xc/pgxc/nodes/gtm
GTM proxy (gtm_pxy1): running. host: node-pgxcdb01, port: 20001, dir: /var/lib/postgres-xc/pgxc/nodes/gtm_pxy
GTM proxy (gtm_pxy2): running. host: node-pgxcdb02, port: 20001, dir: /var/lib/postgres-xc/pgxc/nodes/gtm_pxy
GTM proxy (gtm_pxy3): running. host: node-pgxcdb03, port: 20001, dir: /var/lib/postgres-xc/pgxc/nodes/gtm_pxy
GTM proxy (gtm_pxy4): running. host: node-pgxcdb04, port: 20001, dir: /var/lib/postgres-xc/pgxc/nodes/gtm_pxy
Coordinator master (coord1): running. host: node-pgxcdb01, port: 20004, dir: /var/lib/postgres-xc/pgxc/nodes/coord
Coordinator master (coord2): running. host: node-pgxcdb02, port: 20005, dir: /var/lib/postgres-xc/pgxc/nodes/coord
Coordinator master (coord3): running. host: node-pgxcdb03, port: 20004, dir: /var/lib/postgres-xc/pgxc/nodes/coord
Coordinator master (coord4): running. host: node-pgxcdb04, port: 20005, dir: /var/lib/postgres-xc/pgxc/nodes/coord
Coordinator slave (coord1): running. host: node-pgxcdb02, port: 20004, dir: /var/lib/postgres-xc/pgxc/nodes/coord_slave
Coordinator slave (coord2): running. host: node-pgxcdb03, port: 20005, dir: /var/lib/postgres-xc/pgxc/nodes/coord_slave
Coordinator slave (coord3): running. host: node-pgxcdb04, port: 20004, dir: /var/lib/postgres-xc/pgxc/nodes/coord_slave
Coordinator slave (coord4): running. host: node-pgxcdb01, port: 20005, dir: /var/lib/postgres-xc/pgxc/nodes/coord_slave
Datanode master (datanode1): not running. host: node-pgxcdb01, port: 20008, dir: /var/lib/postgres-xc/pgxc/nodes/dn_master
Datanode master (datanode2): not running. host: node-pgxcdb02, port: 20009, dir: /var/lib/postgres-xc/pgxc/nodes/dn_master
Datanode master (datanode3): not running. host: node-pgxcdb03, port: 20008, dir: /var/lib/postgres-xc/pgxc/nodes/dn_master
Datanode master (datanode4): not running. host: node-pgxcdb04, port: 20009, dir: /var/lib/postgres-xc/pgxc/nodes/dn_master
Datanode slave (datanode1): not running. host: node-pgxcdb02, port: 20008, dir: /var/lib/postgres-xc/pgxc/nodes/dn_slave
Datanode slave (datanode2): not running. host: node-pgxcdb03, port: 20009, dir: /var/lib/postgres-xc/pgxc/nodes/dn_slave
Datanode slave (datanode3): not running. host: node-pgxcdb04, port: 20008, dir: /var/lib/postgres-xc/pgxc/nodes/dn_slave
Datanode slave (datanode4): not running. host: node-pgxcdb01, port: 20009, dir: /var/lib/postgres-xc/pgxc/nodes/dn_slave

But not all of the above information is true. Coordinators are all up, but only the datanode masters are working:

# dsh -a 'echo "" ; echo $HOSTNAME ; echo "------------" ; ps auxw | grep postgres | grep dn'

node-pgxcdb01
------------
999 13843 0.0 0.3 53236 7308 ? S 13:15 0:00 /var/lib/postgres-xc/pgxc/bin/postgres -X -D /var/lib/postgres-xc/pgxc/nodes/dn_master -i
root 18229 0.0 0.0 11008 1432 ? Ss 14:21 0:00 bash -c echo "" ; echo $HOSTNAME ; echo "------------" ; ps auxw | grep postgres | grep dn

node-pgxcdb02
------------
999 12911 0.0 0.3 53232 7304 ? S 13:15 0:00 /var/lib/postgres-xc/pgxc/bin/postgres -X -D /var/lib/postgres-xc/pgxc/nodes/dn_master -i
root 17295 0.0 0.0 11008 1436 ? Ss 14:21 0:00 bash -c echo "" ; echo $HOSTNAME ; echo "------------" ; ps auxw | grep postgres | grep dn

node-pgxcdb03
------------
999 12918 0.0 0.3 53232 7312 ? S 13:15 0:00 /var/lib/postgres-xc/pgxc/bin/postgres -X -D /var/lib/postgres-xc/pgxc/nodes/dn_master -i
root 17383 0.0 0.0 11008 1436 ? Ss 14:21 0:00 bash -c echo "" ; echo $HOSTNAME ; echo "------------" ; ps auxw | grep postgres | grep dn

node-pgxcdb04
------------
999 12920 0.0 0.3 53240 7316 ? S 13:15 0:00 /var/lib/postgres-xc/pgxc/bin/postgres -X -D /var/lib/postgres-xc/pgxc/nodes/dn_master -i
root 17270 0.0 0.0 11008 1432 ? Ss 14:21 0:00 bash -c echo "" ; echo $HOSTNAME ; echo "------------" ; ps auxw | grep postgres | grep dn

node-pgxcgtm01
------------
root 7087 0.0 0.0 11008 1432 ? Ss 14:21 0:00 bash -c echo "" ; echo $HOSTNAME ; echo "------------" ; ps auxw | grep postgres | grep dn

node-pgxcgtm02
------------
root 5913 0.0 0.0 11008 1432 ? Ss 14:21 0:00 bash -c echo "" ; echo $HOSTNAME ; echo "------------" ; ps auxw | grep postgres | grep dn

Trying to start the slaves manually I get:

postgres-xc@node-pgxcdb01:~$ /var/lib/postgres-xc/pgxc/bin/postgres -X -D /var/lib/postgres-xc/pgxc/nodes/dn_slave/ -i
FATAL: "/var/lib/postgres-xc/pgxc/nodes/dn_slave" is not a valid data directory
DETAIL: File "/var/lib/postgres-xc/pgxc/nodes/dn_slave/PG_VERSION" is missing.

The datanode slaves have not been initialized because pg_basebackup fails with:

Questions:

a) Any idea what I am doing wrong? The init logs are not very clear about what the problem is.
b) Can I run "pgxc_ctl deploy all" again? Will this mess up the current setup if I do?
c) Why does pgxc_monitor report that the datanode masters are down when they are in fact working?
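A quick way to tell which instances were never initialized (as opposed to crashed) is to check for PG_VERSION, the very file the FATAL message above complains about: initdb (or a completed pg_basebackup) always leaves it at the top of the data directory. A minimal sketch, using a hypothetical throwaway path instead of the cluster's real dn_slave directory:

```shell
# Check whether a PostgreSQL/Postgres-XC data directory has been
# initialized: an initialized datadir contains PG_VERSION at its top
# level, and the server refuses to start without it.
datadir=$(mktemp -d)   # hypothetical stand-in for .../nodes/dn_slave

if [ -f "$datadir/PG_VERSION" ]; then
    echo "$datadir: initialized"
else
    echo "$datadir: not initialized; run initdb or pg_basebackup first"
fi
```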
From: Theodotos A. <th...@ub...> - 2013-03-30 07:05:57

Guys, hello. I get this:

$ pgxc_ctl start coordinator master
/var/lib/postgres-xc/bin/pgxc_ctl: line 4566: [: missing `]'

I had to change this:

if [ "$1" == "gtm" ] || [ "$1" == "$gtmName"]; then

to this:

if [ "$1" == "gtm" ] || [ "$1" == "$gtmName" ]; then
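For anyone patching by hand, the root cause is worth spelling out: `[` is an ordinary command, not shell syntax, and the closing `]` must arrive as its own whitespace-separated argument. Writing `"$gtmName"]` fuses the bracket onto the expanded value, so `[` never sees its terminator and fails exactly as above. A minimal reproduction (variable values hypothetical, not from pgxc_ctl; `=` is the portable spelling of the script's bash-only `==`):

```shell
set -- gtm        # simulate the script being invoked with the argument "gtm"
gtmName="gtm"

# Fixed form: the closing `]` is a separate word, so `[` parses a
# complete test expression.
if [ "$1" = "gtm" ] || [ "$1" = "$gtmName" ]; then
    echo "matched"
fi

# The broken form, [ "$1" == "$gtmName"], passes the single word `gtm]`
# as the last argument, and `[` aborts with: [: missing `]'
```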
From: Paul J. <pb...@cm...> - 2013-03-29 14:12:27

Does anyone know of any companies (willing to reveal it, anyway) that are using Postgres-XC in production today or will be soon? And, if so, would they be willing to talk to me about it?

Paul Jones
From: Abbas B. <abb...@en...> - 2013-03-28 20:18:38

On Thu, Mar 28, 2013 at 8:24 PM, Paul Jones <pb...@cm...> wrote:

> If you have an existing cluster with databases, tables, users, etc.,
> is it possible to add coordinators (or data nodes) after the fact and
> synchronize them with the existing configuration?

This feature is currently under development and will be available in the next release.

> We tried adding a coordinator to a small cluster we have and it could
> not see any pre-existing data or users. The new coordinator could see
> everything done after it was added.
>
> Paul Jones
>
> ------------------------------------------------------------------------------
> Own the Future-Intel® Level Up Game Demo Contest 2013
> Rise to greatness in Intel's independent game demo contest.
> Compete for recognition, cash, and the chance to get your game
> on Steam. $5K grand prize plus 10 genre and skill prizes.
> Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general

--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: 92-334-5100153
Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb
From: Paul J. <pb...@cm...> - 2013-03-28 15:37:15

If you have an existing cluster with databases, tables, users, etc., is it possible to add coordinators (or data nodes) after the fact and synchronize them with the existing configuration?

We tried adding a coordinator to a small cluster we have and it could not see any pre-existing data or users. The new coordinator could see everything done after it was added.

Paul Jones
From: Koichi S. <koi...@gm...> - 2013-03-28 08:47:32

Does anybody have a suggestion for where such info should be added? CREATE|ALTER NODE? Configuration? I'm afraid the configuration section in the document already consists of too many descriptions. Even if we put such info there, we may need some separate sections/materials to guide "how to configure XC".

Best;
----------
Koichi Suzuki

2013/3/27 seikath <se...@gm...>:
> I was about to put a request for that; most of the available sources do not
> have that particular info, all of them mention only the datanode creation.
>
> About Barcelona, well, I keep my word guys: whoever is in Barcelona and
> has some free time, ping me please.
>
> At Michael: I'm not a Spaniard, but I do enjoy watching football ( Visca
> Barça !!! ) with friends, some beers and sharing ideas and experience .. :)
>
> In fact I do that with several friends of mine from SkySQL ..
>
> Cheers, and thank you all again.
>
> Ivan
>
> On 03/27/2013 04:52 AM, Michael Paquier wrote:
>> On Wed, Mar 27, 2013 at 3:16 AM, Nikhil Sontakke <ni...@st...> wrote:
>>>> prod-xc-coord01
>>>> postgres=# select * from pgxc_node;
>>>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>>>> -----------+-----------+-----------+--------------+----------------+------------------+------------
>>>> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
>>>> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
>>>> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>>>>
>>>> prod-xc-coord02
>>>> postgres=# select * from pgxc_node;
>>>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>>>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>>>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>>>> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
>>>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>>>>
>>>> after that setup I was able to create databases from both coordinator
>>>> nodes, but each coordinator does not see the database created by the other
>>>> coordinator.
>>>
>>> You need to add coord1 metadata on coord2 node and vice versa as well.
>>
>> There are so many people making the same mistake and requesting help for
>> similar stuff on this mailing list that it should be worth improving the
>> documentation.
>> --
>> Michael
From: seikath <se...@gm...> - 2013-03-27 08:47:14

I was about to put a request for that; most of the available sources do not have that particular info, all of them mention only the datanode creation.

About Barcelona, well, I keep my word guys: whoever is in Barcelona and has some free time, ping me please.

At Michael: I'm not a Spaniard, but I do enjoy watching football ( Visca Barça !!! ) with friends, some beers and sharing ideas and experience .. :)

In fact I do that with several friends of mine from SkySQL ..

Cheers, and thank you all again.

Ivan

On 03/27/2013 04:52 AM, Michael Paquier wrote:
> On Wed, Mar 27, 2013 at 3:16 AM, Nikhil Sontakke <ni...@st...> wrote:
>>> prod-xc-coord01
>>> postgres=# select * from pgxc_node;
>>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>>> -----------+-----------+-----------+--------------+----------------+------------------+------------
>>> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
>>> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
>>> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>>>
>>> prod-xc-coord02
>>> postgres=# select * from pgxc_node;
>>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>>> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
>>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>>>
>>> after that setup I was able to create databases from both coordinator
>>> nodes, but each coordinator does not see the database created by the other
>>> coordinator.
>>
>> You need to add coord1 metadata on coord2 node and vice versa as well.
>
> There are so many people making the same mistake and requesting help for
> similar stuff on this mailing list that it should be worth improving the
> documentation.
> --
> Michael
From: Nikhil S. <ni...@st...> - 2013-03-27 06:06:55

> Whenever you have a chance to visit Barcelona, let me know, I owe you one :)

Barcelona! Sure Ivan :)

Regards,
Nikhils

> Kind regards,
>
> Ivan
>
> On 03/26/2013 07:16 PM, Nikhil Sontakke wrote:
>>> prod-xc-coord01
>>> postgres=# select * from pgxc_node;
>>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>>> -----------+-----------+-----------+--------------+----------------+------------------+------------
>>> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
>>> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
>>> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>>>
>>> prod-xc-coord02
>>> postgres=# select * from pgxc_node;
>>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>>> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
>>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>>>
>>> after that setup I was able to create databases from both coordinator nodes, but each coordinator does not see the database created by the other coordinator.
>>
>> You need to add coord1 metadata on coord2 node and vice versa as well.
>>
>> Regards,
>> Nikhils

--
StormDB - http://www.stormdb.com
The Database Cloud
From: Michael P. <mic...@gm...> - 2013-03-27 03:56:30

On Wed, Mar 27, 2013 at 4:44 AM, seikath <se...@gm...> wrote:
> Whenever you have a chance to visit Barcelona, let me know, I owe you one :)

Avoid saying that... I can answer really, really quickly, and my home country is close to yours. My country even lost a soccer match yesterday against yours... ;)
--
Michael
From: Michael P. <mic...@gm...> - 2013-03-27 03:52:41

On Wed, Mar 27, 2013 at 3:16 AM, Nikhil Sontakke <ni...@st...> wrote:
>> prod-xc-coord01
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+------------
>> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
>> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
>> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>>
>> prod-xc-coord02
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>>
>> after that setup I was able to create databases from both coordinator
>> nodes, but each coordinator does not see the database created by the other
>> coordinator.
>
> You need to add coord1 metadata on coord2 node and vice versa as well.

There are so many people making the same mistake and requesting help for similar stuff on this mailing list that it should be worth improving the documentation.
--
Michael
From: Ashutosh B. <ash...@en...> - 2013-03-27 03:25:08

Hi Ivan,

On Tue, Mar 26, 2013 at 8:01 PM, seikath <se...@gm...> wrote:
> Hello Ashutosh,
>
> My initial setup was datanode1 as primary on all coordinators.
>
> But the database created on coordinator1 was not visible by coordinator2
> even though it was populated at both datanodes.

Can you provide some more information? This looks like a bug.

> So I tested with one primary datanode on one coordinator, hoping the GTM
> will know that and will distribute the info.
> Anyway, these are my first steps with XC, so I might have made some simple
> config error ..
>
> I want to use the coordinators as entry points for loadbalanced external
> SQL requests
>
> Kind regards,
>
> Ivan
>
> On 03/26/2013 03:21 PM, Ashutosh Bapat wrote:
>
> On Tue, Mar 26, 2013 at 7:46 PM, seikath <se...@gm...> wrote:
>
>> Hello all,
>>
>> I have an XC setup of 4 AWS instances:
>>
>> =============================
>> instance: prod-xc-coord1
>>
>> coordinator config at prod-xc-coord1
>> listen_addresses = '*'
>> port = 5432
>> max_connections = 100
>> shared_buffers = 120MB
>> max_prepared_transactions = 100
>> datestyle = 'iso, mdy'
>> lc_messages = 'en_US.UTF-8'
>> lc_monetary = 'en_US.UTF-8'
>> lc_numeric = 'en_US.UTF-8'
>> lc_time = 'en_US.UTF-8'
>> default_text_search_config = 'pg_catalog.english'
>> pooler_port = 6667
>> min_pool_size = 1
>> max_pool_size = 100
>> max_coordinators = 16
>> max_datanodes = 16
>> gtm_host = '10.196.154.85'
>> gtm_port = 6543
>> pgxc_node_name = 'coord1'
>> enforce_two_phase_commit = on
>> enable_fast_query_shipping = on
>> enable_remotejoin = on
>> enable_remotegroup = on
>>
>> datanode config at prod-xc-coord1
>> listen_addresses = '*'
>> port = 6543
>> max_connections = 100
>> shared_buffers = 320MB
>> max_prepared_transactions = 100
>> datestyle = 'iso, mdy'
>> lc_messages = 'en_US.UTF-8'
>> lc_monetary = 'en_US.UTF-8'
>> lc_numeric = 'en_US.UTF-8'
>> lc_time = 'en_US.UTF-8'
>> default_text_search_config = 'pg_catalog.english'
>> max_coordinators = 16
>> max_datanodes = 16
>> gtm_host = '10.196.154.85'
>> gtm_port = 6543
>> pgxc_node_name = 'datanode1'
>> enforce_two_phase_commit = on
>> enable_fast_query_shipping = on
>> enable_remotejoin = on
>> enable_remotegroup = on
>>
>> =============================
>> instance : prod-xc-coord2
>>
>> coordinator config at prod-xc-coord2
>> listen_addresses = '*'
>> port = 5432
>> max_connections = 100
>> superuser_reserved_connections = 3
>> shared_buffers = 120MB
>> max_prepared_transactions = 100
>> datestyle = 'iso, mdy'
>> lc_messages = 'en_US.UTF-8'
>> lc_monetary = 'en_US.UTF-8'
>> lc_numeric = 'en_US.UTF-8'
>> lc_time = 'en_US.UTF-8'
>> default_text_search_config = 'pg_catalog.english'
>> pooler_port = 6667
>> min_pool_size = 1
>> max_pool_size = 100
>> max_coordinators = 16
>> max_datanodes = 16
>> gtm_host = '10.196.154.85'
>> gtm_port = 6543
>> pgxc_node_name = 'coord2'
>> enforce_two_phase_commit = on
>> enable_fast_query_shipping = on
>> enable_remotejoin = on
>> enable_remotegroup = on
>>
>> datanode config at prod-xc-coord2
>> listen_addresses = '*'
>> port = 6543
>> max_connections = 100
>> shared_buffers = 320MB
>> max_prepared_transactions = 100
>> datestyle = 'iso, mdy'
>> lc_messages = 'en_US.UTF-8'
>> lc_monetary = 'en_US.UTF-8'
>> lc_numeric = 'en_US.UTF-8'
>> lc_time = 'en_US.UTF-8'
>> default_text_search_config = 'pg_catalog.english'
>> max_coordinators = 16
>> max_datanodes = 16
>> gtm_host = '10.196.154.85'
>> gtm_port = 6543
>> pgxc_node_name = 'datanode2'
>> enforce_two_phase_commit = on
>> enable_fast_query_shipping = on
>> enable_remotejoin = on
>> enable_remotegroup = on
>>
>> =============================
>> instance prod-xc-gtm-proxy : IP 10.196.154.85
>>
>> proxy config:
>> nodename = 'one'
>> listen_addresses = '*'
>> port = 6543
>> gtm_host = '10.244.158.120'
>> gtm_port = 5432
>>
>> =============================
>> instance prod-xc-gtm : IP 10.244.158.120
>> gtm config
>> nodename = 'one'
>> listen_addresses = '*'
>> port = 5432
>>
>> =============================
>>
>> the pg_hba.conf of both coordinator and data nodes at both prod-xc-coord1
>> and prod-xc-coord2 allows the other node to connect:
>> =================================================
>> pg_hba.conf at prod-xc-coord01 IP 10.245.114.8
>> local all all trust
>> host all all 127.0.0.1/32 trust
>> host all all ::1/128 trust
>> host all all 10.101.51.38/32 trust
>>
>> pg_hba.conf at prod-xc-coord02 IP 10.101.51.38
>> local all all trust
>> host all all 127.0.0.1/32 trust
>> host all all ::1/128 trust
>> host all all 10.245.114.8/32 trust
>>
>> the connectivity is tested and confirmed.
>> =================================================
>>
>> initial nodes setup:
>> prod-xc-coord01
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+------------
>> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
>> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
>> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>>
>> prod-xc-coord02
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>>
>> after that setup I was able to create databases from both coordinator
>> nodes, but each coordinator does not see the database created by the other
>> coordinator.
>>
>> then tested the node setup with only one primary node:
>> prod-xc-coord02
>> postgres=# alter node datanode1 with (type = 'datanode', host =
>> '10.245.114.8', port = 6543, primary=false, preferred=false);
>> ALTER NODE
>> postgres=# select pgxc_pool_reload();
>> pgxc_pool_reload
>> ------------------
>> t
>> (1 row)
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>> datanode1 | D         | 6543      | 10.245.114.8 | f              | f                | 888802358
>> (3 rows)
>>
>> the result is the same.
>>
>> I know I am missing something simple such as a config or an open port, but at the
>> moment I can't figure out what's missing in the setup.
>
> What are you trying to do here? You have set primary node for datanode1 to
> false, which is what the query result displays. Can you please elaborate
> what's going wrong?
>
>> In general our plan is to use a loadbalancer in front of several instances
>> hosting one coordinator and one datanode.
>>
>> I apologize for the ugly paste, but I am not sure if this mailing list
>> supports html formatting.
>>
>> Kind regards,
>>
>> Ivan
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: seikath <se...@gm...> - 2013-03-26 19:44:13

Nikhil, thank you, that did the job.

Whenever you have a chance to visit Barcelona, let me know, I owe you one :)

Kind regards,

Ivan

On 03/26/2013 07:16 PM, Nikhil Sontakke wrote:
>> prod-xc-coord01
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+------------
>> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
>> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
>> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>>
>> prod-xc-coord02
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>>
>> after that setup I was able to create databases from both coordinator nodes, but each coordinator does not see the database created by the other coordinator.
>>
> You need to add coord1 metadata on coord2 node and vice versa as well.
>
> Regards,
> Nikhils
From: Nikhil S. <ni...@st...> - 2013-03-26 18:16:47

> prod-xc-coord01
> postgres=# select * from pgxc_node;
> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> -----------+-----------+-----------+--------------+----------------+------------------+------------
> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>
> prod-xc-coord02
> postgres=# select * from pgxc_node;
> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> -----------+-----------+-----------+--------------+----------------+------------------+-------------
> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>
> after that setup I was able to create databases from both coordinator nodes, but each coordinator does not see the database created by the other coordinator.

You need to add coord1 metadata on coord2 node and vice versa as well.

Regards,
Nikhils
--
StormDB - http://www.stormdb.com
The Database Cloud
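For anyone finding this thread later, the fix Nikhil describes follows from the fact that each coordinator only knows the nodes registered in its own pgxc_node catalog, so the two coordinators must be cross-registered. A sketch using the hosts from the listings above (standard Postgres-XC CREATE NODE syntax; node names and ports are taken from this thread and should be adjusted to your cluster):

```sql
-- On coord1 (10.245.114.8): register the other coordinator, then
-- refresh the connection pool so the change takes effect.
CREATE NODE coord2 WITH (TYPE = 'coordinator', HOST = '10.101.51.38', PORT = 5432);
SELECT pgxc_pool_reload();

-- On coord2 (10.101.51.38): register coord1 the same way.
CREATE NODE coord1 WITH (TYPE = 'coordinator', HOST = '10.245.114.8', PORT = 5432);
SELECT pgxc_pool_reload();
```

Note that objects created before the cross-registration still have to be recreated or synchronized by hand; as Abbas notes elsewhere in the thread, automatic node addition was still under development at the time.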
From: seikath <se...@gm...> - 2013-03-26 14:31:18

Hello Ashutosh,

My initial setup was datanode1 as primary on all coordinators.

But the database created on coordinator1 was not visible by coordinator2 even though it was populated at both datanodes.

So I tested with one primary datanode on one coordinator, hoping the GTM will know that and will distribute the info.
Anyway, these are my first steps with XC, so I might have made some simple config error ..

I want to use the coordinators as entry points for loadbalanced external SQL requests

Kind regards,

Ivan

On 03/26/2013 03:21 PM, Ashutosh Bapat wrote:
>
> On Tue, Mar 26, 2013 at 7:46 PM, seikath <se...@gm...> wrote:
>
> Hello all,
>
> I have an XC setup of 4 AWS instances:
>
> =============================
> instance: prod-xc-coord1
>
> coordinator config at prod-xc-coord1
> listen_addresses = '*'
> port = 5432
> max_connections = 100
> shared_buffers = 120MB
> max_prepared_transactions = 100
> datestyle = 'iso, mdy'
> lc_messages = 'en_US.UTF-8'
> lc_monetary = 'en_US.UTF-8'
> lc_numeric = 'en_US.UTF-8'
> lc_time = 'en_US.UTF-8'
> default_text_search_config = 'pg_catalog.english'
> pooler_port = 6667
> min_pool_size = 1
> max_pool_size = 100
> max_coordinators = 16
> max_datanodes = 16
> gtm_host = '10.196.154.85'
> gtm_port = 6543
> pgxc_node_name = 'coord1'
> enforce_two_phase_commit = on
> enable_fast_query_shipping = on
> enable_remotejoin = on
> enable_remotegroup = on
>
> datanode config at prod-xc-coord1
> listen_addresses = '*'
> port = 6543
> max_connections = 100
> shared_buffers = 320MB
> max_prepared_transactions = 100
> datestyle = 'iso, mdy'
> lc_messages = 'en_US.UTF-8'
> lc_monetary = 'en_US.UTF-8'
> lc_numeric = 'en_US.UTF-8'
> lc_time = 'en_US.UTF-8'
> default_text_search_config = 'pg_catalog.english'
> max_coordinators = 16
> max_datanodes = 16
> gtm_host = '10.196.154.85'
> gtm_port = 6543
> pgxc_node_name = 'datanode1'
> enforce_two_phase_commit = on
> enable_fast_query_shipping = on
> enable_remotejoin = on
> enable_remotegroup = on
>
> =============================
> instance : prod-xc-coord2
>
> coordinator config at prod-xc-coord2
> listen_addresses = '*'
> port = 5432
> max_connections = 100
> superuser_reserved_connections = 3
> shared_buffers = 120MB
> max_prepared_transactions = 100
> datestyle = 'iso, mdy'
> lc_messages = 'en_US.UTF-8'
> lc_monetary = 'en_US.UTF-8'
> lc_numeric = 'en_US.UTF-8'
> lc_time = 'en_US.UTF-8'
> default_text_search_config = 'pg_catalog.english'
> pooler_port = 6667
> min_pool_size = 1
> max_pool_size = 100
> max_coordinators = 16
> max_datanodes = 16
> gtm_host = '10.196.154.85'
> gtm_port = 6543
> pgxc_node_name = 'coord2'
> enforce_two_phase_commit = on
> enable_fast_query_shipping = on
> enable_remotejoin = on
> enable_remotegroup = on
>
> datanode config at prod-xc-coord2
> listen_addresses = '*'
> port = 6543
> max_connections = 100
> shared_buffers = 320MB
> max_prepared_transactions = 100
> datestyle = 'iso, mdy'
> lc_messages = 'en_US.UTF-8'
> lc_monetary = 'en_US.UTF-8'
> lc_numeric = 'en_US.UTF-8'
> lc_time = 'en_US.UTF-8'
> default_text_search_config = 'pg_catalog.english'
> max_coordinators = 16
> max_datanodes = 16
> gtm_host = '10.196.154.85'
> gtm_port = 6543
> pgxc_node_name = 'datanode2'
> enforce_two_phase_commit = on
> enable_fast_query_shipping = on
> enable_remotejoin = on
> enable_remotegroup = on
>
> =============================
> instance prod-xc-gtm-proxy : IP 10.196.154.85
>
> proxy config:
> nodename = 'one'
> listen_addresses = '*'
> port = 6543
> gtm_host = '10.244.158.120'
> gtm_port = 5432
>
> =============================
> instance prod-xc-gtm : IP 10.244.158.120
> gtm config
> nodename = 'one'
> listen_addresses = '*'
> port = 5432
>
> =============================
>
> the pg_hba.conf of both coordinator and data nodes at both prod-xc-coord1 and prod-xc-coord2
> allows the other node to connect:
> =================================================
> pg_hba.conf at prod-xc-coord01 IP 10.245.114.8
> local all all trust
> host all all 127.0.0.1/32 trust
> host all all ::1/128 trust
> host all all 10.101.51.38/32 trust
>
> pg_hba.conf at prod-xc-coord02 IP 10.101.51.38
> local all all trust
> host all all 127.0.0.1/32 trust
> host all all ::1/128 trust
> host all all 10.245.114.8/32 trust
>
> the connectivity is tested and confirmed.
> =================================================
>
> initial nodes setup:
> prod-xc-coord01
> postgres=# select * from pgxc_node;
> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> -----------+-----------+-----------+--------------+----------------+------------------+------------
> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>
> prod-xc-coord02
> postgres=# select * from pgxc_node;
> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> -----------+-----------+-----------+--------------+----------------+------------------+-------------
> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>
> after that setup I was able to create databases from both coordinator nodes, but each coordinator does not see the database created by the other
> coordinator.
> > > > then tested and the node setup with one only primary node: > prod-xc-coord02 > postgres=# alter node datanode1 with (type = 'datanode', host = '10.245.114.8', port = 6543, primary=false,preferred=false); > ALTER NODE > postgres=# select pgxc_pool_reload(); > pgxc_pool_reload > ------------------ > t > (1 row) > postgres=# select * from pgxc_node; > node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id > -----------+-----------+-----------+--------------+----------------+------------------+------------- > coord2 | C | 5432 | localhost | f | f | -1197102633 > datanode2 | D | 6543 | localhost | f | t | -905831925 > datanode1 | D | 6543 | 10.245.114.8 | f | f | 888802358 > (3 rows) > > the result is the same. > > I know I am missing something simple as a config or open port, but at the moment I cant figure out whats missing in the setup. > > > What are you trying to do here? You have set primary node for datanode1 to false, which is what the query result displays. Can you please elaborate what's > going wrong? > > In general our plan is to use loadbalancer in frond of several instances hosting one coordinator and one datanode. > > I apologize for the ugly paste, but I am not sure if that mail list support html formatting. > > Kind regards, > > Ivan > > > > > > > > > ------------------------------------------------------------------------------ > Own the Future-Intel® Level Up Game Demo Contest 2013 > Rise to greatness in Intel's independent game demo contest. > Compete for recognition, cash, and the chance to get your game > on Steam. $5K grand prize plus 10 genre and skill prizes. > Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... 
<mailto:Pos...@li...> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > > -- > Best Wishes, > Ashutosh Bapat > EnterpriseDB Corporation > The Enterprise Postgres Company |
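[Editor's sketch] A likely cause of the symptom discussed above (a database created on one coordinator is not visible from the other) is that each coordinator's pgxc_node catalog lists only itself plus the two datanodes: neither coordinator is registered on its peer, so DDL issued on coord1 is never propagated to coord2. The following is a hedged sketch, not from the thread; node names, hosts, and ports are taken from the configs quoted in this thread and must be verified against your own setup:

```sql
-- Sketch only: register the peer coordinator on each node, then reload the pool.
-- Run on coord1 (10.245.114.8):
CREATE NODE coord2 WITH (TYPE = 'coordinator', HOST = '10.101.51.38', PORT = 5432);
SELECT pgxc_pool_reload();

-- Run on coord2 (10.101.51.38):
CREATE NODE coord1 WITH (TYPE = 'coordinator', HOST = '10.245.114.8', PORT = 5432);
SELECT pgxc_pool_reload();
```

After both registrations, utility statements such as CREATE DATABASE issued on either coordinator should reach the other; databases created before the registration still have to be created manually on the coordinator that missed them.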
From: Ashutosh B. <ash...@en...> - 2013-03-26 14:21:13
|
On Tue, Mar 26, 2013 at 7:46 PM, seikath <se...@gm...> wrote: > Hello all, > > I have an XC setup of 4 AWS instances: > > ============================= > instance: prod-xc-coord1 > > coordinator config at prod-xc-coord1 > listen_addresses = '*' > port = 5432 > max_connections = 100 > shared_buffers = 120MB > max_prepared_transactions = 100 > datestyle = 'iso, mdy' > lc_messages = 'en_US.UTF-8' > lc_monetary = 'en_US.UTF-8' > lc_numeric = 'en_US.UTF-8' > lc_time = 'en_US.UTF-8' > default_text_search_config = 'pg_catalog.english' > pooler_port = 6667 > min_pool_size = 1 > max_pool_size = 100 > max_coordinators = 16 > max_datanodes = 16 > gtm_host = '10.196.154.85' > gtm_port = 6543 > pgxc_node_name = 'coord1' > enforce_two_phase_commit = on > enable_fast_query_shipping = on > enable_remotejoin = on > enable_remotegroup = on > > datanode config at prod-xc-coord1 > listen_addresses = '*' > port = 6543 > max_connections = 100 > shared_buffers = 320MB > max_prepared_transactions = 100 > datestyle = 'iso, mdy' > lc_messages = 'en_US.UTF-8' > lc_monetary = 'en_US.UTF-8' > lc_numeric = 'en_US.UTF-8' > lc_time = 'en_US.UTF-8' > default_text_search_config = 'pg_catalog.english' > max_coordinators = 16 > max_datanodes = 16 > gtm_host = '10.196.154.85' > gtm_port = 6543 > pgxc_node_name = 'datanode1' > enforce_two_phase_commit = on > enable_fast_query_shipping = on > enable_remotejoin = on > enable_remotegroup = on > > > ============================= > instance : prod-xc-coord2 > > coordinator config at prod-xc-coord2 > listen_addresses = '*' > port = 5432 > max_connections = 100 > superuser_reserved_connections = 3 > shared_buffers = 120MB > max_prepared_transactions = 100 > datestyle = 'iso, mdy' > lc_messages = 'en_US.UTF-8' > lc_monetary = 'en_US.UTF-8' > lc_numeric = 'en_US.UTF-8' > lc_time = 'en_US.UTF-8' > default_text_search_config = 'pg_catalog.english' > pooler_port = 6667 > min_pool_size = 1 > max_pool_size = 100 > max_coordinators = 16 > max_datanodes = 16 > 
gtm_host = '10.196.154.85' > gtm_port = 6543 > pgxc_node_name = 'coord2' > enforce_two_phase_commit = on > enable_fast_query_shipping = on > enable_remotejoin = on > enable_remotegroup = on > > datanode config at prod-xc-coord2 > listen_addresses = '*' > port = 6543 > max_connections = 100 > shared_buffers = 320MB > max_prepared_transactions = 100 > datestyle = 'iso, mdy' > lc_messages = 'en_US.UTF-8' > lc_monetary = 'en_US.UTF-8' > lc_numeric = 'en_US.UTF-8' > lc_time = 'en_US.UTF-8' > default_text_search_config = 'pg_catalog.english' > max_coordinators = 16 > max_datanodes = 16 > gtm_host = '10.196.154.85' > gtm_port = 6543 > pgxc_node_name = 'datanode2' > enforce_two_phase_commit = on > enable_fast_query_shipping = on > enable_remotejoin = on > enable_remotegroup = on > > ============================= > instance prod-xc-gtm-proxy : IP 10.196.154.85 > > proxy config: > nodename = 'one' > listen_addresses = '*' > port = 6543 > gtm_host = '10.244.158.120' > gtm_port = 5432 > > ============================= > instance prod-xc-gtm : IP 10.244.158.120 > gtm config > nodename = 'one' > listen_addresses = '*' > port = 5432 > > > ============================= > > the pg_hba,conf of both coordinator and data nodes at both prod-xc-coord1 > and prod-xc-coord2 > allows the other node to connect: > ================================================= > pg_hba,conf at prod-xc-coord01 IP 10.245.114.8 > local all all trust > host all all 127.0.0.1/32 trust > host all all ::1/128 trust > host all all 10.101.51.38/32 trust > > pg_hba,conf at prod-xc-coord02 IP 10.101.51.38 > local all all trust > host all all 127.0.0.1/32 trust > host all all ::1/128 trust > host all all 10.245.114.8/32 trust > > the connectivity is tested and confirmed. 
> ================================================= > > initial nodes setup: > prod-xc-coord01 > postgres=# select * from pgxc_node; > node_name | node_type | node_port | node_host | nodeis_primary | > nodeis_preferred | node_id > > -----------+-----------+-----------+--------------+----------------+------------------+------------ > coord1 | C | 5432 | localhost | f | f > | 1885696643 > datanode1 | D | 6543 | localhost | t | t > | 888802358 > datanode2 | D | 6543 | 10.101.51.38 | f | f > | -905831925 > > prod-xc-coord02 > postgres=# select * from pgxc_node; > node_name | node_type | node_port | node_host | nodeis_primary | > nodeis_preferred | node_id > > -----------+-----------+-----------+--------------+----------------+------------------+------------- > coord2 | C | 5432 | localhost | f | f > | -1197102633 > datanode1 | D | 6543 | 10.245.114.8 | t | f > | 888802358 > datanode2 | D | 6543 | localhost | f | t > | -905831925 > > after that setup I was able to create tabases from the both coordinator > nodes, but each coordinator does not see the database created by the other > coordinator. > > > > then tested and the node setup with one only primary node: > prod-xc-coord02 > postgres=# alter node datanode1 with (type = 'datanode', host = > '10.245.114.8', port = 6543, primary=false,preferred=false); > ALTER NODE > postgres=# select pgxc_pool_reload(); > pgxc_pool_reload > ------------------ > t > (1 row) > postgres=# select * from pgxc_node; > node_name | node_type | node_port | node_host | nodeis_primary | > nodeis_preferred | node_id > > -----------+-----------+-----------+--------------+----------------+------------------+------------- > coord2 | C | 5432 | localhost | f | f > | -1197102633 > datanode2 | D | 6543 | localhost | f | t > | -905831925 > datanode1 | D | 6543 | 10.245.114.8 | f | f > | 888802358 > (3 rows) > > the result is the same. 
> > I know I am missing something simple as a config or open port, but at the > moment I cant figure out whats missing in the setup. > > What are you trying to do here? You have set primary node for datanode1 to false, which is what the query result displays. Can you please elaborate what's going wrong? > In general our plan is to use loadbalancer in frond of several instances > hosting one coordinator and one datanode. > > I apologize for the ugly paste, but I am not sure if that mail list > support html formatting. > > Kind regards, > > Ivan > > > > > > > > > > ------------------------------------------------------------------------------ > Own the Future-Intel® Level Up Game Demo Contest 2013 > Rise to greatness in Intel's independent game demo contest. > Compete for recognition, cash, and the chance to get your game > on Steam. $5K grand prize plus 10 genre and skill prizes. > Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
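[Editor's sketch] One way to check whether the coordinators actually know about each other is to inspect the pgxc_node catalog as seen by the remote node; Postgres-XC provides EXECUTE DIRECT for this. The exact syntax below is per the XC 1.0-era documentation and should be verified against your version; the node name comes from the thread:

```sql
-- Sketch: from coord1, read the node catalog as coord2 sees it.
-- If coord1 has no entry for coord2 in its own pgxc_node, this statement
-- fails, which itself confirms the missing registration.
EXECUTE DIRECT ON (coord2)
  'SELECT node_name, node_type, node_host, node_port FROM pgxc_node';
```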
From: seikath <se...@gm...> - 2013-03-26 14:16:53
|
Hello all, I have an XC setup of 4 AWS instances: ============================= instance: prod-xc-coord1 coordinator config at prod-xc-coord1 listen_addresses = '*' port = 5432 max_connections = 100 shared_buffers = 120MB max_prepared_transactions = 100 datestyle = 'iso, mdy' lc_messages = 'en_US.UTF-8' lc_monetary = 'en_US.UTF-8' lc_numeric = 'en_US.UTF-8' lc_time = 'en_US.UTF-8' default_text_search_config = 'pg_catalog.english' pooler_port = 6667 min_pool_size = 1 max_pool_size = 100 max_coordinators = 16 max_datanodes = 16 gtm_host = '10.196.154.85' gtm_port = 6543 pgxc_node_name = 'coord1' enforce_two_phase_commit = on enable_fast_query_shipping = on enable_remotejoin = on enable_remotegroup = on datanode config at prod-xc-coord1 listen_addresses = '*' port = 6543 max_connections = 100 shared_buffers = 320MB max_prepared_transactions = 100 datestyle = 'iso, mdy' lc_messages = 'en_US.UTF-8' lc_monetary = 'en_US.UTF-8' lc_numeric = 'en_US.UTF-8' lc_time = 'en_US.UTF-8' default_text_search_config = 'pg_catalog.english' max_coordinators = 16 max_datanodes = 16 gtm_host = '10.196.154.85' gtm_port = 6543 pgxc_node_name = 'datanode1' enforce_two_phase_commit = on enable_fast_query_shipping = on enable_remotejoin = on enable_remotegroup = on ============================= instance : prod-xc-coord2 coordinator config at prod-xc-coord2 listen_addresses = '*' port = 5432 max_connections = 100 superuser_reserved_connections = 3 shared_buffers = 120MB max_prepared_transactions = 100 datestyle = 'iso, mdy' lc_messages = 'en_US.UTF-8' lc_monetary = 'en_US.UTF-8' lc_numeric = 'en_US.UTF-8' lc_time = 'en_US.UTF-8' default_text_search_config = 'pg_catalog.english' pooler_port = 6667 min_pool_size = 1 max_pool_size = 100 max_coordinators = 16 max_datanodes = 16 gtm_host = '10.196.154.85' gtm_port = 6543 pgxc_node_name = 'coord2' enforce_two_phase_commit = on enable_fast_query_shipping = on enable_remotejoin = on enable_remotegroup = on datanode config at prod-xc-coord2 
listen_addresses = '*' port = 6543 max_connections = 100 shared_buffers = 320MB max_prepared_transactions = 100 datestyle = 'iso, mdy' lc_messages = 'en_US.UTF-8' lc_monetary = 'en_US.UTF-8' lc_numeric = 'en_US.UTF-8' lc_time = 'en_US.UTF-8' default_text_search_config = 'pg_catalog.english' max_coordinators = 16 max_datanodes = 16 gtm_host = '10.196.154.85' gtm_port = 6543 pgxc_node_name = 'datanode2' enforce_two_phase_commit = on enable_fast_query_shipping = on enable_remotejoin = on enable_remotegroup = on ============================= instance prod-xc-gtm-proxy : IP 10.196.154.85 proxy config: nodename = 'one' listen_addresses = '*' port = 6543 gtm_host = '10.244.158.120' gtm_port = 5432 ============================= instance prod-xc-gtm : IP 10.244.158.120 gtm config nodename = 'one' listen_addresses = '*' port = 5432 ============================= the pg_hba.conf of both coordinator and data nodes at both prod-xc-coord1 and prod-xc-coord2 allows the other node to connect: ================================================= pg_hba.conf at prod-xc-coord01 IP 10.245.114.8 local all all trust host all all 127.0.0.1/32 trust host all all ::1/128 trust host all all 10.101.51.38/32 trust pg_hba.conf at prod-xc-coord02 IP 10.101.51.38 local all all trust host all all 127.0.0.1/32 trust host all all ::1/128 trust host all all 10.245.114.8/32 trust the connectivity is tested and confirmed. 
================================================= initial nodes setup: prod-xc-coord01 postgres=# select * from pgxc_node; node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id -----------+-----------+-----------+--------------+----------------+------------------+------------ coord1 | C | 5432 | localhost | f | f | 1885696643 datanode1 | D | 6543 | localhost | t | t | 888802358 datanode2 | D | 6543 | 10.101.51.38 | f | f | -905831925 prod-xc-coord02 postgres=# select * from pgxc_node; node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id -----------+-----------+-----------+--------------+----------------+------------------+------------- coord2 | C | 5432 | localhost | f | f | -1197102633 datanode1 | D | 6543 | 10.245.114.8 | t | f | 888802358 datanode2 | D | 6543 | localhost | f | t | -905831925 after that setup I was able to create databases from both coordinator nodes, but each coordinator does not see the database created by the other coordinator. I then tested the node setup with only one primary node: prod-xc-coord02 postgres=# alter node datanode1 with (type = 'datanode', host = '10.245.114.8', port = 6543, primary=false,preferred=false); ALTER NODE postgres=# select pgxc_pool_reload(); pgxc_pool_reload ------------------ t (1 row) postgres=# select * from pgxc_node; node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id -----------+-----------+-----------+--------------+----------------+------------------+------------- coord2 | C | 5432 | localhost | f | f | -1197102633 datanode2 | D | 6543 | localhost | f | t | -905831925 datanode1 | D | 6543 | 10.245.114.8 | f | f | 888802358 (3 rows) the result is the same. I know I am missing something simple, such as a config setting or an open port, but at the moment I can't figure out what's missing in the setup. 
In general our plan is to use a load balancer in front of several instances, each hosting one coordinator and one datanode. I apologize for the ugly paste, but I am not sure whether this mailing list supports HTML formatting. Kind regards, Ivan |
From: Ashutosh B. <ash...@en...> - 2013-03-25 03:54:48
|
Hi Arni, If you have development resources, we will welcome your patches for improvements in this area. On Fri, Mar 22, 2013 at 7:17 PM, Arni Sumarlidason < Arn...@md...> wrote: > Thank you for your responses and support! We will find a work around! > > Best, > > *From:* Ashutosh Bapat [mailto:ash...@en...] > *Sent:* Friday, March 22, 2013 10:56 AM > > *To:* Arni Sumarlidason > *Cc:* Koichi Suzuki; pos...@li... > *Subject:* Re: [Postgres-xc-general] Planner join logic > > We haven't yet worked to optimize any case of inheritance. It's not that > it can't be done, it's just there are more pressing issues to be solved. > > On Fri, Mar 22, 2013 at 6:25 PM, Arni Sumarlidason < > Arn...@md...> wrote: > > Is this in the case of joins, or will we have the same issue with a flat > table? > > Please advise. > > *From:* Ashutosh Bapat [mailto:ash...@en...<ash...@en...>] > > *Sent:* Friday, March 22, 2013 1:14 AM > > > *To:* Arni Sumarlidason > *Cc:* Koichi Suzuki; pos...@li... > *Subject:* Re: [Postgres-xc-general] Planner join logic > > In case children table, we need an Append node covering scans on each > table. We haven't optimized planner for the case of inheritance yet. > > -- > Best Wishes, > Ashutosh Bapat > EnterpriseDB Corporation > The Enterprise Postgres Company > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
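[Editor's sketch] The Append behaviour described in this thread can be reproduced with a toy inheritance setup. The table and column names below are hypothetical, not from the thread, and the DDL is a sketch for an XC 1.0-style cluster:

```sql
-- Sketch: a distributed parent with one inherited child, mirroring the
-- weekly-partition layout discussed in this thread.
CREATE TABLE events (
    id           bigint  NOT NULL,
    seq          integer NOT NULL,
    date_updated timestamptz
) DISTRIBUTE BY HASH (id);

CREATE TABLE events_20130318 (
    CHECK (date_updated >= '2013-03-18' AND date_updated < '2013-03-25')
) INHERITS (events);

-- The planner expands the parent into an Append over scans of the parent
-- and each child; per this thread, XC does not yet optimize join shipping
-- for that inheritance case.
EXPLAIN SELECT * FROM events WHERE date_updated >= '2013-03-20';
```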
From: Arni S. <Arn...@md...> - 2013-03-22 15:17:56
|
Thank you for your responses and support! We will find a work around! Best, From: Ashutosh Bapat [mailto:ash...@en...] Sent: Friday, March 22, 2013 10:56 AM To: Arni Sumarlidason Cc: Koichi Suzuki; pos...@li... Subject: Re: [Postgres-xc-general] Planner join logic We haven't yet worked to optimize any case of inheritance. It's not that it can't be done, it's just there are more pressing issues to be solved. On Fri, Mar 22, 2013 at 6:25 PM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: Is this in the case of joins, or will we have the same issue with a flat table? Please advise. From: Ashutosh Bapat [mailto:ash...@en...] Sent: Friday, March 22, 2013 1:14 AM To: Arni Sumarlidason Cc: Koichi Suzuki; pos...@li...<mailto:pos...@li...> Subject: Re: [Postgres-xc-general] Planner join logic In case children table, we need an Append node covering scans on each table. We haven't optimized planner for the case of inheritance yet. -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Ashutosh B. <ash...@en...> - 2013-03-22 14:56:35
|
We haven't yet worked to optimize any case of inheritance. It's not that it can't be done, it's just there are more pressing issues to be solved. On Fri, Mar 22, 2013 at 6:25 PM, Arni Sumarlidason < Arn...@md...> wrote: > Is this in the case of joins, or will we have the same issue with a flat > table? > > Please advise. > > *From:* Ashutosh Bapat [mailto:ash...@en...<ash...@en...>] > > *Sent:* Friday, March 22, 2013 1:14 AM > > *To:* Arni Sumarlidason > *Cc:* Koichi Suzuki; pos...@li... > *Subject:* Re: [Postgres-xc-general] Planner join logic > > In case children table, we need an Append node covering scans on each > table. We haven't optimized planner for the case of inheritance yet. > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
From: Arni S. <Arn...@md...> - 2013-03-22 14:25:20
|
Is this in the case of joins, or will we have the same issue with a flat table? Please advise. From: Ashutosh Bapat [mailto:ash...@en...] Sent: Friday, March 22, 2013 1:14 AM To: Arni Sumarlidason Cc: Koichi Suzuki; pos...@li...<mailto:pos...@li...> Subject: Re: [Postgres-xc-general] Planner join logic In case children table, we need an Append node covering scans on each table. We haven't optimized planner for the case of inheritance yet. |
From: Arni S. <Arn...@md...> - 2013-03-22 14:24:26
|
Is this in the case of joins, or will we have the same issue with a flat table? Please advise. From: Ashutosh Bapat [mailto:ash...@en...] Sent: Friday, March 22, 2013 1:14 AM To: Arni Sumarlidason Cc: Koichi Suzuki; pos...@li... Subject: Re: [Postgres-xc-general] Planner join logic In case children table, we need an Append node covering scans on each table. We haven't optimized planner for the case of inheritance yet. On Fri, Mar 22, 2013 at 9:10 AM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: Here we go, Table "public.table" Column | Type | Modifiers | Storage | Stats target | Description ---------------------------+--------------------------+-----------+----------+--------------+------------- id | bigint | not null | plain | | seq | integer | not null | plain | | text | character varying(256) | | extended | | date_updated | timestamp with time zone | | plain | | Indexes: "table_pkey" PRIMARY KEY, btree (id, seq) "idx_table_dateupdated" btree (date_updated) Child tables: table_1202, table_1203, table_1204, table_1205, table_1206, table_1207, table_1208, table_1209, table_1210, table_1211, table_20121111, table_20121118, table_20121125, table_20121202, table_20121209, table_20121216, table_20121223, table_20121230, table_20130106, table_20130113, table_20130120, table_20130127, table_20130203, table_20130210, table_20130217, table_20130224, table_20130303, table_20130310, table_20130317, table_20130324, table_20130331, table_20130407, table_20130414, table_20130421, table_20130428 Has OIDs: no Table "public.table_2" Column | Type | Modifiers | Storage | Stats target | Description ------------------------------------+--------------------------+-----------+----------+--------------+------------- id | bigint | not null | plain | | text_updated | character varying(1024) | | extended | | date_updated | timestamp with time zone | | plain | | seq | integer | not null | plain | | Indexes: "table_2_pkey" PRIMARY KEY, btree (id, seq) "idx_table2_dateupdated" 
btree (date_updated) "idx_table2_id" btree (id) Child tables: table_2_120806, table_2_120813, table_2_120820, table_2_120827, table_2_120903, table_2_120910, table_2_120917, table_2_120924, table_2_121001, table_2_121008, table_2_121015, table_2_121022, table_2_121029, table_2_121105, table_2_121112, table_2_121119, table_2_121126, table_2_121203, table_2_121210, table_2_121217, table_2_121224, table_2_121231, table_2_130107, table_2_130114, table_2_130121, table_2_130128, table_2_130204, table_2_130211, table_2_130218, table_2_130225, table_2_130304, table_2_130311, table_2_130318, table_2_130325, table_2_130401, table_2_130408, table_2_130415, table_2_130422, table_2_130429, table_2_130506, table_2_130513, table_2_130520, table_2_130527, table_2_130603, table_2_130610, table_2_130617, table_2_130624, table_2_130701, table_2_130708, table_2_130715, table_2_130722, table_2_130729, table_2_130805, table_2_130812, table_2_130819, table_2_130826, table_2_130902, table_2_130909, table_2_130916, table_2_130923, table_2_130930, table_2_131007, table_2_131014, table_2_131021, table_2_131028, table_2_131104, table_2_131111, table_2_131118, table_2_131125, table_2_131202, table_2_131209, table_2_131216, table_2_131223, table_2_131230, table_2_140106, table_2_140113, table_2_140120, table_2_140127, table_2_140203, table_2_140210, table_2_140217, table_2_140224, table_2_140303, table_2_140310, table_2_140317, table_2_140324, table_2_140331, table_2_140407, table_2_140414, table_2_140421, table_2_140428, table_2_140505, table_2_140512, table_2_140519, table_2_140526, table_2_140602, table_2_140609, table_2_140616, table_2_140623, table_2_140630, table_2_140707, table_2_140714, table_2_140721, table_2_140728, table_2_140804, table_2_140811, table_2_140818, table_2_140825, table_2_140901, table_2_140908, table_2_140915, table_2_140922, table_2_140929, table_2_141006, table_2_141013, table_2_141020, table_2_141027, table_2_141103, table_2_141110, table_2_141117, table_2_141124, 
table_2_141201, table_2_141208, table_2_141215, table_2_141222, table_2_141229, table_2_150105, table_2_150112, table_2_150119, table_2_150126 Has OIDs: no -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
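[Editor's sketch] Weekly child tables like those listed above are typically created with a per-week CHECK constraint so that constraint exclusion can prune them at plan time. The real child definitions are not shown in the thread, so the constraint column and bounds below are assumptions for illustration:

```sql
-- Assumed layout: one child of public.table_2 per week, keyed on date_updated.
CREATE TABLE table_2_130204 (
    CHECK (date_updated >= TIMESTAMPTZ '2013-02-04'
       AND date_updated <  TIMESTAMPTZ '2013-02-11')
) INHERITS (table_2);

-- With constraint_exclusion enabled and a matching WHERE clause, the Append
-- node discussed in this thread covers only the children that can match,
-- instead of all ~130 of them.
SET constraint_exclusion = partition;
EXPLAIN SELECT * FROM table_2
 WHERE date_updated >= '2013-02-05' AND date_updated < '2013-02-06';
```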
From: Ashutosh B. <ash...@en...> - 2013-03-22 05:14:00
|
In case children table, we need an Append node covering scans on each table. We haven't optimized planner for the case of inheritance yet. On Fri, Mar 22, 2013 at 9:10 AM, Arni Sumarlidason < Arn...@md...> wrote: > Here we go, [...] -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Enterprise Postgres Company |
From: Arni S. <Arn...@md...> - 2013-03-22 05:10:50
|
Here we go,

                         Table "public.table"
    Column    |           Type           | Modifiers | Storage  | Stats target | Description
--------------+--------------------------+-----------+----------+--------------+-------------
 id           | bigint                   | not null  | plain    |              |
 seq          | integer                  | not null  | plain    |              |
 text         | character varying(256)   |           | extended |              |
 date_updated | timestamp with time zone |           | plain    |              |
Indexes:
    "table_pkey" PRIMARY KEY, btree (id, seq)
    "idx_table_dateupdated" btree (date_updated)
Child tables: table_1202, table_1203, table_1204, table_1205, table_1206,
              table_1207, table_1208, table_1209, table_1210, table_1211,
              table_20121111, table_20121118, table_20121125, table_20121202,
              table_20121209, table_20121216, table_20121223, table_20121230,
              table_20130106, table_20130113, table_20130120, table_20130127,
              table_20130203, table_20130210, table_20130217, table_20130224,
              table_20130303, table_20130310, table_20130317, table_20130324,
              table_20130331, table_20130407, table_20130414, table_20130421,
              table_20130428
Has OIDs: no

                        Table "public.table_2"
    Column    |           Type           | Modifiers | Storage  | Stats target | Description
--------------+--------------------------+-----------+----------+--------------+-------------
 id           | bigint                   | not null  | plain    |              |
 text_updated | character varying(1024)  |           | extended |              |
 date_updated | timestamp with time zone |           | plain    |              |
 seq          | integer                  | not null  | plain    |              |
Indexes:
    "table_2_pkey" PRIMARY KEY, btree (id, seq)
    "idx_table2_dateupdated" btree (date_updated)
    "idx_table2_id" btree (id)
Child tables: table_2_120806, table_2_120813, table_2_120820, table_2_120827, table_2_120903,
              table_2_120910, table_2_120917, table_2_120924, table_2_121001, table_2_121008,
              table_2_121015, table_2_121022, table_2_121029, table_2_121105, table_2_121112,
              table_2_121119, table_2_121126, table_2_121203, table_2_121210, table_2_121217,
              table_2_121224, table_2_121231, table_2_130107, table_2_130114, table_2_130121,
              table_2_130128, table_2_130204, table_2_130211, table_2_130218, table_2_130225,
              table_2_130304, table_2_130311, table_2_130318, table_2_130325, table_2_130401,
              table_2_130408, table_2_130415, table_2_130422, table_2_130429, table_2_130506,
              table_2_130513, table_2_130520, table_2_130527, table_2_130603, table_2_130610,
              table_2_130617, table_2_130624, table_2_130701, table_2_130708, table_2_130715,
              table_2_130722, table_2_130729, table_2_130805, table_2_130812, table_2_130819,
              table_2_130826, table_2_130902, table_2_130909, table_2_130916, table_2_130923,
              table_2_130930, table_2_131007, table_2_131014, table_2_131021, table_2_131028,
              table_2_131104, table_2_131111, table_2_131118, table_2_131125, table_2_131202,
              table_2_131209, table_2_131216, table_2_131223, table_2_131230, table_2_140106,
              table_2_140113, table_2_140120, table_2_140127, table_2_140203, table_2_140210,
              table_2_140217, table_2_140224, table_2_140303, table_2_140310, table_2_140317,
              table_2_140324, table_2_140331, table_2_140407, table_2_140414, table_2_140421,
              table_2_140428, table_2_140505, table_2_140512, table_2_140519, table_2_140526,
              table_2_140602, table_2_140609, table_2_140616, table_2_140623, table_2_140630,
              table_2_140707, table_2_140714, table_2_140721, table_2_140728, table_2_140804,
              table_2_140811, table_2_140818, table_2_140825, table_2_140901, table_2_140908,
              table_2_140915, table_2_140922, table_2_140929, table_2_141006, table_2_141013,
              table_2_141020, table_2_141027, table_2_141103, table_2_141110, table_2_141117,
              table_2_141124, table_2_141201, table_2_141208, table_2_141215, table_2_141222,
              table_2_141229, table_2_150105, table_2_150112, table_2_150119, table_2_150126
Has OIDs: no |
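The child-table names above follow a weekly pattern, which suggests table_2 is range-partitioned on date_updated using PostgreSQL inheritance. The thread does not show the child DDL, so the following is only a hypothetical sketch of how one such child is typically declared (the CHECK bounds, time zone, and INHERITS clause are all assumptions):

```sql
-- Hypothetical sketch of one weekly child of table_2 (actual DDL not shown
-- in the thread). The CHECK constraint lets constraint_exclusion prune
-- children outside the queried date range; the name follows the
-- table_2_YYMMDD pattern listed above.
CREATE TABLE table_2_130107 (
    CHECK (date_updated >= TIMESTAMP WITH TIME ZONE '2013-01-07 00:00:00-05'
       AND date_updated <  TIMESTAMP WITH TIME ZONE '2013-01-14 00:00:00-05')
) INHERITS (table_2);
```

With children like these present, any scan of table_2 expands into an Append over the parent plus every non-excluded child, which is central to the planner behavior discussed in this thread.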
From: Arni S. <Arn...@md...> - 2013-03-22 05:09:02
|
[\d+ output for "public.table" and "public.table_2" trimmed; it is identical to the listing in the 05:10 message above]

From: Ashutosh Bapat [mailto:ash...@en...]
Sent: Friday, March 22, 2013 12:57 AM
To: Arni Sumarlidason
Cc: Koichi Suzuki; pos...@li...
Subject: Re: [Postgres-xc-general] Planner join logic

Hi Arni,
By any chance, you have children inheriting from table1 and table2? Can you please send \d+ output on these tables?
On Fri, Mar 22, 2013 at 8:49 AM, Arni Sumarlidason <Arn...@md...> wrote:

Hello Mr. Bapat,

I modified the output a little, but only the fields selected,

                                  QUERY PLAN
--------------------------------------------------------------------------------
 Hash Join  (cost=1.64..5.35 rows=236 width=10726)
   Output: t.id, t.seq, t.text, t.date_updated
   Hash Cond: (t.id = l.id)
   ->  Append  (cost=0.00..0.00 rows=36000 width=5638)
         ->  Data Node Scan on table "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=5638)
               Output: t.id, t.seq, t.text, t.date_updated
               Node/s: datanode01d, datanode02d, datanode03d, datanode04d, datanode05d, datanode06d, datanode07d, datanode08d, datanode09d, datanode10d, datanode11d, datanode12d, datanode13d, datanode14d, datanode15d, datanode16d, datanode17d, datanode18d, datanode19d, datanode20d
               Remote query: SELECT id, seq, text, date_updated FROM ONLY table t WHERE true

         + 10-20 similar

   ->  Hash  (cost=0.00..0.00 rows=131000 width=5088)
         Output: l.id, l.date_updated, l.seq, l.text
         ->  Append  (cost=0.00..0.00 rows=131000 width=5088)
               ->  Data Node Scan on table_2 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=5088)
                     Output: l.id, l.date_updated, l.seq, l.text
                     Node/s: datanode01d, datanode02d, datanode03d, datanode04d, datanode05d, datanode06d, datanode07d, datanode08d, datanode09d, datanode10d, datanode11d, datanode12d, datanode13d, datanode14d, datanode15d, datanode16d, datanode17d, datanode18d, datanode19d, datanode20d
                     Remote query: SELECT id, date_updated, seq, text FROM ONLY table_2 l WHERE ((date_updated >= '2012-12-15 00:00:00-05'::timestamp with time zone) AND (date_updated < '2013-01-01 00:00:00-05'::timestamp with time zone))

               + 30-50 Similar

Thank you for your time,

From: Ashutosh Bapat [mailto:ash...@en...]
Sent: Friday, March 22, 2013 12:28 AM
To: Arni Sumarlidason
Cc: Koichi Suzuki; pos...@li...
Subject: Re: [Postgres-xc-general] Planner join logic

Hi Arni,
Can you please send an explain verbose output of this query?

On Thu, Mar 21, 2013 at 5:57 PM, Arni Sumarlidason <Arn...@md...> wrote:

As always, thank you for your responses, appreciate it!

We are using trunk from around Feb. 18th.

Here are some tables that replicate the issue,

create table table1(
    id bigint,
    seq integer,
    text data(256),
    created_at timestamp with time zone,
    PRIMARY KEY(id, seq)) DISTRIBUTE BY HASH(id);

create table table2(
    id bigint,
    seq integer,
    text data_complex(256),
    update_time timestamp with time zone,
    PRIMARY KEY(id, seq)) DISTRIBUTE BY HASH(id);

Select * from table1 t1, table2 t2 where t2.update_time >= '2012-12-15' and t2.update_time < '2013-01-01' and t2.id = t1.id

Thank you,
Arni

--
Best Wishes,
Ashutosh Bapat
EntepriseDB Corporation
The Enterprise Postgres Company |
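The remote queries in the plan above use FROM ONLY, meaning each Append branch ships a query restricted to a single table without its inheritance children. The ONLY keyword has the same meaning in stock PostgreSQL, which can be used to compare parent-only and parent-plus-children scans:

```sql
-- ONLY scans just the named table; without it, inheritance children are included
SELECT count(*) FROM ONLY table_2;  -- rows stored in the parent itself
SELECT count(*) FROM table_2;       -- parent rows plus all child-table rows
```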
From: Ashutosh B. <ash...@en...> - 2013-03-22 04:57:16
|
Hi Arni,
By any chance, you have children inheriting from table1 and table2? Can you please send \d+ output on these tables?

On Fri, Mar 22, 2013 at 8:49 AM, Arni Sumarlidason <Arn...@md...> wrote:
> [quoted EXPLAIN output and earlier thread trimmed; the same text appears in full in Arni Sumarlidason's 05:09 message above]

--
Best Wishes,
Ashutosh Bapat
EntepriseDB Corporation
The Enterprise Postgres Company |
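Whether table1 and table2 have inheritance children, as asked above, can also be checked directly from the system catalogs; a minimal sketch (table names taken from the thread):

```sql
-- List any children inheriting from table1 or table2 via pg_inherits
SELECT p.relname AS parent, c.relname AS child
FROM pg_inherits i
JOIN pg_class p ON p.oid = i.inhparent
JOIN pg_class c ON c.oid = i.inhrelid
WHERE p.relname IN ('table1', 'table2')
ORDER BY parent, child;
```

If this returns no rows, the Append nodes in the plan would have to come from somewhere other than inheritance.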