From: Theodotos A. <th...@ub...> - 2013-03-17 06:08:49
|
Hi guys,

I am using Koichi's HA guide to set up a two-node pgxc cluster: wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf

I am on page 48, "configuring coordinator slaves". I have this config:

$ cat CN2/recovery.conf
standby_mode = on
primary_conninfo = 'host = node01 port = 5432 user = postgres-xc'
application_name = 'cnmaster01'
restore_command = 'cp CN2_ArchLog/%f %p'
archive_cleanup_command = 'pg_archivecleanup CN2_ArchLog %r'

$ cat CN2/postgresql.conf
listen_addresses = '*'
port = 5433
max_connections = 150
shared_buffers = 1024MB
work_mem = 64MB
maintenance_work_mem = 400MB
wal_level = hot_standby
synchronous_commit = on
archive_mode = on
archive_command = 'rsync %p node02:CN2_ArchLog/%f'
max_wal_senders = 5
synchronous_standby_names = 'cnmaster01'
hot_standby = on
cpu_tuple_cost = 0.1
effective_cache_size = 2048MB
logging_collector = on
log_rotation_age = 1d
log_rotation_size = 1GB
log_min_duration_statement = 250ms
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
pooler_port = 6668
gtm_host = 'localhost'
gtm_port = 6666
pgxc_node_name = 'cnmaster01'

When I try to start the slave node:

$ pg_ctl start -Z coordinator -D CN2

I get this in the logs:

LOG: database system was interrupted; last known up at 2013-03-17 06:29:44 EET
FATAL: unrecognized recovery parameter "application_name"
LOG: startup process (PID 6562) exited with exit code 1
LOG: aborting startup due to startup process failure

Another question: why do I have to start the server:

pg_ctl start -Z coordinator -D coordSlaveDir

then reconfigure:

cat >> coordMasterDir/postgresql.conf <<EOF
synchronous_commit = on
synchronous_standby_names = 'coordName'
EOF

cat >> coordSlaveDir/postgresql.conf <<EOF
hot_standby = on
port = coordPort
EOF

and then reload?

pg_ctl reload -Z coordinator -D coordMasterDir

Can I just put all the configuration in place at once and just start the server?

I am using postgres-xc 1.0.2
|
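The FATAL above occurs because application_name is not a recovery.conf parameter in PostgreSQL (and hence not in Postgres-XC 1.0.x either); it is a libpq connection option, and the name the master matches against synchronous_standby_names is taken from the application_name of the walreceiver connection. A minimal corrected recovery.conf for this setup, assuming the same host and user names as above, would therefore look something like:

    standby_mode = on
    # application_name moves inside the connection string, so the standby
    # reports itself as 'cnmaster01' and matches synchronous_standby_names
    primary_conninfo = 'host=node01 port=5432 user=postgres-xc application_name=cnmaster01'
    restore_command = 'cp CN2_ArchLog/%f %p'
    archive_cleanup_command = 'pg_archivecleanup CN2_ArchLog %r'

On the second question, the guide's start-then-reload ordering likely exists because, once synchronous_standby_names is set on the master and synchronous_commit = on, commits block until a matching standby is connected; bringing the slave up first avoids that window.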
|
From: Chris B. <cb...@in...> - 2013-03-16 07:58:43
|
Thanks, that's a big help. It confirms what I expected.

________________________________________
From: Koichi Suzuki [koi...@gm...]
Sent: March 15, 2013 8:22 PM
To: Chris Brown
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] XC Configurations

One example is: 5 gtm proxies, 5 datanodes and 5 coordinators, one of each per box, with the gtm master on one of the boxes and the gtm slave on another. GTM does not need much CPU/memory but deals with a rather big amount of network traffic, so gtm/gtm slave should be given a separate network segment. Each datanode/coordinator can have its slaves on the next box.

Please take a look at the pgxc_ctl bash script at https://github.com/koichi-szk/PGXC-Tools/tree/master/pgxc_ctl

It comes with a similar case using 4 boxes (gtm and gtm slave are on a separate box, but you can configure them on some of the 5 boxes you have).

Hope it helps.

Regards;
----------
Koichi Suzuki

2013/3/15 Chris Brown <cb...@in...>:
> Hi all,
>
> I'm doing some preliminary testing of XC (now that I have it working) and
> I'm wondering what sort of configurations are recommended.
>
> For example, let's say I want write performance on well segregated data (few
> cross-node writes) and I have 3 boxes to be the server farm. How would you
> recommend that I configure those?
>
> Now let's say I have 5 boxes and want redundancy, how do I configure it and
> how do I achieve redundancy?
>
> I know every configuration is going to be a bit different but I'm looking
> for guidance on best practices.
>
> Thanks,
> Chris...
|
|
From: Arni S. <Arn...@md...> - 2013-03-16 05:12:39
|
Thank you for your response,

Table creation:

create table Table(
    id bigint,
    seq integer,
    stuff text,
    PRIMARY KEY(id, seq))
DISTRIBUTE BY HASH(id);

And some partitions:

CREATE TABLE table_20121125 (CHECK (((created_at >= '2012-11-25'::date) AND (created_at < '2012-12-02'::date)))) INHERITS (Table) DISTRIBUTE BY HASH(id);
CREATE TABLE table_20121202 (CHECK (((created_at >= '2012-12-02'::date) AND (created_at < '2012-12-09'::date)))) INHERITS (Table) DISTRIBUTE BY HASH(id);
CREATE TABLE table_20121209 (CHECK (((created_at >= '2012-12-09'::date) AND (created_at < '2012-12-16'::date)))) INHERITS (Table) DISTRIBUTE BY HASH(id);

Best,
Arni

-----Original Message-----
From: Koichi Suzuki [mailto:koi...@gm...]
Sent: Friday, March 15, 2013 11:30 PM
To: Arni Sumarlidason
Cc: Ashutosh Bapat; pos...@li...
Subject: Re: [Postgres-xc-general] Configuration Error - or?

The plan looks strange, because it contains too many datanode scans. Ashutosh, could you review whether the plan is reasonable?

Arni; it will be very helpful if we get the CREATE TABLE statements you used for the involved tables. The table distribution information helps a lot to see what's going on.

Best;
----------
Koichi Suzuki

2013/3/15 Arni Sumarlidason <Arn...@md...>:
> Thank you for help,
>
> I used the script authored by Koichi Suzuki to deploy the cluster, so
> each node contains a data node, coordinator, and gtm proxy.
> We have 20 nodes and a gtm master.
> We distribute data to nodes based on a hash(id), and we partition data
> by date: usually by week.
> Unfortunately I cannot give data to reproduce; I think it would be too
> large anyway.
>
> I have not tried it on an empty table, but I think it would be successful.
> If we lower the amount of requested data the query is successful.
> If I join on single partitions instead of the master tables the query
> is successful.
>
> I think the problem has to do with the amount of data I’m requesting;
> the select statement I shared earlier should bring data that is about
> 1.4-2.0GB on disk. Postgres on the coordinator consumes over 16GB
> before crashing. I attached a query plan.
>
> From: Ashutosh Bapat [mailto:ash...@en...]
> Sent: Thursday, March 14, 2013 11:26 AM
> To: Arni Sumarlidason
> Cc: pos...@li...
> Subject: Re: [Postgres-xc-general] Configuration Error - or?
>
> On Thu, Mar 14, 2013 at 7:20 PM, Arni Sumarlidason
> <Arn...@md...> wrote:
>
> I ran this query,
>
> select * from table t, table_lang l where t.id=l.id and l.date >=
> '2012-12-01' and l.date < '2013-01-01';
>
> with these results,
>
> Does this produce a crash without any data in the tables? If not, you
> need to give the data as well. Please provide exact reproduction
> steps, which when run without any modification, reproduce the crash.
> Also, you will need to tell how the table is distributed, what XC
> configuration you are using like how many datanodes/coordinators etc.
>
> The connection to the server was lost. Attempting reset: Failed.
> Time: 241861.411 ms
>
> Thank you for quick response,
> Arni
>
> From: Ashutosh Bapat [mailto:ash...@en...]
> Sent: Thursday, March 14, 2013 11:17 AM
> To: Arni Sumarlidason
> Cc: pos...@li...
> Subject: Re: [Postgres-xc-general] Configuration Error - or?
>
> This should not happen, can you please provide reproduction steps.
>
> On Thu, Mar 14, 2013 at 6:40 PM, Arni Sumarlidason
> <Arn...@md...> wrote:
>
> Good Morning / Evening,
>
> I think I may have a configuration problem… I have data distributed
> relationally with date based partitions to 20 nodes. When I try to
> select it from a coordinator the query executes quickly and the data
> nodes promptly pump data to the coordinator. However I run out of
> memory (16GB) and the postgres process crashes. Is there some way to
> make the coordinator cache the data to its local table (local disk), or
> is there some way to get around this memory issue?
>
> Arni Sumarlidason | Software Engineer, Information Technology
> MDA | 820 West Diamond Ave | Gaithersburg, MD | USA
> O: 240-833-8200 D: 240-833-8318 M: 256-393-2803
> arn...@md... | http://www.mdaus.com
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company
|
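Arni notes above that joining individual partitions succeeds while joining through the inheritance parents exhausts the coordinator's memory. Purely as a sketch of that workaround, with table_lang_20121202 as a hypothetical name for the matching partition of the second table (the real partition names are not shown in the thread), the failing join can be driven one partition pair at a time:

    -- hypothetical per-partition form of the failing query; the parent-level
    -- join in the attached plan scans every child table on every datanode
    SELECT *
    FROM table_20121202 t
    JOIN table_lang_20121202 l ON t.id = l.id
    WHERE l.date >= '2012-12-01' AND l.date < '2013-01-01';

Running one such statement per week-sized partition keeps the coordinator's working set to roughly one partition's worth of rows at a time.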
|
From: Koichi S. <koi...@gm...> - 2013-03-16 03:30:22
|
The plan looks strange, because it contains too many datanode scans. Ashutosh, could you review whether the plan is reasonable?

Arni; it will be very helpful if we get the CREATE TABLE statements you used for the involved tables. The table distribution information helps a lot to see what's going on.

Best;
----------
Koichi Suzuki

2013/3/15 Arni Sumarlidason <Arn...@md...>:
> Thank you for help,
>
> I used the script authored by Koichi Suzuki to deploy the cluster, so each
> node contains a data node, coordinator, and gtm proxy.
> We have 20 nodes and a gtm master.
> We distribute data to nodes based on a hash(id), and we partition data by
> date: usually by week.
> Unfortunately I cannot give data to reproduce; I think it would be too large
> anyway.
>
> I have not tried it on an empty table, but I think it would be successful.
> If we lower the amount of requested data the query is successful.
> If I join on single partitions instead of the master tables the query is
> successful.
>
> I think the problem has to do with the amount of data I’m requesting; the
> select statement I shared earlier should bring data that is about 1.4-2.0GB
> on disk. Postgres on the coordinator consumes over 16GB before crashing. I
> attached a query plan.
>
> From: Ashutosh Bapat [mailto:ash...@en...]
> Sent: Thursday, March 14, 2013 11:26 AM
> To: Arni Sumarlidason
> Cc: pos...@li...
> Subject: Re: [Postgres-xc-general] Configuration Error - or?
>
> On Thu, Mar 14, 2013 at 7:20 PM, Arni Sumarlidason
> <Arn...@md...> wrote:
>
> I ran this query,
>
> select * from table t, table_lang l where t.id=l.id and l.date >=
> '2012-12-01' and l.date < '2013-01-01';
>
> with these results,
>
> Does this produce a crash without any data in the tables? If not, you need
> to give the data as well. Please provide exact reproduction steps, which
> when run without any modification, reproduce the crash. Also, you will need
> to tell how the table is distributed, what XC configuration you are using
> like how many datanodes/coordinators etc.
>
> The connection to the server was lost. Attempting reset: Failed.
> Time: 241861.411 ms
>
> Thank you for quick response,
> Arni
>
> From: Ashutosh Bapat [mailto:ash...@en...]
> Sent: Thursday, March 14, 2013 11:17 AM
> To: Arni Sumarlidason
> Cc: pos...@li...
> Subject: Re: [Postgres-xc-general] Configuration Error - or?
>
> This should not happen, can you please provide reproduction steps.
>
> On Thu, Mar 14, 2013 at 6:40 PM, Arni Sumarlidason
> <Arn...@md...> wrote:
>
> Good Morning / Evening,
>
> I think I may have a configuration problem… I have data distributed
> relationally with date based partitions to 20 nodes. When I try to select it
> from a coordinator the query executes quickly and the data nodes promptly
> pump data to the coordinator. However I run out of memory (16GB) and the
> postgres process crashes. Is there some way to make the coordinator cache
> the data to its local table (local disk), or is there some way to get around
> this memory issue?
>
> Arni Sumarlidason | Software Engineer, Information Technology
> MDA | 820 West Diamond Ave | Gaithersburg, MD | USA
> O: 240-833-8200 D: 240-833-8318 M: 256-393-2803
> arn...@md... | http://www.mdaus.com
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company
|
|
From: Koichi S. <koi...@gm...> - 2013-03-16 03:23:07
|
One example is: 5 gtm proxies, 5 datanodes and 5 coordinators, one of each per box, with the gtm master on one of the boxes and the gtm slave on another. GTM does not need much CPU/memory but deals with a rather big amount of network traffic, so gtm/gtm slave should be given a separate network segment. Each datanode/coordinator can have its slaves on the next box.

Please take a look at the pgxc_ctl bash script at https://github.com/koichi-szk/PGXC-Tools/tree/master/pgxc_ctl

It comes with a similar case using 4 boxes (gtm and gtm slave are on a separate box, but you can configure them on some of the 5 boxes you have).

Hope it helps.

Regards;
----------
Koichi Suzuki

2013/3/15 Chris Brown <cb...@in...>:
> Hi all,
>
> I'm doing some preliminary testing of XC (now that I have it working) and
> I'm wondering what sort of configurations are recommended.
>
> For example, let's say I want write performance on well segregated data (few
> cross-node writes) and I have 3 boxes to be the server farm. How would you
> recommend that I configure those?
>
> Now let's say I have 5 boxes and want redundancy, how do I configure it and
> how do I achieve redundancy?
>
> I know every configuration is going to be a bit different but I'm looking
> for guidance on best practices.
>
> Thanks,
> Chris...
|
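To make the layout concrete: in XC 1.0.x each coordinator learns about the other nodes through its pgxc_node catalog, so a setup like the one Koichi sketches involves registering every datanode (and the remote coordinators) on each coordinator. A rough sketch, where box1..box5, the node names, and the ports are placeholders rather than anything taken from this thread:

    -- run on each coordinator, adjusting names/hosts/ports to the real layout
    CREATE NODE dn1 WITH (TYPE = 'datanode', HOST = 'box1', PORT = 15432);
    CREATE NODE dn2 WITH (TYPE = 'datanode', HOST = 'box2', PORT = 15432);
    CREATE NODE dn3 WITH (TYPE = 'datanode', HOST = 'box3', PORT = 15432);
    CREATE NODE dn4 WITH (TYPE = 'datanode', HOST = 'box4', PORT = 15432);
    CREATE NODE dn5 WITH (TYPE = 'datanode', HOST = 'box5', PORT = 15432);
    CREATE NODE coord2 WITH (TYPE = 'coordinator', HOST = 'box2', PORT = 5432);
    -- ...one CREATE NODE per remaining remote coordinator...

    -- make the pooler pick up the new node list
    SELECT pgxc_pool_reload();

The pgxc_ctl script linked above generates essentially this registration (plus the slave and GTM wiring) from a single configuration file.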
|
From: Chris B. <cb...@in...> - 2013-03-15 20:50:08
|
This link was helpful: http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf

From: Chris Brown <cb...@in...>
Date: Thursday, March 14, 2013 9:35 PM
To: "pos...@li..." <pos...@li...>
Subject: [Postgres-xc-general] XC Configurations

Hi all,

I'm doing some preliminary testing of XC (now that I have it working) and I'm wondering what sort of configurations are recommended.

For example, let's say I want write performance on well segregated data (few cross-node writes) and I have 3 boxes to be the server farm. How would you recommend that I configure those?

Now let's say I have 5 boxes and want redundancy, how do I configure it and how do I achieve redundancy?

I know every configuration is going to be a bit different but I'm looking for guidance on best practices.

Thanks,
Chris...
|
|
From: Chris B. <cb...@in...> - 2013-03-15 04:35:21
|
Hi all,

I'm doing some preliminary testing of XC (now that I have it working) and I'm wondering what sort of configurations are recommended.

For example, let's say I want write performance on well segregated data (few cross-node writes) and I have 3 boxes to be the server farm. How would you recommend that I configure those?

Now let's say I have 5 boxes and want redundancy, how do I configure it and how do I achieve redundancy?

I know every configuration is going to be a bit different but I'm looking for guidance on best practices.

Thanks,
Chris...
|
|
From: Chris B. <cb...@in...> - 2013-03-15 00:10:07
|
It looks like this may be the problem. I turned connection logging on for the datanodes and coordinator. The coordinator reports the connections from 'createdb' and 'psql', but the datanodes don't report incoming connections from the coordinator. Netstat shows the datanodes listening on localhost only. Config error...

Adding:

listen_addresses = '*'

And fixed. Thanks very much! I guess that should have been obvious. Perhaps a multi-node quick-start would help.

Chris...

________________________________
From: Michael Paquier [mic...@gm...]
Sent: March 14, 2013 4:42 PM
To: Chris Brown
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Initial configuration of XC

On Fri, Mar 15, 2013 at 8:27 AM, Chris Brown <cb...@in...> wrote:

Okay:

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 f
(1 row)

postgres=# select pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 t
(1 row)

In between these calls I called 'createdb test' (which failed as below).

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 f
(1 row)

So now to look at pg_hba.conf (on both datanodes and coordinator):

local   all   all                  trust
host    all   all   127.0.0.1/32   trust
host    all   all   ::1/128        trust
host    all   all   10.0.0.0/8     trust

It would be interesting to have a look at the connection logs on Datanode side to see if actually a connection is established. You can do that by setting log_connections to on.
--
Michael
|
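For anyone hitting the same "Failed to get pooled connections" error, the fix above boils down to letting the datanodes accept TCP connections from the coordinator instead of listening on localhost only. A minimal sketch (the data directory path and the exact restart invocation are assumptions, not taken from Chris's setup):

    # in each datanode's postgresql.conf
    listen_addresses = '*'      # or the datanode's own address / the cluster subnet

    # listen_addresses only takes effect on restart
    pg_ctl restart -Z datanode -D /path/to/datanode_data

    # then, from psql on the coordinator, refresh the pooler if needed:
    #   SELECT pgxc_pool_reload();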
|
From: Michael P. <mic...@gm...> - 2013-03-14 23:42:35
|
On Fri, Mar 15, 2013 at 8:27 AM, Chris Brown <cb...@in...> wrote:
> Okay:
>
> postgres=# select pgxc_pool_check();
>  pgxc_pool_check
> -----------------
>  f
> (1 row)
>
> postgres=# select pgxc_pool_reload();
>  pgxc_pool_reload
> ------------------
>  t
> (1 row)
>
> postgres=# select pgxc_pool_check();
>  pgxc_pool_check
> -----------------
>  t
> (1 row)
>
> In between these calls I called 'createdb test' (which failed as below).
>
> postgres=# select pgxc_pool_check();
>  pgxc_pool_check
> -----------------
>  f
> (1 row)
>
> So now to look at pg_hba.conf (on both datanodes and coordinator):
>
> local   all   all                  trust
> host    all   all   127.0.0.1/32   trust
> host    all   all   ::1/128        trust
> host    all   all   10.0.0.0/8     trust
>
It would be interesting to have a look at the connection logs on Datanode side to see if actually a connection is established. You can do that by setting log_connections to on.
--
Michael
|
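Michael's suggestion above is easy to try; a hedged sketch (the data directory path is a placeholder):

    # in the datanode's postgresql.conf
    log_connections = on

    # log_connections is picked up on reload; no restart needed
    pg_ctl reload -Z datanode -D /path/to/datanode_data

If a createdb on the coordinator then produces no "connection received" lines in the datanode log, the coordinator's pooler is never reaching the datanode, which points at listen_addresses, pg_hba.conf, or the firewall.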
|
From: Chris B. <cb...@in...> - 2013-03-14 23:28:07
|
Okay:

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 f
(1 row)

postgres=# select pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 t
(1 row)

In between these calls I called 'createdb test' (which failed as below).

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 f
(1 row)

So now to look at pg_hba.conf (on both datanodes and coordinator):

local   all   all                  trust
host    all   all   127.0.0.1/32   trust
host    all   all   ::1/128        trust
host    all   all   10.0.0.0/8     trust

Chris...

________________________________
From: Michael Paquier [mic...@gm...]
Sent: March 14, 2013 3:58 PM
To: Chris Brown
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Initial configuration of XC

On Fri, Mar 15, 2013 at 4:35 AM, Chris Brown <cb...@in...> wrote:

Hi all,

I'm a new postgres-xc user and am trying to set up my first cluster. I seem to have everything configured according to the directions but it's not working as expected. When I try to create a table I get (on box 3):

-bash-4.2$ /usr/local/pgxc/bin/createdb test
createdb: database creation failed: ERROR: Failed to get pooled connections

In the logs I see (gtm.log):

1:140475820087040:2013-03-14 12:27:06.769 PDT -LOG: Any GTM standby node not found in registered node(s).
LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:378
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Assigning new transaction ID = 10212
LOCATION: GTM_GetGlobalTransactionIdMulti, gtm_txn.c:581
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Sending transaction id 10212
LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:1172
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Received transaction ID 10212 for snapshot obtention
LOCATION: ProcessGetSnapshotCommand, gtm_snap.c:307
1:140475794900736:2013-03-14 12:27:06.988 PDT -LOG: Cancelling transaction id 10212
LOCATION: ProcessRollbackTransactionCommand, gtm_txn.c:1989
1:140475794900736:2013-03-14 12:27:06.989 PDT -LOG: Cleaning up thread state
LOCATION: GTM_ThreadCleanup, gtm_thread.c:265

coordinator.log:

LOG: failed to connect to Datanode
WARNING: can not connect to node 16384
LOG: failed to acquire connections
STATEMENT: CREATE DATABASE test;
ERROR: Failed to get pooled connections

This error message means that Coordinator pooler is not able to get a connection with a remote node.

So, what am I doing wrong? Why is the createdb not working? And/or where should I be looking for more diagnostics?

Those settings look to be correct. What is the result output of "select pgxc_pool_check"? If it is false, you will need to launch pgxc_pool_reload to update the pooler cache with correct node information. If it is true, I imagine that you have either issues with your firewall or pg_hba.conf is not set correctly.
--
Michael
|
|
From: Michael P. <mic...@gm...> - 2013-03-14 22:59:03
|
On Fri, Mar 15, 2013 at 4:35 AM, Chris Brown <cb...@in...> wrote:
> Hi all,
>
> I'm a new postgres-xc user and am trying to set up my first cluster. I
> seem to have everything configured according to the directions but it's not
> working as expected. When I try to create a table I get (on box 3):
>
> -bash-4.2$ /usr/local/pgxc/bin/createdb test
> createdb: database creation failed: ERROR: Failed to get pooled
> connections
>
> In the logs I see (gtm.log):
>
> 1:140475820087040:2013-03-14 12:27:06.769 PDT -LOG: Any GTM standby node
> not found in registered node(s).
> LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:378
> 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Assigning new
> transaction ID = 10212
> LOCATION: GTM_GetGlobalTransactionIdMulti, gtm_txn.c:581
> 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Sending transaction
> id 10212
> LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:1172
> 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Received transaction
> ID 10212 for snapshot obtention
> LOCATION: ProcessGetSnapshotCommand, gtm_snap.c:307
> 1:140475794900736:2013-03-14 12:27:06.988 PDT -LOG: Cancelling
> transaction id 10212
> LOCATION: ProcessRollbackTransactionCommand, gtm_txn.c:1989
> 1:140475794900736:2013-03-14 12:27:06.989 PDT -LOG: Cleaning up thread
> state
> LOCATION: GTM_ThreadCleanup, gtm_thread.c:265
>
> coordinator.log:
>
> LOG: failed to connect to Datanode
> WARNING: can not connect to node 16384
> LOG: failed to acquire connections
> STATEMENT: CREATE DATABASE test;
>
> ERROR: Failed to get pooled connections
>
This error message means that Coordinator pooler is not able to get a connection with a remote node.

> So, what am I doing wrong? Why is the createdb not working? And/or where
> should I be looking for more diagnostics?
>
Those settings look to be correct. What is the result output of "select pgxc_pool_check"? If it is false, you will need to launch pgxc_pool_reload to update the pooler cache with correct node information. If it is true, I imagine that you have either issues with your firewall or pg_hba.conf is not set correctly.
--
Michael
|
|
From: Chris B. <cb...@in...> - 2013-03-14 19:52:36
|
Hi all,
I'm a new postgres-xc user and am trying to set up my first cluster. I seem to have everything configured according to the directions but it's not working as expected. When I try to create a table I get (on box 3):
-bash-4.2$ /usr/local/pgxc/bin/createdb test
createdb: database creation failed: ERROR: Failed to get pooled connections
In the logs I see (gtm.log):
1:140475820087040:2013-03-14 12:27:06.769 PDT -LOG: Any GTM standby node not found in registered node(s).
LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:378
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Assigning new transaction ID = 10212
LOCATION: GTM_GetGlobalTransactionIdMulti, gtm_txn.c:581
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Sending transaction id 10212
LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:1172
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Received transaction ID 10212 for snapshot obtention
LOCATION: ProcessGetSnapshotCommand, gtm_snap.c:307
1:140475794900736:2013-03-14 12:27:06.988 PDT -LOG: Cancelling transaction id 10212
LOCATION: ProcessRollbackTransactionCommand, gtm_txn.c:1989
1:140475794900736:2013-03-14 12:27:06.989 PDT -LOG: Cleaning up thread state
LOCATION: GTM_ThreadCleanup, gtm_thread.c:265
coordinator.log:
LOG: failed to connect to Datanode
WARNING: can not connect to node 16384
LOG: failed to acquire connections
STATEMENT: CREATE DATABASE test;
ERROR: Failed to get pooled connections
STATEMENT: CREATE DATABASE test;
I have 3 boxes (10.120.26.12, 10.120.26.13, 10.120.26.14). Boxes 1 and 2 are datanodes and box 3 is the gtm and coordinator. I am using the default ports. iptables is set to allow all on all boxes. I am using the 1.0.2 tar, built locally on each machine. On the datanodes I have changed:
max_prepared_transactions = 16
gtm_host = '10.120.26.14'
#gtm_port = 6666
pgxc_node_name = 'dn_q2' # dn_q1 on box 1
On box 1 (datanode):
postgres=# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
-----------+-----------+-----------+-----------+----------------+------------------+-----------
dn_q1 | C | 5432 | localhost | f | f | 891718832
(1 row)
On box 2 (datanode):
postgres=# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
-----------+-----------+-----------+-----------+----------------+------------------+-------------
dn_q2 | C | 5432 | localhost | f | f | -2106179335
(1 row)
On box 3 (coordinator):
postgres=# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
-----------+-----------+-----------+--------------+----------------+------------------+-------------
coord_q3 | C | 5432 | localhost | f | f | -2063466557
dn_q1 | D | 5432 | 10.120.26.12 | f | f | 891718832
dn_q2 | D | 5432 | 10.120.26.13 | f | f | -2106179335
(3 rows)
So, what am I doing wrong? Why is the createdb not working? And/or where should I be looking for more diagnostics?
Thanks,
Chris...
|
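Since "Failed to get pooled connections" only means the coordinator's pooler could not open a connection to one of the registered nodes, a quick manual check, sketched here with the addresses from the message above (the user and database names are assumptions), is to try reaching each datanode from the coordinator box:

    # run on box 3 (10.120.26.14, the coordinator/gtm host)
    psql -h 10.120.26.12 -p 5432 -U postgres -d postgres -c 'SELECT 1;'
    psql -h 10.120.26.13 -p 5432 -U postgres -d postgres -c 'SELECT 1;'

If these fail while a local psql on the datanode works, listen_addresses or pg_hba.conf on the datanode is the usual suspect, which is exactly what the listen_addresses fix earlier in the thread turned out to be.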
|
From: Arni S. <Arn...@md...> - 2013-03-14 16:54:43
|
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Hash Join (cost=1.64..5.35 rows=236 width=10726)
Hash Cond: (t.id = l.id)
-> Append (cost=0.00..0.00 rows=36000 width=5638)
-> Data Node Scan on table "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20121111 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20121118 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20121125 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20121202 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20121209 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20121216 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20121223 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20121230 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130106 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130113 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130120 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130127 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130203 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130210 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130217 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130224 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130303 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130310 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130317 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130324 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130331 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130407 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130414 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130421 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_20130428 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1202 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1203 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1204 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1205 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1206 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1207 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1208 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1209 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1210 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_1211 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Hash (cost=0.00..0.00 rows=131000 width=5088)
-> Append (cost=0.00..0.00 rows=131000 width=5088)
-> Data Node Scan on table_join "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_120806 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_120813 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_120820 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_120827 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_120903 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_120910 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_120917 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_120924 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121001 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121008 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121015 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121022 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121029 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121105 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121112 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121119 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121126 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121203 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121210 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121217 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121224 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_121231 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130107 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130114 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130121 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130128 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130204 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130211 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130218 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130225 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130304 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130311 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130318 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130325 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130401 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130408 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130415 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130422 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130429 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130506 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130513 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130520 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130527 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130603 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130610 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130617 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130624 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130701 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130708 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130715 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130722 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130729 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130805 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130812 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130819 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130826 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130902 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130909 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130916 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130923 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_130930 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131007 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131014 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131021 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131028 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131104 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131111 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131118 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131125 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131202 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131209 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131216 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131223 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_131230 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140106 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140113 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140120 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140127 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140203 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140210 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140217 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140224 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140303 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140310 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140317 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140324 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140331 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140407 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140414 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140421 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140428 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140505 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140512 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140519 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140526 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140602 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140609 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140616 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140623 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140630 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140707 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140714 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140721 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140728 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140804 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140811 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140818 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140825 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140901 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140908 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140915 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140922 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_140929 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141006 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141013 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141020 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141027 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141103 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141110 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141117 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141124 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141201 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141208 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141215 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141222 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_141229 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_150105 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_150112 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_150119 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
-> Data Node Scan on table_join_150126 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088)
Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d
(339 rows)
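If the weekly table_join_* children above hang off an inheritance parent with CHECK constraints on the date column, constraint exclusion should let the planner skip the out-of-range children instead of listing every partition on every node. A minimal sketch of checking that, with placeholder names (table_join for the parent, date for the partitioning column, a coordinator on port 5432):

$ psql -p 5432 mydb <<'SQL'
SHOW constraint_exclusion;            -- 'partition' is the default
EXPLAIN SELECT * FROM table_join
 WHERE date >= '2012-12-01' AND date < '2013-01-01';
SQL

If every child still appears in the plan, the children most likely carry no usable CHECK constraints on that column.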
|
|
From: Ashutosh B. <ash...@en...> - 2013-03-14 15:26:19
|
On Thu, Mar 14, 2013 at 7:20 PM, Arni Sumarlidason < Arn...@md...> wrote: > *I ran this query,* > > select * from table t, table_lang l where t.id=l.id and l.date >= > '2012-12-01' and l.date < '2013-01-01';**** > > *with these results,* > Does this produce a crash without any data in the tables? If not, you need to give the data as well. Please provide exact reproduction steps, which when run without any modification, reproduce the crash. Also, you will need to tell how the table is distributed, what XC configuration you are using like how many datanodes/coordinators etc. > ** > > The connection to the server was lost. Attempting reset: Failed.**** > > Time: 241861.411 ms**** > > ** ** > > Thank you for quick response,**** > > Arni**** > > ** ** > > *From:* Ashutosh Bapat [mailto:ash...@en...] > *Sent:* Thursday, March 14, 2013 11:17 AM > *To:* Arni Sumarlidason > *Cc:* pos...@li... > *Subject:* Re: [Postgres-xc-general] Configuration Error - or?**** > > ** ** > > This should not happen, can you please provide reproduction steps.**** > > On Thu, Mar 14, 2013 at 6:40 PM, Arni Sumarlidason < > Arn...@md...> wrote:**** > > Good Morning / Evening,**** > > **** > > I think I may have a configuration problem… I have data distributed > relationally with date based partitions to 20 nodes. When I try to select > it from a coordinator the query executes quickly and the data nodes > promptly pump data to the coordinator. However I run out of memory(16bg) > and the postgres process crashes. Is there some way to make the coordinator > cache the data to its local table(local disk), or is there some way to get > around this memory issue? **** > > **** > > *Arni Sumarlidason | Software Engineer, Information Technology***** > > MDA | 820 West Diamond Ave | Gaithersburg, MD | USA**** > > O: 240-833-8200 D: 240-833-8318 M: 256-393-2803**** > > arn...@md...| http://www.mdaus.com<https://console.mxlogic.com/redir/?8VxNwQsL6zAQsCTNPb3a9EVd79I04GxHtenMTvANOoVcsCej76XCZEjrDmxfy6Hrc5v2vNmzaDE4endK3D7zqbbzaoVxdBwS-yr1vF6y0QJxqJ9Ao_gK7SbAvxYtfdbFEw4GMtAhrzIVlwq81GcqNd41sQglyidPYfDwedECSjrb38UsqenT3uvyc> > **** > > **** > > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar<https://console.mxlogic.com/redir/?8VxNwQsL6zAQsCTNPb3a9EVd79I06JR7u1KktrPlBA2Q1l1d0llS9K_9zANOoVcsCedTdXgCTeJ2v4dmSoa-4_yJ6lfg8sKrs7ef6Qmn6kNP2rb1JZ4S2_id41Fr2Rqj8N-xsfIn8_3UWuqnjh09lwX8yT7pOH0Qg3koRyq82VEwH4ArDUvf0srjdICSm6hMUQsLK6VP9UQ-a4ueI> > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general<https://console.mxlogic.com/redir/?4sMUMqenzhOqejrUVBxB4QsCzASjDdqymovaAWtiHsruW01_wrxYGjFxYGjB1SKblFcz7W5M-NszYfzFVK_9zANOoVcsCedTdXgCTeJ2v4dmSoa-4_yJ6lfg8sKrs7ef6Qmn6kNP2rb1JZ4S2_id41Fr2Rqj8N-xsfIn8_3UWuqnjh09lwX8yT7pOH0Qg3koRyq82VEwH4ArDUvf0srpdICSm6hMUQsLK6QuvVWDEq> > **** > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company**** > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
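The cluster layout and table distribution being asked about can be pulled from any coordinator; a small sketch, with the database and table names as placeholders:

$ psql -p 5432 mydb <<'SQL'
SELECT node_name, node_type, node_host, node_port FROM pgxc_node;
\d+ table_lang
SQL

pgxc_node lists every coordinator and datanode the coordinator knows about, and \d+ on an XC coordinator should also report how the table is distributed and on which nodes.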
|
From: Arni S. <Arn...@md...> - 2013-03-14 15:21:24
|
I ran this query, select * from table t, table_lang l where t.id=l.id and l.date >= '2012-12-01' and l.date < '2013-01-01'; with these results, The connection to the server was lost. Attempting reset: Failed. Time: 241861.411 ms Thank you for quick response, Arni From: Ashutosh Bapat [mailto:ash...@en...] Sent: Thursday, March 14, 2013 11:17 AM To: Arni Sumarlidason Cc: pos...@li... Subject: Re: [Postgres-xc-general] Configuration Error - or? This should not happen, can you please provide reproduction steps. On Thu, Mar 14, 2013 at 6:40 PM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: Good Morning / Evening, I think I may have a configuration problem... I have data distributed relationally with date based partitions to 20 nodes. When I try to select it from a coordinator the query executes quickly and the data nodes promptly pump data to the coordinator. However I run out of memory(16bg) and the postgres process crashes. Is there some way to make the coordinator cache the data to its local table(local disk), or is there some way to get around this memory issue? Arni Sumarlidason | Software Engineer, Information Technology MDA | 820 West Diamond Ave | Gaithersburg, MD | USA O: 240-833-8200 D: 240-833-8318 M: 256-393-2803 arn...@md...<mailto:arn...@md...>| http://www.mdaus.com<https://console.mxlogic.com/redir/?8VxNwQsL6zAQsCTNPb3a9EVd79I04GxHtenMTvANOoVcsCej76XCZEjrDmxfy6Hrc5v2vNmzaDE4endK3D7zqbbzaoVxdBwS-yr1vF6y0QJxqJ9Ao_gK7SbAvxYtfdbFEw4GMtAhrzIVlwq81GcqNd41sQglyidPYfDwedECSjrb38UsqenT3uvyc> ------------------------------------------------------------------------------ Everyone hates slow websites. So do we. Make your web apps faster with AppDynamics Download AppDynamics Lite for free today: http://p.sf.net/sfu/appdyn_d2d_mar<https://console.mxlogic.com/redir/?8VxNwQsL6zAQsCTNPb3a9EVd79I06JR7u1KktrPlBA2Q1l1d0llS9K_9zANOoVcsCedTdXgCTeJ2v4dmSoa-4_yJ6lfg8sKrs7ef6Qmn6kNP2rb1JZ4S2_id41Fr2Rqj8N-xsfIn8_3UWuqnjh09lwX8yT7pOH0Qg3koRyq82VEwH4ArDUvf0srjdICSm6hMUQsLK6VP9UQ-a4ueI> _______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general<https://console.mxlogic.com/redir/?4sMUMqenzhOqejrUVBxB4QsCzASjDdqymovaAWtiHsruW01_wrxYGjFxYGjB1SKblFcz7W5M-NszYfzFVK_9zANOoVcsCedTdXgCTeJ2v4dmSoa-4_yJ6lfg8sKrs7ef6Qmn6kNP2rb1JZ4S2_id41Fr2Rqj8N-xsfIn8_3UWuqnjh09lwX8yT7pOH0Qg3koRyq82VEwH4ArDUvf0srpdICSm6hMUQsLK6QuvVWDEq> -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
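One way to keep a result set this large from being buffered in one piece on the client is to pull it through a cursor in batches; a minimal sketch against the query above (the table names are the anonymized ones from the message, so treat them as placeholders):

$ psql -p 5432 mydb <<'SQL'
BEGIN;
DECLARE big_join CURSOR FOR
  SELECT * FROM table_join t, table_lang l
   WHERE t.id = l.id
     AND l.date >= '2012-12-01' AND l.date < '2013-01-01';
FETCH FORWARD 10000 FROM big_join;   -- repeat, or drive the loop from application code
CLOSE big_join;
COMMIT;
SQL

Whether this also bounds memory on the coordinator itself depends on how XC materializes the datanode results, so it is worth trying on a narrow date window first.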
|
From: Ashutosh B. <ash...@en...> - 2013-03-14 15:17:22
|
This should not happen, can you please provide reproduction steps. On Thu, Mar 14, 2013 at 6:40 PM, Arni Sumarlidason < Arn...@md...> wrote: > Good Morning / Evening,**** > > ** ** > > I think I may have a configuration problem… I have data distributed > relationally with date based partitions to 20 nodes. When I try to select > it from a coordinator the query executes quickly and the data nodes > promptly pump data to the coordinator. However I run out of memory(16bg) > and the postgres process crashes. Is there some way to make the coordinator > cache the data to its local table(local disk), or is there some way to get > around this memory issue? **** > > ** ** > > *Arni Sumarlidason | Software Engineer, Information Technology***** > > MDA | 820 West Diamond Ave | Gaithersburg, MD | USA**** > > O: 240-833-8200 D: 240-833-8318 M: 256-393-2803**** > > arn...@md...| http://www.mdaus.com **** > > ** ** > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
|
From: Arni S. <Arn...@md...> - 2013-03-14 14:41:00
|
Good Morning / Evening, I think I may have a configuration problem... I have data distributed relationally with date-based partitions across 20 nodes. When I try to select it from a coordinator, the query executes quickly and the data nodes promptly pump data to the coordinator, but I run out of memory (16GB) and the postgres process crashes. Is there some way to make the coordinator cache the data in a local table (on local disk), or some other way to get around this memory issue? Arni Sumarlidason | Software Engineer, Information Technology MDA | 820 West Diamond Ave | Gaithersburg, MD | USA O: 240-833-8200 D: 240-833-8318 M: 256-393-2803 arn...@md... | http://www.mdaus.com
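On the idea of caching the data in a local table instead of shipping it to the client, one option is to write the join result into a table so the rows land on disk across the datanodes rather than in the client session; a sketch with placeholder names, with the caveat that if CREATE TABLE AS is restricted in the XC release in use, a pre-created table plus INSERT ... SELECT is the equivalent:

$ psql -p 5432 mydb <<'SQL'
CREATE TABLE join_201212 AS
  SELECT * FROM table_join t, table_lang l
   WHERE t.id = l.id
     AND l.date >= '2012-12-01' AND l.date < '2013-01-01';
SQL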
|
From: Arni S. <Arn...@md...> - 2013-03-11 14:35:52
|
Good Morning everyone, I hope everyone had a good weekend. I am attempting to cluster a table based on an index and I am receiving some weird messages. They don't seem to be fatal occurrences, but I am interested in what is causing them. Can anyone shed any light?

From the terminal:

db_01=# cluster table_1207 using table_1207_pkey;
WARNING: unexpected EOF on datanode connection
CLUSTER
Time: 5594.429 ms

From pg_log on the coordinator:

WARNING: Unexpected data on connection, cleaning.
WARNING: Unexpected data on connection, cleaning.
WARNING: Unexpected data on connection, cleaning.
WARNING: Unexpected data on connection, cleaning.
WARNING: Unexpected data on connection, cleaning.
WARNING: unexpected EOF on datanode connection
WARNING: Unexpected data on connection, cleaning.
WARNING: Unexpected data on connection, cleaning.
WARNING: Unexpected data on connection, cleaning.
WARNING: unexpected EOF on datanode connection

Arni Sumarlidason | Software Engineer, Information Technology MDA | 820 West Diamond Ave | Gaithersburg, MD | USA O: 240-833-8200 D: 240-833-8318 M: 256-393-2803 arn...@md... | http://www.mdaus.com
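The coordinator log alone does not show which end dropped the connection; the matching datanode logs at the same timestamp usually do. A small sketch, assuming a default pg_log directory under each datanode's data directory (host names and paths are placeholders):

$ ssh node01 'grep -E "FATAL|ERROR|EOF" /var/lib/pgxc/datanode/pg_log/*.log | tail -n 20'

Run the same check against each datanode that took part in the CLUSTER to see whether the disconnect originated there.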
|
From: Koichi S. <koi...@gm...> - 2013-03-06 15:21:42
|
I see. Installing XC resources in existing directory could be harmful. I think it is similar reason why pg_basebackup requires empty directory to begin with. pgxc_ctl has a means to clean up the target directory so it will help to some extent. Regards; ---------- Koichi Suzuki 2013/3/6 Theodotos Andreou <th...@ub...>: > Guys just for the record I have found the solution on my original question. > > It seems that the GTM directory using the stock postgres-xc on Ubuntu was > pre-initialised. After I created a new GTM directory and initialise it my > self the error went away. This is from the logs: > > > 1:139942913603392:2013-03-06 11:58:07.168 EET -LOG: Starting GTM server at > (*:6666) -- control file /var/lib/postgres-xc/GTM-master/gtm.control > LOCATION: main, > /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:559 > 1:139942913603392:2013-03-06 11:58:07.168 EET -LOG: Restoring last GXID to > 10000 > > LOCATION: GTM_RestoreTxnInfo, > /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_txn.c:2599 > 1:139942913603392:2013-03-06 11:58:07.168 EET -LOG: Started to run as > GTM-Active. > LOCATION: main, > /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:631 > > Thank you both for your time > > > > On 03/04/2013 06:34 PM, Koichi Suzuki wrote: >> >> Removing the target is just in case. For example, pg_basebackup >> requires the target directory empty. It is better to copy only the >> "needed" ones. It is good thing to clear the resources to avoid >> conflict with other materials. If you'd like to eploy non-standard >> binaries, say, contrib, you can install them locally. It will be in >> your $pgxcInstallDir directories, which automatically goes to all the >> target servers with deploy command. >> >> Regards; >> ---------- >> Koichi Suzuki >> >> >> 2013/3/4 Nikhil Sontakke <ni...@st...>: >>>> >>>> postgres-xc@pgxc-ctl:~$ ls pgxc >>>> bin include lib nodes pgxcConf pgxc_ctl_log share >>>> >>>> >>>> After deployment the above directories were apparently deleted: >>>> >>>> >>>> postgres-xc@pgxc-ctl:~$ ls pgxc >>>> nodes pgxcConf pgxc_ctl_log >>>> >>>> >>> Ok, it seems this node is part of your deploy targets as well! >>> >>> "node13 node12 node06 node07 node08 node09" >>> >>> So that's why the deploy which first clears up existing directories >>> removes them as well. >>> >>> Think of this node where you compiled the sources as the management >>> node and use it to install binaries on other nodes. Obviously since >>> the script is basically a bash script you should be able to modify it >>> to check if this management node is also part of the deploy list as >>> well (and submit it back as a patch) and not remove the directories in >>> that case. >>> >>> >>>> Do you by any chance have the answer to my original question? :) : >>>> >>> The message comes in from a node to register with the GTM. Are there >>> any coordinator/datanode nodes already running when you start GTM? I >>> would suggest that you get a clean cluster up first and then we can >>> investigate further. >>> >>> Regards, >>> Nikhils >>> >>>> Why when I use gtm from Ubuntu repos (gtm_ctl -Z gtm -D >>>> /var/lib/postgres-xc/GTM start ) I get this in the logs: >>>> >>>> >>>> 1:139871852988224:2013-03-01 19:28:36.528 EET -LOG: Any GTM standby >>>> node not found in registered node(s). 
>>>> LOCATION: gtm_standby_connect_to_standby_int, >>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_standby.c:378 >>>> 1:139871844607744:2013-03-01 19:28:36.528 EET -FATAL: Expecting a >>>> startup message, but received � >>>> LOCATION: GTM_ThreadMain, >>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:985 >>>> 1:139871844607744:2013-03-01 19:28:36.529 EET -LOG: Cleaning up thread >>>> state >>>> LOCATION: GTM_ThreadCleanup, >>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_thread.c:265 >>>> >>>> >>>> If I can solve this I can proceed the installation step by step as per >>>> this >>>> guide: >>>> >>>> http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf >>>> >>>> Does the above error means that I will have to set all services in the >>>> cluster? Are there any other dependencies on the GTM? >>>> >>>> >>>> On 03/03/2013 05:21 PM, Nikhil Sontakke wrote: >>>>> >>>>> Hi Theodotos, >>>>> >>>>> What does >>>>> >>>>> "ls /var/lib/postgres-xc/pgxc" show on your system? >>>>> >>>>> Ensure that compile the pgxc sources with >>>>> --prefix=/var/lib/postgres-xc/pgxc and don't forget to "make install" >>>>> >>>>> Regards, >>>>> Nikhils >>>>> >>>>> On Sun, Mar 3, 2013 at 7:18 PM, Theodotos Andreou <th...@ub...> >>>>> wrote: >>>>>> >>>>>> Hi Koichi, me again, >>>>>> >>>>>> I have build postgres-xc under $HOME/pgxc (In fact >>>>>> /var/lib/postgres-xc/pgxc >>>>>> >>>>>> This is how the directory looks before running deploy: >>>>>> >>>>>> postgres-xc@pgxc-ctl:~$ ls pgxc >>>>>> bin include lib nodes pgxcConf pgxc_ctl_log share >>>>>> >>>>>> When I run deploy I get: >>>>>> >>>>>> http://pastebin.com/nhgMrV8u >>>>>> >>>>>> tar fails to find bin, include etc again. >>>>>> >>>>>> Checking the contents of the directory I get: >>>>>> >>>>>> postgres-xc@pgxc-ctl:~$ ls pgxc >>>>>> nodes pgxcConf pgxc_ctl_log >>>>>> >>>>>> Is this a bug in the script or is it me not following the correct >>>>>> procedure? Please advice >>>>>> >>>>>> On 03/03/2013 03:14 PM, Theodotos Andreou wrote: >>>>>>> >>>>>>> OK I think I got it. This is not dowloading/building the binaries. >>>>>>> You >>>>>>> have to do that yourself! And then deploy will send it to the nodes. >>>>>>> Right? >>>>>>> >>>>>>> On 03/03/2013 08:34 AM, Theodotos Andreou wrote: >>>>>>>> >>>>>>>> Hi Koichi, >>>>>>>> >>>>>>>> So I have setup 4 coord/datanodes (node06 - 09) and two gtm nodes >>>>>>>> (node >>>>>>>> 12 - 13) as the default configuration of pgxc_ctl. I also setup a >>>>>>>> separate machine to run pgxc_ctl on. The only thing I change was the >>>>>>>> postgres user from koichi to postgres-xc. The postgres-xc user >>>>>>>> exists >>>>>>>> on >>>>>>>> all nodes and the control machine and there is passwordless >>>>>>>> configuration on all the nodes from the control machine. 
>>>>>>>> >>>>>>>> This is my config as shown on "xcshow config" >>>>>>>> >>>>>>>> http://pastebin.com/Hiz5bEzw >>>>>>>> >>>>>>>> >>>>>>>> When I run deploy all this is what I get: >>>>>>>> >>>>>>>> postgres-xc@pgxc-ctl:~$ pgxc_ctl deploy all >>>>>>>> tar: bin: Cannot stat: No such file or directory >>>>>>>> tar: include: Cannot stat: No such file or directory >>>>>>>> tar: lib: Cannot stat: No such file or directory >>>>>>>> tar: share: Cannot stat: No such file or directory >>>>>>>> tar: Exiting with failure status due to previous errors >>>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>>> >>>>>>>> Am I supposed to run "deploy all" as the postgres-xc user or as >>>>>>>> root? I >>>>>>>> tried as root but I get different errors >>>>>>>> >>>>>>>> The ~/bin directory does in fact exist with the pgxc_ctl binary in >>>>>>>> exe >>>>>>>> right and it is owned by the postgres-xc user and group. >>>>>>>> >>>>>>>> Any ideas? >>>>>>>> >>>>>>>> On 03/02/2013 07:03 PM, Theodotos Andreou wrote: >>>>>>>>> >>>>>>>>> Thanks for the tip. I' ll try that and be back with more feedback >>>>>>>>> >>>>>>>>> On 03/02/2013 03:56 AM, Koichi Suzuki wrote: >>>>>>>>>> >>>>>>>>>> Hello, >>>>>>>>>> >>>>>>>>>> THanks a lot for the mail. A log as I see, there*s no proble in >>>>>>>>>> gtm.conf file. I looked into my gtm.log and did see any FATAL or >>>>>>>>>> ERROR messages in it. >>>>>>>>>> >>>>>>>>>> Could you try pgxc_ctl to configure your cluster? It will be >>>>>>>>>> found >>>>>>>>>> in https://github.com/koichi-szk/PGXC-Tools/tree/master/pgxc_ctl >>>>>>>>>> >>>>>>>>>> This is bash script so you can find what to do for XC cluster >>>>>>>>>> operation. Attached is my gtm.conf generated by pgxc_ctl. >>>>>>>>>> >>>>>>>>>> I hope this helps. >>>>>>>>>> >>>>>>>>>> Regards; >>>>>>>>>> ---------- >>>>>>>>>> Koichi Suzuki >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> 2013/3/2 Theodotos Andreou <th...@ub...>: >>>>>>>>>>> >>>>>>>>>>> Hello to all >>>>>>>>>>> >>>>>>>>>>> I am trying to setup a HA postgres-xc cluster according to this >>>>>>>>>>> guide: >>>>>>>>>>> >>>>>>>>>>> http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf >>>>>>>>>>> >>>>>>>>>>> I am at the first step, configuring the GTM master (page 39). I >>>>>>>>>>> have >>>>>>>>>>> this configuration: >>>>>>>>>>> >>>>>>>>>>> nodename = 'mygtmnode01' >>>>>>>>>>> listen_addresses = '*' >>>>>>>>>>> port = 6666 >>>>>>>>>>> startup = ACT >>>>>>>>>>> >>>>>>>>>>> I run this command as postgres-xc: >>>>>>>>>>> $ gtm_ctl -Z gtm -D /var/lib/postgres-xc/GTM start >>>>>>>>>>> >>>>>>>>>>> The server start apparently at port 6666: >>>>>>>>>>> >>>>>>>>>>> # netstat -lnptu | grep gtm >>>>>>>>>>> tcp 0 0 0.0.0.0:6666 0.0.0.0:* LISTEN >>>>>>>>>>> 2408/gtm >>>>>>>>>>> tcp6 0 0 :::6666 :::* LISTEN >>>>>>>>>>> 2408/gtm >>>>>>>>>>> >>>>>>>>>>> But checking the logs I get repeatedly the following messages: >>>>>>>>>>> >>>>>>>>>>> 1:139871852988224:2013-03-01 19:28:36.528 EET -LOG: Any GTM >>>>>>>>>>> standby >>>>>>>>>>> node not found in registered node(s). 
>>>>>>>>>>> LOCATION: gtm_standby_connect_to_standby_int, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_standby.c:378 >>>>>>>>>>> 1:139871844607744:2013-03-01 19:28:36.528 EET -FATAL: Expecting >>>>>>>>>>> a >>>>>>>>>>> startup message, but received � >>>>>>>>>>> LOCATION: GTM_ThreadMain, >>>>>>>>>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:985 >>>>>>>>>>> 1:139871844607744:2013-03-01 19:28:36.529 EET -LOG: Cleaning up >>>>>>>>>>> thread >>>>>>>>>>> state >>>>>>>>>>> LOCATION: GTM_ThreadCleanup, >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_thread.c:265 >>>>>>>>>>> >>>>>>>>>>> That FATAL error above is scaring me. Am I doing something wrong? >>>>>>>>>>> >>>>>>>>>>> I am running a 64 bit ubuntu (13.04) and I have install >>>>>>>>>>> postegres-xc >>>>>>>>>>> from the repositories. >>>>>>>>>>> >>>>>>>>>>> I haven't set up the GTM standy yet. >>>>>>>>>>> >>>>>>>>>>> Secondary question 1: >>>>>>>>>>> >>>>>>>>>>> In the guide it says: >>>>>>>>>>> nodename = 'gtmName' >>>>>>>>>>> for both master and standby. Does this imply that they should >>>>>>>>>>> have >>>>>>>>>>> the >>>>>>>>>>> same node name? Does it have to be the same as the hostname? >>>>>>>>>>> >>>>>>>>>>> Secondary question 2: >>>>>>>>>>> >>>>>>>>>>> In the GTM proxy procedure when the master fails it suggests to >>>>>>>>>>> reconfigure the proxy to the new master (ex standby). Can we just >>>>>>>>>>> switch >>>>>>>>>>> the IP from master to slave using heartbeat or keepalived and >>>>>>>>>>> avoid >>>>>>>>>>> this >>>>>>>>>>> step? >>>>>>>>>>> >>>>>>>>>>> You would probably have figure it out already that my postgres-xc >>>>>>>>>>> status >>>>>>>>>>> is "newbie" :) >>>>>>>>>>> >>>>>>>>>>> Thanks >>>>>>>>>>> >>>>>>>>>>> Theo >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>>> Everyone hates slow websites. So do we. >>>>>>>>>>> Make your web apps faster with AppDynamics >>>>>>>>>>> Download AppDynamics Lite for free today: >>>>>>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Postgres-xc-general mailing list >>>>>>>>>>> Pos...@li... >>>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>> Everyone hates slow websites. So do we. >>>>>>>>> Make your web apps faster with AppDynamics >>>>>>>>> Download AppDynamics Lite for free today: >>>>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>>>> _______________________________________________ >>>>>>>>> Postgres-xc-general mailing list >>>>>>>>> Pos...@li... >>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> ------------------------------------------------------------------------------ >>>>>>>> Everyone hates slow websites. So do we. >>>>>>>> Make your web apps faster with AppDynamics >>>>>>>> Download AppDynamics Lite for free today: >>>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>>> _______________________________________________ >>>>>>>> Postgres-xc-general mailing list >>>>>>>> Pos...@li... 
>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>>> >>>>>>> >>>>>>> >>>>>>> ------------------------------------------------------------------------------ >>>>>>> Everyone hates slow websites. So do we. >>>>>>> Make your web apps faster with AppDynamics >>>>>>> Download AppDynamics Lite for free today: >>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>> _______________________________________________ >>>>>>> Postgres-xc-general mailing list >>>>>>> Pos...@li... >>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> Everyone hates slow websites. So do we. >>>>>> Make your web apps faster with AppDynamics >>>>>> Download AppDynamics Lite for free today: >>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>> _______________________________________________ >>>>>> Postgres-xc-general mailing list >>>>>> Pos...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> >>>>> >>>>> >>> >>> >>> -- >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> Postgres-XC Support and Service > > |
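For the deploy failure discussed in this thread, the order pgxc_ctl expects is: build and install the XC binaries into the configured install directory on the management node first, and only then run deploy, which tars up bin/include/lib/share and copies them to the other servers. A sketch following the paths used above (the source directory is a placeholder):

$ cd ~/postgres-xc-1.0.2
$ ./configure --prefix=/var/lib/postgres-xc/pgxc
$ make && make install
$ pgxc_ctl deploy all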
|
From: Theodotos A. <th...@ub...> - 2013-03-06 10:05:12
|
Guys just for the record I have found the solution on my original question. It seems that the GTM directory using the stock postgres-xc on Ubuntu was pre-initialised. After I created a new GTM directory and initialise it my self the error went away. This is from the logs: 1:139942913603392:2013-03-06 11:58:07.168 EET -LOG: Starting GTM server at (*:6666) -- control file /var/lib/postgres-xc/GTM-master/gtm.control LOCATION: main, /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:559 1:139942913603392:2013-03-06 11:58:07.168 EET -LOG: Restoring last GXID to 10000 LOCATION: GTM_RestoreTxnInfo, /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_txn.c:2599 1:139942913603392:2013-03-06 11:58:07.168 EET -LOG: Started to run as GTM-Active. LOCATION: main, /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:631 Thank you both for your time On 03/04/2013 06:34 PM, Koichi Suzuki wrote: > Removing the target is just in case. For example, pg_basebackup > requires the target directory empty. It is better to copy only the > "needed" ones. It is good thing to clear the resources to avoid > conflict with other materials. If you'd like to eploy non-standard > binaries, say, contrib, you can install them locally. It will be in > your $pgxcInstallDir directories, which automatically goes to all the > target servers with deploy command. > > Regards; > ---------- > Koichi Suzuki > > > 2013/3/4 Nikhil Sontakke <ni...@st...>: >>> postgres-xc@pgxc-ctl:~$ ls pgxc >>> bin include lib nodes pgxcConf pgxc_ctl_log share >>> >>> >>> After deployment the above directories were apparently deleted: >>> >>> >>> postgres-xc@pgxc-ctl:~$ ls pgxc >>> nodes pgxcConf pgxc_ctl_log >>> >>> >> Ok, it seems this node is part of your deploy targets as well! >> >> "node13 node12 node06 node07 node08 node09" >> >> So that's why the deploy which first clears up existing directories >> removes them as well. >> >> Think of this node where you compiled the sources as the management >> node and use it to install binaries on other nodes. Obviously since >> the script is basically a bash script you should be able to modify it >> to check if this management node is also part of the deploy list as >> well (and submit it back as a patch) and not remove the directories in >> that case. >> >> >>> Do you by any chance have the answer to my original question? :) : >>> >> The message comes in from a node to register with the GTM. Are there >> any coordinator/datanode nodes already running when you start GTM? I >> would suggest that you get a clean cluster up first and then we can >> investigate further. >> >> Regards, >> Nikhils >> >>> Why when I use gtm from Ubuntu repos (gtm_ctl -Z gtm -D >>> /var/lib/postgres-xc/GTM start ) I get this in the logs: >>> >>> >>> 1:139871852988224:2013-03-01 19:28:36.528 EET -LOG: Any GTM standby >>> node not found in registered node(s). 
>>> LOCATION: gtm_standby_connect_to_standby_int, >>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_standby.c:378 >>> 1:139871844607744:2013-03-01 19:28:36.528 EET -FATAL: Expecting a >>> startup message, but received � >>> LOCATION: GTM_ThreadMain, >>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:985 >>> 1:139871844607744:2013-03-01 19:28:36.529 EET -LOG: Cleaning up thread >>> state >>> LOCATION: GTM_ThreadCleanup, >>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_thread.c:265 >>> >>> >>> If I can solve this I can proceed the installation step by step as per this >>> guide: >>> >>> http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf >>> >>> Does the above error means that I will have to set all services in the >>> cluster? Are there any other dependencies on the GTM? >>> >>> >>> On 03/03/2013 05:21 PM, Nikhil Sontakke wrote: >>>> Hi Theodotos, >>>> >>>> What does >>>> >>>> "ls /var/lib/postgres-xc/pgxc" show on your system? >>>> >>>> Ensure that compile the pgxc sources with >>>> --prefix=/var/lib/postgres-xc/pgxc and don't forget to "make install" >>>> >>>> Regards, >>>> Nikhils >>>> >>>> On Sun, Mar 3, 2013 at 7:18 PM, Theodotos Andreou <th...@ub...> >>>> wrote: >>>>> Hi Koichi, me again, >>>>> >>>>> I have build postgres-xc under $HOME/pgxc (In fact >>>>> /var/lib/postgres-xc/pgxc >>>>> >>>>> This is how the directory looks before running deploy: >>>>> >>>>> postgres-xc@pgxc-ctl:~$ ls pgxc >>>>> bin include lib nodes pgxcConf pgxc_ctl_log share >>>>> >>>>> When I run deploy I get: >>>>> >>>>> http://pastebin.com/nhgMrV8u >>>>> >>>>> tar fails to find bin, include etc again. >>>>> >>>>> Checking the contents of the directory I get: >>>>> >>>>> postgres-xc@pgxc-ctl:~$ ls pgxc >>>>> nodes pgxcConf pgxc_ctl_log >>>>> >>>>> Is this a bug in the script or is it me not following the correct >>>>> procedure? Please advice >>>>> >>>>> On 03/03/2013 03:14 PM, Theodotos Andreou wrote: >>>>>> OK I think I got it. This is not dowloading/building the binaries. You >>>>>> have to do that yourself! And then deploy will send it to the nodes. >>>>>> Right? >>>>>> >>>>>> On 03/03/2013 08:34 AM, Theodotos Andreou wrote: >>>>>>> Hi Koichi, >>>>>>> >>>>>>> So I have setup 4 coord/datanodes (node06 - 09) and two gtm nodes (node >>>>>>> 12 - 13) as the default configuration of pgxc_ctl. I also setup a >>>>>>> separate machine to run pgxc_ctl on. The only thing I change was the >>>>>>> postgres user from koichi to postgres-xc. The postgres-xc user exists >>>>>>> on >>>>>>> all nodes and the control machine and there is passwordless >>>>>>> configuration on all the nodes from the control machine. 
>>>>>>> >>>>>>> This is my config as shown on "xcshow config" >>>>>>> >>>>>>> http://pastebin.com/Hiz5bEzw >>>>>>> >>>>>>> >>>>>>> When I run deploy all this is what I get: >>>>>>> >>>>>>> postgres-xc@pgxc-ctl:~$ pgxc_ctl deploy all >>>>>>> tar: bin: Cannot stat: No such file or directory >>>>>>> tar: include: Cannot stat: No such file or directory >>>>>>> tar: lib: Cannot stat: No such file or directory >>>>>>> tar: share: Cannot stat: No such file or directory >>>>>>> tar: Exiting with failure status due to previous errors >>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>> wk.tgz 100% 45 0.0KB/s 00:00 >>>>>>> >>>>>>> Am I supposed to run "deploy all" as the postgres-xc user or as root? I >>>>>>> tried as root but I get different errors >>>>>>> >>>>>>> The ~/bin directory does in fact exist with the pgxc_ctl binary in exe >>>>>>> right and it is owned by the postgres-xc user and group. >>>>>>> >>>>>>> Any ideas? >>>>>>> >>>>>>> On 03/02/2013 07:03 PM, Theodotos Andreou wrote: >>>>>>>> Thanks for the tip. I' ll try that and be back with more feedback >>>>>>>> >>>>>>>> On 03/02/2013 03:56 AM, Koichi Suzuki wrote: >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> THanks a lot for the mail. A log as I see, there*s no proble in >>>>>>>>> gtm.conf file. I looked into my gtm.log and did see any FATAL or >>>>>>>>> ERROR messages in it. >>>>>>>>> >>>>>>>>> Could you try pgxc_ctl to configure your cluster? It will be found >>>>>>>>> in https://github.com/koichi-szk/PGXC-Tools/tree/master/pgxc_ctl >>>>>>>>> >>>>>>>>> This is bash script so you can find what to do for XC cluster >>>>>>>>> operation. Attached is my gtm.conf generated by pgxc_ctl. >>>>>>>>> >>>>>>>>> I hope this helps. >>>>>>>>> >>>>>>>>> Regards; >>>>>>>>> ---------- >>>>>>>>> Koichi Suzuki >>>>>>>>> >>>>>>>>> >>>>>>>>> 2013/3/2 Theodotos Andreou <th...@ub...>: >>>>>>>>>> Hello to all >>>>>>>>>> >>>>>>>>>> I am trying to setup a HA postgres-xc cluster according to this >>>>>>>>>> guide: >>>>>>>>>> >>>>>>>>>> http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf >>>>>>>>>> >>>>>>>>>> I am at the first step, configuring the GTM master (page 39). I have >>>>>>>>>> this configuration: >>>>>>>>>> >>>>>>>>>> nodename = 'mygtmnode01' >>>>>>>>>> listen_addresses = '*' >>>>>>>>>> port = 6666 >>>>>>>>>> startup = ACT >>>>>>>>>> >>>>>>>>>> I run this command as postgres-xc: >>>>>>>>>> $ gtm_ctl -Z gtm -D /var/lib/postgres-xc/GTM start >>>>>>>>>> >>>>>>>>>> The server start apparently at port 6666: >>>>>>>>>> >>>>>>>>>> # netstat -lnptu | grep gtm >>>>>>>>>> tcp 0 0 0.0.0.0:6666 0.0.0.0:* LISTEN >>>>>>>>>> 2408/gtm >>>>>>>>>> tcp6 0 0 :::6666 :::* LISTEN >>>>>>>>>> 2408/gtm >>>>>>>>>> >>>>>>>>>> But checking the logs I get repeatedly the following messages: >>>>>>>>>> >>>>>>>>>> 1:139871852988224:2013-03-01 19:28:36.528 EET -LOG: Any GTM standby >>>>>>>>>> node not found in registered node(s). 
>>>>>>>>>> LOCATION: gtm_standby_connect_to_standby_int, >>>>>>>>>> >>>>>>>>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_standby.c:378 >>>>>>>>>> 1:139871844607744:2013-03-01 19:28:36.528 EET -FATAL: Expecting a >>>>>>>>>> startup message, but received � >>>>>>>>>> LOCATION: GTM_ThreadMain, >>>>>>>>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:985 >>>>>>>>>> 1:139871844607744:2013-03-01 19:28:36.529 EET -LOG: Cleaning up >>>>>>>>>> thread >>>>>>>>>> state >>>>>>>>>> LOCATION: GTM_ThreadCleanup, >>>>>>>>>> >>>>>>>>>> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_thread.c:265 >>>>>>>>>> >>>>>>>>>> That FATAL error above is scaring me. Am I doing something wrong? >>>>>>>>>> >>>>>>>>>> I am running a 64 bit ubuntu (13.04) and I have install postegres-xc >>>>>>>>>> from the repositories. >>>>>>>>>> >>>>>>>>>> I haven't set up the GTM standy yet. >>>>>>>>>> >>>>>>>>>> Secondary question 1: >>>>>>>>>> >>>>>>>>>> In the guide it says: >>>>>>>>>> nodename = 'gtmName' >>>>>>>>>> for both master and standby. Does this imply that they should have >>>>>>>>>> the >>>>>>>>>> same node name? Does it have to be the same as the hostname? >>>>>>>>>> >>>>>>>>>> Secondary question 2: >>>>>>>>>> >>>>>>>>>> In the GTM proxy procedure when the master fails it suggests to >>>>>>>>>> reconfigure the proxy to the new master (ex standby). Can we just >>>>>>>>>> switch >>>>>>>>>> the IP from master to slave using heartbeat or keepalived and avoid >>>>>>>>>> this >>>>>>>>>> step? >>>>>>>>>> >>>>>>>>>> You would probably have figure it out already that my postgres-xc >>>>>>>>>> status >>>>>>>>>> is "newbie" :) >>>>>>>>>> >>>>>>>>>> Thanks >>>>>>>>>> >>>>>>>>>> Theo >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> ------------------------------------------------------------------------------ >>>>>>>>>> Everyone hates slow websites. So do we. >>>>>>>>>> Make your web apps faster with AppDynamics >>>>>>>>>> Download AppDynamics Lite for free today: >>>>>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>>>>> _______________________________________________ >>>>>>>>>> Postgres-xc-general mailing list >>>>>>>>>> Pos...@li... >>>>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>>>> >>>>>>>> ------------------------------------------------------------------------------ >>>>>>>> Everyone hates slow websites. So do we. >>>>>>>> Make your web apps faster with AppDynamics >>>>>>>> Download AppDynamics Lite for free today: >>>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>>> _______________________________________________ >>>>>>>> Postgres-xc-general mailing list >>>>>>>> Pos...@li... >>>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>>> >>>>>>> ------------------------------------------------------------------------------ >>>>>>> Everyone hates slow websites. So do we. >>>>>>> Make your web apps faster with AppDynamics >>>>>>> Download AppDynamics Lite for free today: >>>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>>> _______________________________________________ >>>>>>> Postgres-xc-general mailing list >>>>>>> Pos...@li... >>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> Everyone hates slow websites. So do we. 
>>>>>> Make your web apps faster with AppDynamics >>>>>> Download AppDynamics Lite for free today: >>>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>>> _______________________________________________ >>>>>> Postgres-xc-general mailing list >>>>>> Pos...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> Everyone hates slow websites. So do we. >>>>> Make your web apps faster with AppDynamics >>>>> Download AppDynamics Lite for free today: >>>>> http://p.sf.net/sfu/appdyn_d2d_feb >>>>> _______________________________________________ >>>>> Postgres-xc-general mailing list >>>>> Pos...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>>> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Service |
|
From: Koichi S. <koi...@gm...> - 2013-03-04 16:34:10
|
Removing the target directory is done just in case. For example, pg_basebackup requires the target directory to be empty, and it is better to copy only the files that are actually needed; clearing the target also avoids conflicts with leftover material. If you'd like to deploy non-standard binaries, say contrib modules, you can install them locally: they will end up under your $pgxcInstallDir directories, which the deploy command automatically copies to all the target servers.

Regards;
----------
Koichi Suzuki

2013/3/4 Nikhil Sontakke <ni...@st...>:
> Ok, it seems this node is part of your deploy targets as well!
>
> "node13 node12 node06 node07 node08 node09"
>
> So that's why the deploy which first clears up existing directories
> removes them as well.
> [...]
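To make the mechanics concrete, here is a rough sketch of what such a deploy-style copy amounts to. This is not pgxc_ctl's actual code; the tarball name is a placeholder and the target list is just the node names from this thread:

#!/bin/bash
# Sketch only: pack the locally built $pgxcInstallDir and unpack it on each target server.
pgxcInstallDir=$HOME/pgxc
targets="node13 node12 node06 node07 node08 node09"
cd "$pgxcInstallDir" || exit 1
tar czf /tmp/pgxc-binaries.tgz bin include lib share        # fails loudly if the local build is missing
for host in $targets; do
    ssh "$host" "rm -rf $pgxcInstallDir && mkdir -p $pgxcInstallDir"   # clear the target first, as described above
    scp /tmp/pgxc-binaries.tgz "$host:/tmp/"
    ssh "$host" "tar xzf /tmp/pgxc-binaries.tgz -C $pgxcInstallDir"
done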
|
From: Theodotos A. <th...@ub...> - 2013-03-04 15:19:49
|
Actually, when I tried the manual method I had set up only the GTM; no other nodes were present. I will try setting up the whole cluster according to that HA guide and let you know.

Regarding the pgxc_ctl script, I think I have followed all the guidelines you guys suggested: passwordless ssh using the postgres-xc user, keeping the deploy server out of the cluster, and setting the correct directory, but it still fails. I will revert to the manual method, and if that does not work as expected we can revisit pgxc_ctl.

Thanks for your time guys.

On 03/04/2013 08:55 AM, Nikhil Sontakke wrote:
> Ok, it seems this node is part of your deploy targets as well!
> [...]
|
From: Nikhil S. <ni...@st...> - 2013-03-04 06:55:57
|
>
> postgres-xc@pgxc-ctl:~$ ls pgxc
> bin include lib nodes pgxcConf pgxc_ctl_log share
>
>
> After deployment the above directories were apparently deleted:
>
>
> postgres-xc@pgxc-ctl:~$ ls pgxc
> nodes pgxcConf pgxc_ctl_log
>
>
Ok, it seems this node is part of your deploy targets as well!
"node13 node12 node06 node07 node08 node09"
So that's why the deploy step, which first clears up the existing
directories, removes them as well.
Think of the node where you compiled the sources as the management
node and use it to install the binaries on the other nodes. Since
the script is basically a bash script, you should be able to modify it
to check whether this management node is also part of the deploy list
and, in that case, not remove the directories (and submit the change
back as a patch).
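A guard along those lines might look like the following. $allServers is a hypothetical variable standing in for whatever list the script actually loops over, so treat this as a sketch of the idea rather than a ready patch:

# Hypothetical guard for the deploy loop in pgxc_ctl (bash):
# do not wipe the install directory when the target is this management node.
thisHost=$(hostname)
for server in $allServers; do
    if [ "$server" = "$thisHost" ]; then
        echo "skipping cleanup on management node $server"
        continue
    fi
    ssh "$server" "cd $pgxcInstallDir && rm -rf bin include lib share"
done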
> Do you by any chance have the answer to my original question? :) :
>
The message comes in from a node to register with the GTM. Are there
any coordinator/datanode nodes already running when you start GTM? I
would suggest that you get a clean cluster up first and then we can
investigate further.
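For what it's worth, a quick way to look for leftovers before starting a fresh GTM is just the standard tools; nothing XC-specific is assumed here, and 6666 is the GTM port from the configuration earlier in the thread:

ps aux | grep -E 'gtm|postgres' | grep -v grep      # any node or GTM processes still alive?
netstat -tnp 2>/dev/null | grep ':6666'             # anything already connected to the GTM port?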
Regards,
Nikhils
> Why when I use gtm from Ubuntu repos (gtm_ctl -Z gtm -D
> /var/lib/postgres-xc/GTM start ) I get this in the logs:
>
>
> 1:139871852988224:2013-03-01 19:28:36.528 EET -LOG: Any GTM standby
> node not found in registered node(s).
> LOCATION: gtm_standby_connect_to_standby_int,
> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_standby.c:378
> 1:139871844607744:2013-03-01 19:28:36.528 EET -FATAL: Expecting a
> startup message, but received �
> LOCATION: GTM_ThreadMain,
> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:985
> 1:139871844607744:2013-03-01 19:28:36.529 EET -LOG: Cleaning up thread
> state
> LOCATION: GTM_ThreadCleanup,
> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_thread.c:265
>
>
> If I can solve this I can proceed the installation step by step as per this
> guide:
>
> http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf
>
> Does the above error means that I will have to set all services in the
> cluster? Are there any other dependencies on the GTM?
>
>
--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
|
|
From: Koichi S. <koi...@gm...> - 2013-03-04 02:18:07
|
Andreou;
Here are a couple of tips on using pgxc_ctl; they will also be useful in
configuring XC. Because XC spreads across more than one server
machine, and because it is not good for root to own the database
resources, pgxc_ctl assumes you do not run it as "root". Also, it is highly
recommended to set up key-based ssh authentication between the servers.
Pgxc_ctl depends upon scp and ssh, so key-based authentication
eliminates the need to type your password again and again; in any case,
you cannot and should not use "root" across the servers via scp and ssh.
Setting up key-based authentication for every path is quite tedious.
I'm using the attached script (set_autologin) for this, which works at
least on Ubuntu and CentOS and saves some of the time.
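If you prefer plain OpenSSH tooling instead of that script, the usual pattern is the one below, run as the XC owner on the management machine; the node names are just the ones used in this thread:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa             # only if no key exists yet
for host in node06 node07 node08 node09 node12 node13; do
    ssh-copy-id "$host"                              # asks for the password once per host
done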
Pgxc_ctl requires two user names, $pgxcUser and $pgxcOwner. The
former is the Linux user who owns all the Postgres-XC resources. The
latter is the Postgres-XC database superuser name you're using. In my
sample configuration (which you can generate with the prepare command,
as described in manual.txt), both are "koichi". You can choose whatever
you like, but please keep the previous paragraph in mind.
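As an illustration only, for the setup discussed in this thread those two variables (plus the install directory mentioned below) would look something like this in the pgxc_ctl configuration; the values are examples for this thread's setup, not defaults:

pgxcUser=postgres-xc                        # Linux account owning every XC resource
pgxcOwner=postgres-xc                       # database superuser name used inside the cluster
pgxcInstallDir=/var/lib/postgres-xc/pgxc    # where the locally built binaries live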
Before you "deploy" postgreSQL binaries, you must have them in the
local server you're running pgxc_ctl. This is typically the
directory name you specify with --prefix option in ./configure
command. pgxc_ctl assumes all the binaries, libraries, included
files and all the templatefiles in the same directory. If you don't
specify --prefix option in ./configure command, install directory will
be /usr/local/pgsql as specified by vanilla PostgreSQL binary build.
You should specify this local installation directory as the value of
$pgxcInstallDir. This value is used both in your local server
running pgxc_ctl and other servers you're installing postgres-XC
binaries.
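A typical local build matching this advice, using the prefix from this thread (the source version is simply the one being discussed here):

cd postgres-xc-1.0.2
./configure --prefix=/var/lib/postgres-xc/pgxc
make
make install      # leaves bin/ lib/ include/ share/ under the prefix, ready to deploy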
Please note that all the materials will be installed in
$pgxcInstallDir/{bin|lib|include|share}. You should add these
directories to your $PATH and $LD_LIBRARY_PATH environment variables
on all the servers running Postgres-XC nodes
(gtm/gtm_proxy/coordinator/datanode).
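For example, appended to ~/.bashrc of the XC owner on every server, using the same example prefix as above:

export PATH=/var/lib/postgres-xc/pgxc/bin:$PATH
export LD_LIBRARY_PATH=/var/lib/postgres-xc/pgxc/lib:$LD_LIBRARY_PATH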
Then you can put the rest of your XC configuration into the pgxc_ctl
configuration file. Yes, you can edit the pgxc_ctl file directly, but that
can easily mess things up. Instead, please use the "prepare config"
command to create your configuration file; it is safer to edit
that.
The default full path of the configuration file is
$pgxcInstallDir/pgxcConf. Edit pgxc_ctl itself if you want to change
this default.
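Putting the commands mentioned in this thread together, the intended workflow is roughly the following; the exact invocation may differ between pgxc_ctl versions, so treat it as an outline rather than exact syntax:

pgxc_ctl prepare config    # generate a configuration file template to edit
pgxc_ctl xcshow config     # review the configuration pgxc_ctl will act on
pgxc_ctl deploy all        # push the locally built binaries to every configured server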
I do hope these help. Please do feel free to write to me if you have
any other issues.
Best Regards;
----------
Koichi Suzuki
2013/3/4 Theodotos Andreou <th...@ub...>:
> After I build the system it was showing this:
>
>
> postgres-xc@pgxc-ctl:~$ ls pgxc
> bin include lib nodes pgxcConf pgxc_ctl_log share
>
>
> After deployment the above directories were apparently deleted:
>
>
> postgres-xc@pgxc-ctl:~$ ls pgxc
> nodes pgxcConf pgxc_ctl_log
>
>
> postgres-xc was compiled with "--prefix=/var/lib/postgres-xc/pgxc" and "make
> install" finished successfully.
>
> Do you by any chance have the answer to my original question? :) :
>
> Why when I use gtm from Ubuntu repos (gtm_ctl -Z gtm -D
> /var/lib/postgres-xc/GTM start ) I get this in the logs:
>
>
> 1:139871852988224:2013-03-01 19:28:36.528 EET -LOG: Any GTM standby
> node not found in registered node(s).
> LOCATION: gtm_standby_connect_to_standby_int,
> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_standby.c:378
> 1:139871844607744:2013-03-01 19:28:36.528 EET -FATAL: Expecting a
> startup message, but received �
> LOCATION: GTM_ThreadMain,
> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:985
> 1:139871844607744:2013-03-01 19:28:36.529 EET -LOG: Cleaning up thread
> state
> LOCATION: GTM_ThreadCleanup,
> /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_thread.c:265
>
>
> If I can solve this I can proceed the installation step by step as per this
> guide:
>
> http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf
>
> Does the above error means that I will have to set all services in the
> cluster? Are there any other dependencies on the GTM?
>
>
|
|
From: Theodotos A. <th...@ub...> - 2013-03-03 16:13:49
|
After I built the system it was showing this:

postgres-xc@pgxc-ctl:~$ ls pgxc
bin include lib nodes pgxcConf pgxc_ctl_log share

After deployment the above directories were apparently deleted:

postgres-xc@pgxc-ctl:~$ ls pgxc
nodes pgxcConf pgxc_ctl_log

postgres-xc was compiled with "--prefix=/var/lib/postgres-xc/pgxc" and "make install" finished successfully.

Do you by any chance have the answer to my original question? :)

Why, when I use gtm from the Ubuntu repos (gtm_ctl -Z gtm -D /var/lib/postgres-xc/GTM start), do I get this in the logs:

1:139871852988224:2013-03-01 19:28:36.528 EET -LOG: Any GTM standby node not found in registered node(s).
LOCATION: gtm_standby_connect_to_standby_int, /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_standby.c:378
1:139871844607744:2013-03-01 19:28:36.528 EET -FATAL: Expecting a startup message, but received �
LOCATION: GTM_ThreadMain, /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/main.c:985
1:139871844607744:2013-03-01 19:28:36.529 EET -LOG: Cleaning up thread state
LOCATION: GTM_ThreadCleanup, /build/buildd/postgres-xc-1.0.2/build/../src/gtm/main/gtm_thread.c:265

If I can solve this I can proceed with the installation step by step as per this guide:

http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf

Does the above error mean that I will have to set up all services in the cluster? Are there any other dependencies on the GTM?

On 03/03/2013 05:21 PM, Nikhil Sontakke wrote:
> Hi Theodotos,
>
> What does
>
> "ls /var/lib/postgres-xc/pgxc" show on your system?
>
> Ensure that compile the pgxc sources with
> --prefix=/var/lib/postgres-xc/pgxc and don't forget to "make install"
>
> Regards,
> Nikhils
> [...]
|