From: Chris B. <cb...@in...> - 2013-03-18 20:47:13
Hi all,

I just read through: wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf and I wanted to know how PG-XC solves the problem of split brain.

For example, let's say there is a pair:

Box 1: datanode1-master, datanode2-slave, coordinator master, gtm master
Box 2: datanode1-slave, datanode2-master, coordinator slave, gtm slave

and communication is severed between the two boxes.

Another example:

Box 1: datanode1-master, datanode2-slave, coordinator master, gtm proxy
Box 2: datanode1-slave, datanode2-master, coordinator slave, gtm proxy
Box 3: gtm (no slave exists for simplicity)

and communication is severed between box 1 and 2, but both can still communicate with box 3. (Or another network outage scenario - there are obviously several.)

Chris...
From: Koichi S. <koi...@gm...> - 2013-03-18 08:48:00
Circular configuration will not make any difference in the recovery. Good thing is this configuration can apply to odd number of datanodes/coordinators. Pair configuration may help to build failover system with Pacemaker/corosync because it have to take care of the resources in just two servers. However, I'm not sure if such configuration of Pacemaker/corosync can be applied to GTM/GTM-slave too. Regards; ---------- Koichi Suzuki 2013/3/18 Theodotos Andreou <th...@ub...>: > Thanks. It gets more clear now. > > On each node I have a master and a standby coordinator in a pair > configuration, just like the example in your slides. Can I use the catalog > of the standby server (which is a copy of the master coord on the active > node) to restore the failed master coord? In this way I will not have to > take the coordinator offline. > > Also would it make recovery easier if I use a circular configuration with 4 > nodes? > > Regards > > Theo > > > On 03/18/2013 07:15 AM, Koichi Suzuki wrote: >> >> Yes, you're correct. Because coordinator is essentially identical, >> you can leave failed coordinators as is and use remaining ones. If >> the failed server is back, you may have to stop the whole cluster, >> copies one of the coordinator to the new (restored) server, change the >> node name etc., to failback the coordinator. If you keep running >> two, when the failed server is back, you can build new slave to the >> restored server and then failover to it. >> >> It is not reasonable to keep using the same port. Instead, (in my >> case) I assign different port to coordinators at different server. >> For example, if you have the server1 and server2 and run coordinator >> master and slave, I'd assign the port as follows: >> >> 5432: coord1, master at server1 and slave at server2, >> 5433: coord2, master at server2 and slave at server1. >> >> You don't have to worry about port number after the failover. >> >> We're now implementing on-line node addition/removal feature. This >> will make things much simpler. You can use only remaining >> coordinators and later add new one to the restored server. >> >> Regards; >> ---------- >> Koichi Suzuki >> >> >> 2013/3/18 Nikhil Sontakke <ni...@st...>: >>> >>> Hi Theodotos, >>> >>>> Now I have two master coordinators running on the same node at different >>>> ports (5432 and 5433). Does that mean I will have to reconfigure the >>>> load >>>> balancer to round robin the traffic on the working node, at two >>>> different >>>> ports or is this taken care by the master node on port 5432? >>>> >>> Why do you want to run two coordinators on one single node? I do not >>> see any benefits to that. Also to answer your question, the master >>> node on port 5432 does not automatically load balance between itself >>> and the other local coordinator. >>> >>> Regards, >>> Nikhils >>> >>>> Expect more stupid questions as I move along the guide! :) >>>> >>>> Thanks >>>> >>>> Theo >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Everyone hates slow websites. So do we. >>>> Make your web apps faster with AppDynamics >>>> Download AppDynamics Lite for free today: >>>> http://p.sf.net/sfu/appdyn_d2d_mar >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... 
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>> >>> >>> -- >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> Postgres-XC Support and Service >>> >>> >>> ------------------------------------------------------------------------------ >>> Everyone hates slow websites. So do we. >>> Make your web apps faster with AppDynamics >>> Download AppDynamics Lite for free today: >>> http://p.sf.net/sfu/appdyn_d2d_mar >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
From: Theodotos A. <th...@ub...> - 2013-03-18 05:45:00
Thanks. It gets more clear now. On each node I have a master and a standby coordinator in a pair configuration, just like the example in your slides. Can I use the catalog of the standby server (which is a copy of the master coord on the active node) to restore the failed master coord? In this way I will not have to take the coordinator offline. Also would it make recovery easier if I use a circular configuration with 4 nodes? Regards Theo On 03/18/2013 07:15 AM, Koichi Suzuki wrote: > Yes, you're correct. Because coordinator is essentially identical, > you can leave failed coordinators as is and use remaining ones. If > the failed server is back, you may have to stop the whole cluster, > copies one of the coordinator to the new (restored) server, change the > node name etc., to failback the coordinator. If you keep running > two, when the failed server is back, you can build new slave to the > restored server and then failover to it. > > It is not reasonable to keep using the same port. Instead, (in my > case) I assign different port to coordinators at different server. > For example, if you have the server1 and server2 and run coordinator > master and slave, I'd assign the port as follows: > > 5432: coord1, master at server1 and slave at server2, > 5433: coord2, master at server2 and slave at server1. > > You don't have to worry about port number after the failover. > > We're now implementing on-line node addition/removal feature. This > will make things much simpler. You can use only remaining > coordinators and later add new one to the restored server. > > Regards; > ---------- > Koichi Suzuki > > > 2013/3/18 Nikhil Sontakke <ni...@st...>: >> Hi Theodotos, >> >>> Now I have two master coordinators running on the same node at different >>> ports (5432 and 5433). Does that mean I will have to reconfigure the load >>> balancer to round robin the traffic on the working node, at two different >>> ports or is this taken care by the master node on port 5432? >>> >> Why do you want to run two coordinators on one single node? I do not >> see any benefits to that. Also to answer your question, the master >> node on port 5432 does not automatically load balance between itself >> and the other local coordinator. >> >> Regards, >> Nikhils >> >>> Expect more stupid questions as I move along the guide! :) >>> >>> Thanks >>> >>> Theo >>> >>> ------------------------------------------------------------------------------ >>> Everyone hates slow websites. So do we. >>> Make your web apps faster with AppDynamics >>> Download AppDynamics Lite for free today: >>> http://p.sf.net/sfu/appdyn_d2d_mar >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Service >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_mar >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Koichi S. <koi...@gm...> - 2013-03-18 05:39:06
Please see my previous reply on running the failed-over node at the same server. The following steps are not tested yet:

1. Stop the whole cluster.
2. Copy all the material of a remaining coordinator to the new coordinator.
3. Edit the new coordinator's postgresql.conf and give a new value for pgxc_node_name.
4. Edit the new coordinator's postgresql.conf to configure gtm/gtm_proxy and the pooler.
5. Edit other postgresql.conf settings if you feel it necessary.
6. Edit pg_hba.conf if it is different from the original.
7. Start the new coordinator.

I'm not sure if step 3 works fine. Now initdb initializes its pgxc_node catalog, and I'm afraid step 3 may not update the catalog automatically. In this case, you may have to update pgxc_node using CREATE/UPDATE/DROP NODE. Please understand it is not tested yet either.

So, at present, the safest way is to run two coordinators at the same server until the failed server is back and then switch to the new server.

Regards;
----------
Koichi Suzuki


2013/3/18 Theodotos Andreou <th...@ub...>:
> On 03/18/2013 07:07 AM, Nikhil Sontakke wrote:
>> Hi Theodotos,
>>
>>> Now I have two master coordinators running on the same node at different
>>> ports (5432 and 5433). Does that mean I will have to reconfigure the load
>>> balancer to round robin the traffic on the working node, at two different
>>> ports or is this taken care by the master node on port 5432?
>>>
>> Why do you want to run two coordinators on one single node? I do not
>> see any benefits to that. Also to answer your question, the master
>> node on port 5432 does not automatically load balance between itself
>> and the other local coordinator.
>>
>> Regards,
>> Nikhils
> Hi Nikhils,
>
> I am not sure if there is a benefit running two coordinators on a single
> node. I was just following koichi's HA guide for a "pair configuration".
> Maybe he can enlighten us. :)
>
> I was wondering, since all coordinators have a copy of the same catalog,
> is there a real need for a standby coordinator?
>
> Also I would like to know the steps I need to take to restore the failed
> coordinator because I am preparing automatic failover and restore scripts.
>
>>> Expect more stupid questions as I move along the guide! :)
>>>
>>> Thanks
>>>
>>> Theo
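If step 3 does not refresh the node catalog by itself, the catalog can be adjusted by hand once the copied coordinator is running. The following is a minimal sketch only, using the node-management DDL Koichi alludes to; the node names, hosts and ports are hypothetical placeholders, and this is as untested as the steps above:

    -- On the restored coordinator, fix its own entry if it still carries the
    -- name of the coordinator it was copied from (hypothetical names):
    ALTER NODE coord2 WITH (HOST = 'server2', PORT = 5432);
    -- Re-register any peer whose entry is missing or stale:
    CREATE NODE coord1 WITH (TYPE = 'coordinator', HOST = 'server1', PORT = 5432);
    -- Make the pooler pick up the new node information:
    SELECT pgxc_pool_reload();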
From: Theodotos A. <th...@ub...> - 2013-03-18 05:21:03
On 03/18/2013 07:07 AM, Nikhil Sontakke wrote: > Hi Theodotos, > >> Now I have two master coordinators running on the same node at different >> ports (5432 and 5433). Does that mean I will have to reconfigure the load >> balancer to round robin the traffic on the working node, at two different >> ports or is this taken care by the master node on port 5432? >> > Why do you want to run two coordinators on one single node? I do not > see any benefits to that. Also to answer your question, the master > node on port 5432 does not automatically load balance between itself > and the other local coordinator. > > Regards, > Nikhils Hi Nikhils, I am not sure if there is a benefit running two coordinators on a single node. I was just following koichi's HA guide for a "pair configuration". Maybe he can enlighten us. :) I was wondering, since all coordinators have a copy of the same catalog, is there a real need for a standby coordinator? Also I would like to know the steps I need to take to restore the failed coordinator because I am preparing automatic failover and restore scripts. > >> Expect more stupid questions as I move along the guide! :) >> >> Thanks >> >> Theo >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_mar >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > |
From: Koichi S. <koi...@gm...> - 2013-03-18 05:16:04
Yes, you're correct. Because coordinator is essentially identical, you can leave failed coordinators as is and use remaining ones. If the failed server is back, you may have to stop the whole cluster, copies one of the coordinator to the new (restored) server, change the node name etc., to failback the coordinator. If you keep running two, when the failed server is back, you can build new slave to the restored server and then failover to it. It is not reasonable to keep using the same port. Instead, (in my case) I assign different port to coordinators at different server. For example, if you have the server1 and server2 and run coordinator master and slave, I'd assign the port as follows: 5432: coord1, master at server1 and slave at server2, 5433: coord2, master at server2 and slave at server1. You don't have to worry about port number after the failover. We're now implementing on-line node addition/removal feature. This will make things much simpler. You can use only remaining coordinators and later add new one to the restored server. Regards; ---------- Koichi Suzuki 2013/3/18 Nikhil Sontakke <ni...@st...>: > Hi Theodotos, > >> Now I have two master coordinators running on the same node at different >> ports (5432 and 5433). Does that mean I will have to reconfigure the load >> balancer to round robin the traffic on the working node, at two different >> ports or is this taken care by the master node on port 5432? >> > > Why do you want to run two coordinators on one single node? I do not > see any benefits to that. Also to answer your question, the master > node on port 5432 does not automatically load balance between itself > and the other local coordinator. > > Regards, > Nikhils > >> Expect more stupid questions as I move along the guide! :) >> >> Thanks >> >> Theo >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_mar >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Service > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
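Expressed as node definitions, the port layout described above might look roughly like this on each coordinator. This is a sketch only, with server1/server2 and the node names as illustrative placeholders, not a tested configuration:

    -- On coord1 (master on server1, listening on 5432), its partner is:
    CREATE NODE coord2 WITH (TYPE = 'coordinator', HOST = 'server2', PORT = 5433);
    -- On coord2 (master on server2, listening on 5433), its partner is:
    CREATE NODE coord1 WITH (TYPE = 'coordinator', HOST = 'server1', PORT = 5432);
    -- Each slave reuses its master's port on the opposite server, which is why
    -- the port numbers do not need to change after a failover.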
From: Theodotos A. <th...@ub...> - 2013-03-18 05:12:10
"All the coordinators are essentially the same copy" right? But the failed coordinator is not the same copy as the others. What steps do I need to take in order to restore the failed coordinator? The guide says: "Failed coordinator can be restored offline ● Backup/restore ● Copy catalogue from a remaining coordinator" How is the above statement translated in commands? On 03/18/2013 04:25 AM, Koichi Suzuki wrote: > Because each coordinator is equivalent, both will bring the same > logical result (the same database contents). I've not tested which is > better (just use on e coordinator or both two) yet. > > Regards; > ---------- > Koichi Suzuki > > > 2013/3/17 Theodotos Andreou <th...@ub...>: >> Hi guys, >> >> I am using Koichi's HA guide to setup a two node pgxc cluster: >> >> wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf >> >> I am using a pair configuration according to page 36 >> >> I have two nodes (node1 and node2). The master coordinators run on port 5432 >> and the slave coordinators on port 5433. I have a TCP/IP loadbalancer (IPVS) >> that load balances (round robin) the traffic on both nodes on port 5432. >> >> When one of the nodes fail, the standby on the other is promoted to master. >> Now I have two master coordinators running on the same node at different >> ports (5432 and 5433). Does that mean I will have to reconfigure the load >> balancer to round robin the traffic on the working node, at two different >> ports or is this taken care by the master node on port 5432? >> >> Expect more stupid questions as I move along the guide! :) >> >> Thanks >> >> Theo >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_mar >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> |
From: Nikhil S. <ni...@st...> - 2013-03-18 05:08:20
Hi Theodotos, > Now I have two master coordinators running on the same node at different > ports (5432 and 5433). Does that mean I will have to reconfigure the load > balancer to round robin the traffic on the working node, at two different > ports or is this taken care by the master node on port 5432? > Why do you want to run two coordinators on one single node? I do not see any benefits to that. Also to answer your question, the master node on port 5432 does not automatically load balance between itself and the other local coordinator. Regards, Nikhils > Expect more stupid questions as I move along the guide! :) > > Thanks > > Theo > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Koichi S. <koi...@gm...> - 2013-03-18 02:25:54
Because each coordinator is equivalent, both will bring the same logical result (the same database contents). I've not tested which is better (just use on e coordinator or both two) yet. Regards; ---------- Koichi Suzuki 2013/3/17 Theodotos Andreou <th...@ub...>: > Hi guys, > > I am using Koichi's HA guide to setup a two node pgxc cluster: > > wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf > > I am using a pair configuration according to page 36 > > I have two nodes (node1 and node2). The master coordinators run on port 5432 > and the slave coordinators on port 5433. I have a TCP/IP loadbalancer (IPVS) > that load balances (round robin) the traffic on both nodes on port 5432. > > When one of the nodes fail, the standby on the other is promoted to master. > Now I have two master coordinators running on the same node at different > ports (5432 and 5433). Does that mean I will have to reconfigure the load > balancer to round robin the traffic on the working node, at two different > ports or is this taken care by the master node on port 5432? > > Expect more stupid questions as I move along the guide! :) > > Thanks > > Theo > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
From: Theodotos A. <th...@ub...> - 2013-03-17 13:10:12
Hi guys,

I am using Koichi's HA guide to set up a two node pgxc cluster:

wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf

I am using a pair configuration according to page 36.

I have two nodes (node1 and node2). The master coordinators run on port 5432 and the slave coordinators on port 5433. I have a TCP/IP load balancer (IPVS) that load balances (round robin) the traffic on both nodes on port 5432.

When one of the nodes fails, the standby on the other is promoted to master. Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node at two different ports, or is this taken care of by the master node on port 5432?

Expect more stupid questions as I move along the guide! :)

Thanks

Theo
From: Theodotos A. <th...@ub...> - 2013-03-17 07:00:53
On 03/17/2013 08:25 AM, Michael Paquier wrote:
> On Sun, Mar 17, 2013 at 3:08 PM, Theodotos Andreou <th...@ub...> wrote:
>
> $ cat CN2/recovery.conf
> standby_mode = on
> primary_conninfo = 'host = node01 port = 5432 user = postgres-xc'
> application_name = 'cnmaster01'
> restore_command = 'cp CN2_ArchLog/%f %p'
> archive_cleanup_command = 'pg_archivecleanup CN2_ArchLog %r'
>
> application_name is a parameter of the connection string
> primary_conninfo, and not an independent parameter. Such a
> configuration would also not work with normal Postgres ;)
> --
> Michael

I am also a postgresql newbie :)

Thanks Michael
From: Michael P. <mic...@gm...> - 2013-03-17 06:25:21
On Sun, Mar 17, 2013 at 3:08 PM, Theodotos Andreou <th...@ub...> wrote:

> $ cat CN2/recovery.conf
> standby_mode = on
> primary_conninfo = 'host = node01 port = 5432 user = postgres-xc'
> application_name = 'cnmaster01'
> restore_command = 'cp CN2_ArchLog/%f %p'
> archive_cleanup_command = 'pg_archivecleanup CN2_ArchLog %r'

application_name is a parameter of the connection string primary_conninfo, and not an independent parameter. Such a configuration would also not work with normal Postgres ;)
--
Michael
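For reference, a corrected recovery.conf along the lines Michael describes would fold application_name into primary_conninfo itself. A sketch reusing the values quoted above (hosts and names are simply those from the original post):

    standby_mode = on
    primary_conninfo = 'host = node01 port = 5432 user = postgres-xc application_name = cnmaster01'
    restore_command = 'cp CN2_ArchLog/%f %p'
    archive_cleanup_command = 'pg_archivecleanup CN2_ArchLog %r'

The application_name given inside the connection string is what the master matches against its synchronous_standby_names setting.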
From: Theodotos A. <th...@ub...> - 2013-03-17 06:08:49
Hi guys,

I am using Koichi's HA guide to set up a two node pgxc cluster:

wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf

I am on page 48, "configuring coordinator slaves". I have this config:

$ cat CN2/recovery.conf
standby_mode = on
primary_conninfo = 'host = node01 port = 5432 user = postgres-xc'
application_name = 'cnmaster01'
restore_command = 'cp CN2_ArchLog/%f %p'
archive_cleanup_command = 'pg_archivecleanup CN2_ArchLog %r'

$ cat CN2/postgresql.conf
listen_addresses = '*'
port = 5433
max_connections = 150
shared_buffers = 1024MB
work_mem = 64MB
maintenance_work_mem = 400MB
wal_level = hot_standby
synchronous_commit = on
archive_mode = on
archive_command = 'rsync %p node02:CN2_ArchLog/%f'
max_wal_senders = 5
synchronous_standby_names = 'cnmaster01'
hot_standby = on
cpu_tuple_cost = 0.1
effective_cache_size = 2048MB
logging_collector = on
log_rotation_age = 1d
log_rotation_size = 1GB
log_min_duration_statement = 250ms
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
pooler_port = 6668
gtm_host = 'localhost'
gtm_port = 6666
pgxc_node_name = 'cnmaster01'

When I try to start the slave node:

$ pg_ctl start -Z coordinator -D CN2

I get this in the logs:

LOG: database system was interrupted; last known up at 2013-03-17 06:29:44 EET
FATAL: unrecognized recovery parameter "application_name"
LOG: startup process (PID 6562) exited with exit code 1
LOG: aborting startup due to startup process failure

Another question: Why do I have to start the server:

pg_ctl start -Z coordinator -D coordSlaveDir

then reconfigure:

"cat >> coordMasterDir/postgresql.conf" <<EOF
synchronous_commit = on
synchronous_standby_names = 'coordName'
EOF

"cat >> coordSlaveDir/postgresql.conf" <<EOF
hot_standby = on
port = coordPort
EOF

and then reload?

pg_ctl reload -Z coordinator -D coordMasterDir

Can I just use all the configuration at once and just start the server?

I am using postgres-xc 1.0.2
From: Chris B. <cb...@in...> - 2013-03-16 07:58:43
Thanks, that's a big help. It confirms what I expected. ________________________________________ From: Koichi Suzuki [koi...@gm...] Sent: March 15, 2013 8:22 PM To: Chris Brown Cc: pos...@li... Subject: Re: [Postgres-xc-general] XC Configurations One of the example is: 5 gtm proxies, 5 datanodes and 5 coordinator, one for each box, gtm master in one of the box, gtm slave for another. GTM does not need much CPU/Memory but deal with rather big amount of network traffic. So gtm/gtm slave should be given separate network segmemnt. Each datanode/coordinator can have their slaves at the next box. Please try to see pgxc_ctl bash script at https://github.com/koichi-szk/PGXC-Tools/tree/master/pgxc_ctl It comes with the similar case with 4 boxes (gtm and gtm slave are at the separate box but you can configure them at some of 5 boxes you have). Hope it helps. Regards; ---------- Koichi Suzuki 2013/3/15 Chris Brown <cb...@in...>: > Hi all, > > I'm doing some preliminary testing of XC (now that I have it working) and > I'm wondering what sort of configurations are recommended. > > For example, let's say I want write performance on well segregated data (few > cross-node writes) and I have 3 boxes to be the server farm. How would you > recommend that I configure those? > > Now let's say I have 5 boxes and want redundancy, how do I configure it and > how to I achieve redundancy? > > I know every configuration is going to be a bit different but I'm looking > for guidance on best practices. > > Thanks, > Chris... > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
From: Arni S. <Arn...@md...> - 2013-03-16 05:12:39
Thank you for your response, Table creation: create table Table( id bigint, seq integer, stuff text, PRIMARY KEY(id, seq)) DISTRIBUTE BY HASH(id); And some partitions: CREATE TABLE table_20121125 (CHECK (((created_at >= '2012-11-25'::date) AND (created_at < '2012-12-02'::date))) ) INHERITS (Table) DISTRIBUTE BY HASH(id); CREATE TABLE table_20121202 (CHECK (((created_at >= '2012-12-02'::date) AND (created_at < '2012-12-09'::date))) ) INHERITS (Table) DISTRIBUTE BY HASH(id); CREATE TABLE table_20121209 (CHECK (((created_at >= '2012-12-09'::date) AND (created_at < '2012-12-16'::date))) ) INHERITS (Table) DISTRIBUTE BY HASH(id); Best, Arni -----Original Message----- From: Koichi Suzuki [mailto:koi...@gm...] Sent: Friday, March 15, 2013 11:30 PM To: Arni Sumarlidason Cc: Ashutosh Bapat; pos...@li... Subject: Re: [Postgres-xc-general] Configuration Error - or? The plan looks strange, because it contains too many datanode scans. Ashutosh, could your review if the plan is reasonable? Arni; It will be very helpful if we get CREATE TABLE statement you used for involved tables. Table distribution information helps much to see what's going on. Best; ---------- Koichi Suzuki 2013/3/15 Arni Sumarlidason <Arn...@md...>: > Thank you for help, > > > > I used the script authored by Koichi Suzuki to deploy the cluster, so > each node contains a data node, coordinator, and gtm proxy. > > We have 20 nodes and a gtm master. > > We distribute data to nodes based on a hash(id), and we partition data > by > date: usually by week > > Unfortunately I cannot give data to reproduce, I think it would be too > large anyways. > > > > I have not tried it on an empty table, but I think it would be successful. > > If we lower the amount of requested data the query is successful. > > if I join on single partitions instead of the master tables the query > is successful. > > > > I think the problem has to do with the amount of data I’m requesting, > the select statement I shared earlier should bring data that is about > 1.4-2.0GB on disk. Postgres on the coordinator consumes over 16GB > before crashing. I attached a query plan. > > > > From: Ashutosh Bapat [mailto:ash...@en...] > Sent: Thursday, March 14, 2013 11:26 AM > > > To: Arni Sumarlidason > Cc: pos...@li... > Subject: Re: [Postgres-xc-general] Configuration Error - or? > > > > > > On Thu, Mar 14, 2013 at 7:20 PM, Arni Sumarlidason > <Arn...@md...> wrote: > > I ran this query, > > select * from table t, table_lang l where t.id=l.id and l.date >= > '2012-12-01' and l.date < '2013-01-01'; > > with these results, > > > Does this produce a crash without any data in the tables? If not, you > need to give the data as well. Please provide exact reproduction > steps, which when run without any modification, reproduce the crash. > Also, you will need to tell how the table is distributed, what XC > configuration you are using like how many datanodes/coordinators etc. > > > The connection to the server was lost. Attempting reset: Failed. > > Time: 241861.411 ms > > > > Thank you for quick response, > > Arni > > > > From: Ashutosh Bapat [mailto:ash...@en...] > Sent: Thursday, March 14, 2013 11:17 AM > To: Arni Sumarlidason > Cc: pos...@li... > Subject: Re: [Postgres-xc-general] Configuration Error - or? > > > > This should not happen, can you please provide reproduction steps. 
> > On Thu, Mar 14, 2013 at 6:40 PM, Arni Sumarlidason > <Arn...@md...> wrote: > > Good Morning / Evening, > > > > I think I may have a configuration problem… I have data distributed > relationally with date based partitions to 20 nodes. When I try to > select it from a coordinator the query executes quickly and the data > nodes promptly pump data to the coordinator. However I run out of > memory(16bg) and the postgres process crashes. Is there some way to > make the coordinator cache the data to its local table(local disk), or > is there some way to get around this memory issue? > > > > Arni Sumarlidason | Software Engineer, Information Technology > > MDA | 820 West Diamond Ave | Gaithersburg, MD | USA > > O: 240-833-8200 D: 240-833-8318 M: 256-393-2803 > > arn...@md...| > https://console.mxlogic.com/redir/?8VxNwQsL6zAQsCTNPb3a9EVd79I04GxHten > dTVcsCej79zANNKVIHmlysKIvIp0Q54hfBPoWXOpJ5BNByX3ybxJZ4S2_id41Fr2Rqj8N- > xsfIn8_3UWuqnjh09lwX8yT7pOH0Qg3koRyq82VEwH4ArDUvf5zZB0SyrvdIIczxNEVvsd > BVKhC4Nxyz4 > > > > > ---------------------------------------------------------------------- > -------- Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics Download AppDynamics Lite > for free today: > https://console.mxlogic.com/redir/?17cec6zBUQsCzAS-epophd79EVdw0RKEXMd > OzHuqIIwmwaE9E2GKNdTVcsCej79zANNKVIHmlysKIvIp0Q54hfBPoWXOpJ5BNByX3ybxJ > Z4S2_id41Fr2Rqj8N-xsfIn8_3UWuqnjh09lwX8yT7pOH0Qg3koRyq82VEwH4ArDUvf5zZ > B0SCrvdIIczxNEVvsdL3aSsXZkrfUM > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://console.mxlogic.com/redir/?17cec6zBUQsCzAS-epophd79EVdAVPmEBC7 > OFeDkGT6TKw0vU6UvaAWovaAVgtHyRqj8N-xsfIn8_3UWurLOoVcsCej79zztPpmIH4Vto > _oO1Ea8yvbCNRTAPqbbzb5S74n3rW9I5-Aq83iS5GQChzZ2UvoKh-7NQYQKCy0iH1Sh5Ke > PBm1Ew6ENH4Qg5Ph1m98TfM-ub7Xa1JAS-rpop73zhO-UrQ5iatyH8 > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > > > ---------------------------------------------------------------------- > -------- Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics Download AppDynamics Lite > for free today: > https://console.mxlogic.com/redir/?zC763hOYqejhOrv7cIcECzAQsCM0qTktU6V > hRLdmmgbg5k4Q1lnoCXYCej79zANOoUTsSlHaNenmfScwq2y8DOVIttVcSyOUONtxN5MS- > yr1vF6y0QJxqJ9Ao_gK7SbAvxYtfdbFEw4GMtAhrzIVlwq81GcqNd41sQglyidPYfDyN-O > wrodLCSm6hMUQsLK6ZQTWGvX3 > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://console.mxlogic.com/redir/?8VxNwQsL6zAQsCTNPb3a9EVd79IDeqR4IM- > l9QWBmUSZQ03_0T3VkDj3VkDa3JsmHip6fQbxZyV7Uv7jPt-j79zANOoVcsrKraRBoDbH7 > X6gd1h4jVsSeKYCrhpspoKMUyUrvhdwLQzh0qmMJmAOcvEn3X5OfM-eDCBQQg2loeO8JNS > sGMd40R6doCy0Kq8aN96V-7PNo_pgdLCTPrb38UsqenT3vYdFwKb1 > |
From: Koichi S. <koi...@gm...> - 2013-03-16 03:30:22
The plan looks strange, because it contains too many datanode scans. Ashutosh, could your review if the plan is reasonable? Ami; It will be very helpful if we get CREATE TABLE statement you used for involved tables. Table distribution information helps much to see what's going on. Best; ---------- Koichi Suzuki 2013/3/15 Arni Sumarlidason <Arn...@md...>: > Thank you for help, > > > > I used the script authored by Koichi Suzuki to deploy the cluster, so each > node contains a data node, coordinator, and gtm proxy. > > We have 20 nodes and a gtm master. > > We distribute data to nodes based on a hash(id), and we partition data by > date: usually by week > > Unfortunately I cannot give data to reproduce, I think it would be too large > anyways. > > > > I have not tried it on an empty table, but I think it would be successful. > > If we lower the amount of requested data the query is successful. > > if I join on single partitions instead of the master tables the query is > successful. > > > > I think the problem has to do with the amount of data I’m requesting, the > select statement I shared earlier should bring data that is about 1.4-2.0GB > on disk. Postgres on the coordinator consumes over 16GB before crashing. I > attached a query plan. > > > > From: Ashutosh Bapat [mailto:ash...@en...] > Sent: Thursday, March 14, 2013 11:26 AM > > > To: Arni Sumarlidason > Cc: pos...@li... > Subject: Re: [Postgres-xc-general] Configuration Error - or? > > > > > > On Thu, Mar 14, 2013 at 7:20 PM, Arni Sumarlidason > <Arn...@md...> wrote: > > I ran this query, > > select * from table t, table_lang l where t.id=l.id and l.date >= > '2012-12-01' and l.date < '2013-01-01'; > > with these results, > > > Does this produce a crash without any data in the tables? If not, you need > to give the data as well. Please provide exact reproduction steps, which > when run without any modification, reproduce the crash. Also, you will need > to tell how the table is distributed, what XC configuration you are using > like how many datanodes/coordinators etc. > > > The connection to the server was lost. Attempting reset: Failed. > > Time: 241861.411 ms > > > > Thank you for quick response, > > Arni > > > > From: Ashutosh Bapat [mailto:ash...@en...] > Sent: Thursday, March 14, 2013 11:17 AM > To: Arni Sumarlidason > Cc: pos...@li... > Subject: Re: [Postgres-xc-general] Configuration Error - or? > > > > This should not happen, can you please provide reproduction steps. > > On Thu, Mar 14, 2013 at 6:40 PM, Arni Sumarlidason > <Arn...@md...> wrote: > > Good Morning / Evening, > > > > I think I may have a configuration problem… I have data distributed > relationally with date based partitions to 20 nodes. When I try to select it > from a coordinator the query executes quickly and the data nodes promptly > pump data to the coordinator. However I run out of memory(16bg) and the > postgres process crashes. Is there some way to make the coordinator cache > the data to its local table(local disk), or is there some way to get around > this memory issue? > > > > Arni Sumarlidason | Software Engineer, Information Technology > > MDA | 820 West Diamond Ave | Gaithersburg, MD | USA > > O: 240-833-8200 D: 240-833-8318 M: 256-393-2803 > > arn...@md...| http://www.mdaus.com > > > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. 
> Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
From: Koichi S. <koi...@gm...> - 2013-03-16 03:23:07
One example is: 5 gtm proxies, 5 datanodes and 5 coordinators, one for each box, with the gtm master on one of the boxes and the gtm slave on another. GTM does not need much CPU/memory but deals with a rather big amount of network traffic, so gtm/gtm slave should be given a separate network segment. Each datanode/coordinator can have its slave at the next box.

Please try the pgxc_ctl bash script at https://github.com/koichi-szk/PGXC-Tools/tree/master/pgxc_ctl

It comes with a similar case with 4 boxes (gtm and gtm slave are at a separate box, but you can configure them at some of the 5 boxes you have). Hope it helps.

Regards;
----------
Koichi Suzuki


2013/3/15 Chris Brown <cb...@in...>:
> Hi all,
>
> I'm doing some preliminary testing of XC (now that I have it working) and
> I'm wondering what sort of configurations are recommended.
>
> For example, let's say I want write performance on well segregated data (few
> cross-node writes) and I have 3 boxes to be the server farm. How would you
> recommend that I configure those?
>
> Now let's say I have 5 boxes and want redundancy, how do I configure it and
> how do I achieve redundancy?
>
> I know every configuration is going to be a bit different but I'm looking
> for guidance on best practices.
>
> Thanks,
> Chris...
From: Chris B. <cb...@in...> - 2013-03-15 20:50:08
This link was helpful: http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf From: Chris Brown <cb...@in...<mailto:cb...@in...>> Date: Thursday, March 14, 2013 9:35 PM To: "pos...@li...<mailto:pos...@li...>" <pos...@li...<mailto:pos...@li...>> Subject: [Postgres-xc-general] XC Configurations Hi all, I'm doing some preliminary testing of XC (now that I have it working) and I'm wondering what sort of configurations are recommended. For example, let's say I want write performance on well segregated data (few cross-node writes) and I have 3 boxes to be the server farm. How would you recommend that I configure those? Now let's say I have 5 boxes and want redundancy, how do I configure it and how to I achieve redundancy? I know every configuration is going to be a bit different but I'm looking for guidance on best practices. Thanks, Chris... |
From: Chris B. <cb...@in...> - 2013-03-15 04:35:21
Hi all,

I'm doing some preliminary testing of XC (now that I have it working) and I'm wondering what sort of configurations are recommended.

For example, let's say I want write performance on well segregated data (few cross-node writes) and I have 3 boxes to be the server farm. How would you recommend that I configure those?

Now let's say I have 5 boxes and want redundancy, how do I configure it and how do I achieve redundancy?

I know every configuration is going to be a bit different but I'm looking for guidance on best practices.

Thanks,
Chris...
From: Chris B. <cb...@in...> - 2013-03-15 00:10:07
It looks like this may be the problem. I turned connections on for the datanodes and coordinator. The coordinator reports the connections from 'createdb' and 'psql', but the datanodes don't report incoming connections from the coordinator. Netstat shows the datanodes listening to localhost only. Config error...

Adding:

listen_addresses = '*'

And fixed. Thanks very much! I guess that should have been obvious. Perhaps a multi-node quick-start would help.

Chris...

________________________________
From: Michael Paquier [mic...@gm...]
Sent: March 14, 2013 4:42 PM
To: Chris Brown
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Initial configuration of XC

On Fri, Mar 15, 2013 at 8:27 AM, Chris Brown <cb...@in...> wrote:

Okay:

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 f
(1 row)

postgres=# select pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 t
(1 row)

In between these calls I called 'createdb test' (which failed as below).

postgres=# select pgxc_pool_check();
 pgxc_pool_check
-----------------
 f
(1 row)

So now to look at pg_hba.conf (on both datanodes and coordinator):

local   all   all                  trust
host    all   all   127.0.0.1/32   trust
host    all   all   ::1/128        trust
host    all   all   10.0.0.0/8     trust

It would be interesting to have a look at the connection logs on Datanode side to see if actually a connection is established. You can do that by setting log_connections to on.
--
Michael
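Turning on connection logging as Michael suggested amounts to something like the following on each datanode; a sketch only, with the data directory path as a placeholder:

    # In the datanode's postgresql.conf:
    log_connections = on

    # Reload without a restart; Postgres-XC's pg_ctl takes -Z to name the node type,
    # as used elsewhere in this thread for coordinators:
    pg_ctl reload -Z datanode -D /path/to/datanode_data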
From: Michael P. <mic...@gm...> - 2013-03-14 23:42:35
On Fri, Mar 15, 2013 at 8:27 AM, Chris Brown <cb...@in...> wrote: > Okay: > > postgres=# select pgxc_pool_check(); > pgxc_pool_check > ----------------- > f > (1 row) > > postgres=# select pgxc_pool_reload(); > pgxc_pool_reload > ------------------ > t > (1 row) > > postgres=# select pgxc_pool_check(); > pgxc_pool_check > ----------------- > t > (1 row) > > In between these calls I called 'createdb test' (which failed as below). > > > > postgres=# select pgxc_pool_check(); > pgxc_pool_check > ----------------- > f > (1 row) > > > So now to look at pg_hba.conf (on both datanodes and coordinator): > > local all all trust > host all all 127.0.0.1/32 trust > host all all ::1/128 trust > host all all 10.0.0.0/8 trust > It would be interesting to have a look at the connection logs on Datanode side to see if actually a connection is established. You can do that by setting log_connections to on. -- Michael |
From: Chris B. <cb...@in...> - 2013-03-14 23:28:07
Okay: postgres=# select pgxc_pool_check(); pgxc_pool_check ----------------- f (1 row) postgres=# select pgxc_pool_reload(); pgxc_pool_reload ------------------ t (1 row) postgres=# select pgxc_pool_check(); pgxc_pool_check ----------------- t (1 row) In between these calls I called 'createdb test' (which failed as below). postgres=# select pgxc_pool_check(); pgxc_pool_check ----------------- f (1 row) So now to look at pg_hba.conf (on both datanodes and coordinator): local all all trust host all all 127.0.0.1/32 trust host all all ::1/128 trust host all all 10.0.0.0/8 trust Chris... ________________________________ From: Michael Paquier [mic...@gm...] Sent: March 14, 2013 3:58 PM To: Chris Brown Cc: pos...@li... Subject: Re: [Postgres-xc-general] Initial configuration of XC On Fri, Mar 15, 2013 at 4:35 AM, Chris Brown <cb...@in...<mailto:cb...@in...>> wrote: Hi all, I'm a new postgres-xc user and am trying to set up my first cluster. I seem to have everything configured according to the directions but it's not working as expected. When I try to create a table I get (on box 3): -bash-4.2$ /usr/local/pgxc/bin/createdb test createdb: database creation failed: ERROR: Failed to get pooled connections In the logs I see (gtm.log): 1:140475820087040:2013-03-14 12:27:06.769 PDT -LOG: Any GTM standby node not found in registered node(s). LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:378 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Assigning new transaction ID = 10212 LOCATION: GTM_GetGlobalTransactionIdMulti, gtm_txn.c:581 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Sending transaction id 10212 LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:1172 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Received transaction ID 10212 for snapshot obtention LOCATION: ProcessGetSnapshotCommand, gtm_snap.c:307 1:140475794900736:2013-03-14 12:27:06.988 PDT -LOG: Cancelling transaction id 10212 LOCATION: ProcessRollbackTransactionCommand, gtm_txn.c:1989 1:140475794900736:2013-03-14 12:27:06.989 PDT -LOG: Cleaning up thread state LOCATION: GTM_ThreadCleanup, gtm_thread.c:265 coordinator.log: LOG: failed to connect to Datanode WARNING: can not connect to node 16384 LOG: failed to acquire connections STATEMENT: CREATE DATABASE test; ERROR: Failed to get pooled connections This error message means that Coordinator pooler is not able to get a connection with a remote node. So, what am I doing wrong? Why is the createdb not working? And/or where should I be looking for more diagnostics? Those settings look to be correct. What is the result output of "select pgxc_pool_check"? If it is false, you will need to launch pgxc_pool_reload to update the pooler cache with correct node information. If it is true, I imagine that you have either issues with your firewall or pg_hba.conf is not set correctly. -- Michael |
From: Michael P. <mic...@gm...> - 2013-03-14 22:59:03
On Fri, Mar 15, 2013 at 4:35 AM, Chris Brown <cb...@in...> wrote: > Hi all, > > I'm a new postgres-xc user and am trying to set up my first cluster. I > seem to have everything configured according to the directions but it's not > working as expected. When I try to create a table I get (on box 3): > > -bash-4.2$ /usr/local/pgxc/bin/createdb test > createdb: database creation failed: ERROR: Failed to get pooled > connections > > In the logs I see (gtm.log): > > 1:140475820087040:2013-03-14 12:27:06.769 PDT -LOG: Any GTM standby node > not found in registered node(s). > LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:378 > 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Assigning new > transaction ID = 10212 > LOCATION: GTM_GetGlobalTransactionIdMulti, gtm_txn.c:581 > 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Sending transaction > id 10212 > LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:1172 > 1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Received transaction > ID 10212 for snapshot obtention > LOCATION: ProcessGetSnapshotCommand, gtm_snap.c:307 > 1:140475794900736:2013-03-14 12:27:06.988 PDT -LOG: Cancelling > transaction id 10212 > LOCATION: ProcessRollbackTransactionCommand, gtm_txn.c:1989 > 1:140475794900736:2013-03-14 12:27:06.989 PDT -LOG: Cleaning up thread > state > LOCATION: GTM_ThreadCleanup, gtm_thread.c:265 > > coordinator.log: > > LOG: failed to connect to Datanode > WARNING: can not connect to node 16384 > LOG: failed to acquire connections > STATEMENT: CREATE DATABASE test; > > ERROR: Failed to get pooled connections > This error message means that Coordinator pooler is not able to get a connection with a remote node. > So, what am I doing wrong? Why is the createdb not working? And/or where > should I be looking for more diagnostics? > Those settings look to be correct. What is the result output of "select pgxc_pool_check"? If it is false, you will need to launch pgxc_pool_reload to update the pooler cache with correct node information. If it is true, I imagine that you have either issues with your firewall or pg_hba.conf is not set correctly. -- Michael |
From: Chris B. <cb...@in...> - 2013-03-14 19:52:36
Hi all,

I'm a new postgres-xc user and am trying to set up my first cluster. I seem to have everything configured according to the directions but it's not working as expected. When I try to create a database I get (on box 3):

-bash-4.2$ /usr/local/pgxc/bin/createdb test
createdb: database creation failed: ERROR: Failed to get pooled connections

In the logs I see (gtm.log):

1:140475820087040:2013-03-14 12:27:06.769 PDT -LOG: Any GTM standby node not found in registered node(s).
LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:378
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Assigning new transaction ID = 10212
LOCATION: GTM_GetGlobalTransactionIdMulti, gtm_txn.c:581
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Sending transaction id 10212
LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:1172
1:140475794900736:2013-03-14 12:27:06.769 PDT -LOG: Received transaction ID 10212 for snapshot obtention
LOCATION: ProcessGetSnapshotCommand, gtm_snap.c:307
1:140475794900736:2013-03-14 12:27:06.988 PDT -LOG: Cancelling transaction id 10212
LOCATION: ProcessRollbackTransactionCommand, gtm_txn.c:1989
1:140475794900736:2013-03-14 12:27:06.989 PDT -LOG: Cleaning up thread state
LOCATION: GTM_ThreadCleanup, gtm_thread.c:265

coordinator.log:

LOG: failed to connect to Datanode
WARNING: can not connect to node 16384
LOG: failed to acquire connections
STATEMENT: CREATE DATABASE test;
ERROR: Failed to get pooled connections
STATEMENT: CREATE DATABASE test;

I have 3 boxes (10.120.26.12, 10.120.26.13, 10.120.26.14). On boxes 1 and 2 are datanodes and box 3 is gtm and coordinator. I am using the default ports. iptables is set to allow all on all boxes. I am using the 1.0.2 tar and built locally on each machine. On the datanodes I have changed:

max_prepared_transactions = 16
gtm_host = '10.120.26.14'
#gtm_port = 6666
pgxc_node_name = 'dn_q2'    # dn_q1 on box 1

On box 1 (datanode):

postgres=# select * from pgxc_node;
 node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred |  node_id
-----------+-----------+-----------+-----------+----------------+------------------+-----------
 dn_q1     | C         |      5432 | localhost | f              | f                | 891718832
(1 row)

On box 2 (datanode):

postgres=# select * from pgxc_node;
 node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred |   node_id
-----------+-----------+-----------+-----------+----------------+------------------+-------------
 dn_q2     | C         |      5432 | localhost | f              | f                | -2106179335
(1 row)

On box 3 (coordinator):

postgres=# select * from pgxc_node;
 node_name | node_type | node_port |  node_host   | nodeis_primary | nodeis_preferred |   node_id
-----------+-----------+-----------+--------------+----------------+------------------+-------------
 coord_q3  | C         |      5432 | localhost    | f              | f                | -2063466557
 dn_q1     | D         |      5432 | 10.120.26.12 | f              | f                | 891718832
 dn_q2     | D         |      5432 | 10.120.26.13 | f              | f                | -2106179335
(3 rows)

So, what am I doing wrong? Why is the createdb not working? And/or where should I be looking for more diagnostics?

Thanks,
Chris...
From: Arni S. <Arn...@md...> - 2013-03-14 16:54:43
QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Hash Join (cost=1.64..5.35 rows=236 width=10726) Hash Cond: (t.id = l.id) -> Append (cost=0.00..0.00 rows=36000 width=5638) -> Data Node Scan on table "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20121111 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20121118 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20121125 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20121202 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20121209 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20121216 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20121223 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20121230 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130106 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130113 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130120 
"_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130127 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130203 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130210 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130217 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130224 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130303 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130310 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130317 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130324 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130331 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130407 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130414 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, 
node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130421 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_20130428 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1202 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1203 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1204 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1205 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1206 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1207 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1208 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1209 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1210 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_1211 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5638) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Hash (cost=0.00..0.00 rows=131000 width=5088) -> Append 
(cost=0.00..0.00 rows=131000 width=5088) -> Data Node Scan on table_join "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_120806 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_120813 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_120820 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_120827 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_120903 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_120910 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_120917 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_120924 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121001 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121008 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121015 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121022 "_REMOTE_TABLE_QUERY_" 
(cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121029 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121105 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121112 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121119 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121126 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121203 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121210 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121217 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121224 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_121231 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130107 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130114 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, 
node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130121 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130128 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130204 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130211 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130218 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130225 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130304 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130311 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130318 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130325 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130401 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130408 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, 
node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130415 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130422 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130429 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130506 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130513 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130520 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130527 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130603 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130610 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130617 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130624 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130701 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130708 "_REMOTE_TABLE_QUERY_" 
(cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130715 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130722 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130729 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130805 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130812 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130819 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130826 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130902 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130909 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130916 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130923 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_130930 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, 
node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131007 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131014 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131021 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131028 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131104 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131111 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131118 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131125 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131202 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131209 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131216 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131223 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, 
node17d, node18d, node19d, node20d -> Data Node Scan on table_join_131230 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140106 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140113 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140120 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140127 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140203 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140210 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140217 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140224 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140303 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140310 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140317 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140324 "_REMOTE_TABLE_QUERY_" 
(cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140331 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140407 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140414 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140421 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140428 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140505 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140512 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140519 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140526 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140602 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140609 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140616 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, 
node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140623 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140630 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140707 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140714 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140721 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140728 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140804 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140811 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140818 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140825 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140901 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140908 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, 
node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140915 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140922 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_140929 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141006 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141013 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141020 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141027 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141103 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141110 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141117 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141124 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141201 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141208 "_REMOTE_TABLE_QUERY_" 
(cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141215 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141222 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_141229 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_150105 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_150112 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_150119 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d -> Data Node Scan on table_join_150126 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=5088) Node/s: node01d, node02d, node03d, node04d, node05d, node06d, node07d, node08d, node09d, node10d, node11d, node12d, node13d, node14d, node15d, node16d, node17d, node18d, node19d, node20d (339 rows) |
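For reference, the plan above appears consistent with a plain equi-join between the two (apparently inheritance-partitioned) tables, with every child table scanned as a remote query on all 20 datanodes. A minimal sketch of the query shape implied by the Hash Cond (t.id = l.id), assuming the relation and alias names exactly as they appear in the plan (the actual select list and any WHERE clause are not recoverable from the plan alone):

    -- hypothetical reconstruction of the query shape behind the plan above
    EXPLAIN
    SELECT *
    FROM "table" t        -- parent of the table_YYYYMMDD / table_12xx children shown in the first Append
    JOIN table_join l     -- parent of the table_join_YYMMDD children shown under the Hash
      ON t.id = l.id;     -- matches the Hash Cond reported by the planner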