From: kushal <kus...@gm...> - 2013-02-22 07:28:01

Thanks Michael. It worked.

On 22 February 2013 12:35, Michael Paquier <mic...@gm...> wrote:
> [...]
From: Michael P. <mic...@gm...> - 2013-02-22 07:05:30

On Fri, Feb 22, 2013 at 3:45 PM, kushal <kus...@gm...> wrote:
> Its still not working. So here is what I have done. I dropped mydb
> database. I dropped data nodes from both coordinators. Then I ran:
> psql -p 15432 -c "CREATE NODE coord2 WITH (TYPE='coordinator', PORT=15433)"
> psql -p 15433 -c "CREATE NODE coord1 WITH (TYPE='coordinator', PORT=15432)"
> select pgxc_pool_reload(); on each coordinator
>
> Then I created datanode 1 on coord1. Again I ran select pgxc_pool_reload();
> on each coordinator. Still I cannot see datanode1 entry on coordinator 2.

I don't really think you understood what I was explaining. You need to
create *all* the remote nodes on each Coordinator. So on Coordinator 1:
create Co2, Dn1 and Dn2, and on Coordinator 2: create Co1, Dn1 and Dn2.

So... here is what you need to launch as commands on your server:

# Register Coordinators on each other Coordinator
psql -p 15432 -c "CREATE NODE coord2 WITH (TYPE='coordinator', PORT=15433)"
psql -p 15433 -c "CREATE NODE coord1 WITH (TYPE='coordinator', PORT=15432)"
# Register Datanodes on Coordinator 1 and update cache
psql -p 15432 -c "CREATE NODE dn1 WITH (TYPE='datanode', PORT=15442)"
psql -p 15432 -c "CREATE NODE dn2 WITH (TYPE='datanode', PORT=15443)"
psql -p 15432 -c "select pgxc_pool_reload()"
# Register Datanodes on Coordinator 2 and update cache
psql -p 15433 -c "CREATE NODE dn1 WITH (TYPE='datanode', PORT=15442)"
psql -p 15433 -c "CREATE NODE dn2 WITH (TYPE='datanode', PORT=15443)"
psql -p 15433 -c "select pgxc_pool_reload()"

And you are done.

> Also I noticed that under the table pgxc_node on coord1, there is an entry
> for coord1 with port as 5432, whereas the actual port for coord1 is 15432.

Don't worry about that; this is a dummy entry.

> Similarly on coord2, the self port under pgxc_node is 5432, but actual
> value is 15433. I am not sure whether this is ok.

This is perfectly fine. You could still use ALTER NODE to update the port
value to your needs, but this is not necessary at all, as a node is not
going to connect to itself.
--
Michael
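The registration pattern above (every coordinator must know every *other* node, plus a pooler reload) can be scripted. A minimal sketch, assuming the ports from this thread; `emit_node_setup` is our own illustrative helper name, and it echoes the commands rather than running them, so it can be reviewed or piped to a shell:

```shell
# Sketch: emit (not run) the node-registration commands for the
# 2-coordinator / 2-datanode cluster discussed in this thread.
emit_node_setup() {
  CO_PORTS="15432 15433"
  for co in $CO_PORTS; do
    # Register every *other* coordinator on this one
    for other in $CO_PORTS; do
      if [ "$other" != "$co" ]; then
        n=$(( other - 15431 ))   # coord1 listens on 15432, coord2 on 15433
        echo "psql -p $co -c \"CREATE NODE coord$n WITH (TYPE='coordinator', PORT=$other)\""
      fi
    done
    # Register both datanodes on this coordinator, then refresh the pooler cache
    echo "psql -p $co -c \"CREATE NODE dn1 WITH (TYPE='datanode', PORT=15442)\""
    echo "psql -p $co -c \"CREATE NODE dn2 WITH (TYPE='datanode', PORT=15443)\""
    echo "psql -p $co -c \"select pgxc_pool_reload()\""
  done
}

emit_node_setup
```

Running `emit_node_setup | sh` would then execute the same eight commands Michael lists above.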
From: kushal <kus...@gm...> - 2013-02-22 06:45:56

It's still not working. So here is what I have done. I dropped the mydb
database. I dropped the data nodes from both coordinators. Then I ran:

psql -p 15432 -c "CREATE NODE coord2 WITH (TYPE='coordinator', PORT=15433)"
psql -p 15433 -c "CREATE NODE coord1 WITH (TYPE='coordinator', PORT=15432)"
select pgxc_pool_reload(); on each coordinator

Then I created datanode 1 on coord1. Again I ran select
pgxc_pool_reload(); on each coordinator. Still I cannot see a datanode1
entry on coordinator 2.

Also I noticed that under the table pgxc_node on coord1, there is an
entry for coord1 with port as 5432, whereas the actual port for coord1 is
15432. Similarly on coord2, the self port under pgxc_node is 5432, but
the actual value is 15433. I am not sure whether this is ok.

Regards
Kushal

On 22 February 2013 11:37, Michael Paquier <mic...@gm...> wrote:
> [...]
From: Michael P. <mic...@gm...> - 2013-02-22 06:07:39

On Fri, Feb 22, 2013 at 2:29 PM, kushal <kus...@gm...> wrote:
> [...]
> But mydb database does not exist and datanodes 1 and 2 are not present
> in pgxc_node table on coordinator 2.
>
> Is there anything I have missed?

Yes, 2 things:
- You have 2 Coordinators, so you also need to register Coordinator 2 on
  Coordinator 1 like that:
  psql -p 15432 -c "CREATE NODE coord2 WITH (TYPE='coordinator', PORT=15433)"
  and register Coordinator 1 on Coordinator 2 like that:
  psql -p 15433 -c "CREATE NODE coord1 WITH (TYPE='coordinator', PORT=15432)"
- The 2nd thing you forgot is to run this command on each Coordinator:
  select pgxc_pool_reload();
  This updates the pooler cache located on each Coordinator with the
  latest node information.

> I think this might be a very basic thing and I am sorry to bother you
> guys. I tried and failed to search through the xc mailing lists for
> older posts on similar topic. Is there a way to quickly filter the older
> posts for quick check?

Everything's here:
http://sourceforge.net/search/?group_id=311227&type_of_search=mlists&source=navbar
--
Michael
From: kushal <kus...@gm...> - 2013-02-22 05:30:02

Hi

I am trying to create a setup with 2 coordinators and 2 datanodes on one
server. I am able to get the gtm and nodes up and running.

Ports:
Data Node 1: 15442
Data Node 2: 15443
Coordinator 1: 15432
Coordinator 2: 15433

Next I did 'psql -p 15432 postgres' and created two datanodes:
CREATE NODE dn1 WITH (TYPE='datanode', PORT=15442);
CREATE NODE dn2 WITH (TYPE='datanode', PORT=15443);

After that I created one database: 'CREATE DATABASE mydb'

Now I can see the mydb database on datanodes 1 and 2 and coordinator 1.
Also I can see datanodes 1 and 2 when I execute 'select * from pgxc_node'
on coordinator 1.

But the mydb database does not exist, and datanodes 1 and 2 are not
present in the pgxc_node table, on coordinator 2.

Is there anything I have missed? I think this might be a very basic thing
and I am sorry to bother you guys. I tried and failed to search through
the xc mailing lists for older posts on a similar topic. Is there a way
to quickly filter the older posts for a quick check?

Thanks
Kushal
From: kushal <kus...@gm...> - 2013-02-21 12:08:35

Thanks guys. Issue is resolved.

On 21 February 2013 04:04, Michael Paquier <mic...@gm...> wrote:
> [...]
From: Michael P. <mic...@gm...> - 2013-02-21 12:04:34

On Thu, Feb 21, 2013 at 8:47 PM, kushal <kus...@gm...> wrote:
> LOG: database system was interrupted; last known up at 2013-02-21 02:50:33 PST
> FATAL: lock file "/tmp/.s.PGPOOL.6667.lock" already exists
> HINT: Is another postmaster (PID 12892) using socket file "/tmp/.s.PGPOOL.6667"?
> LOG: database system was not properly shut down; automatic recovery in progress
> LOG: pool manager process (PID 12907) exited with exit code 1
> LOG: terminating any other active server processes
> LOG: startup process (PID 12906) exited with exit code 2
> LOG: aborting startup due to startup process failure
>
> Can anyone please help?

Simply set pooler_port to different values in each Coordinator's
postgresql.conf. As 6667 is the default value, this error is encountered
each time a second Coordinator comes up.
--
Michael
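Concretely, the fix is one line in each coordinator's postgresql.conf. A sketch using the ports already quoted in this thread; 6668 is simply an arbitrary free port, not a value from the original messages:

```conf
# coord1/postgresql.conf
port        = 15432
pooler_port = 6667    # the default; fine for the first coordinator

# coord2/postgresql.conf
port        = 15433
pooler_port = 6668    # must differ, or coord2 hits the .s.PGPOOL.6667.lock error
```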
From: Nikhil S. <ni...@st...> - 2013-02-21 12:03:22

Hi Kushal,

Please ensure that the pooler_port value is different for both the
coordinators.

Regards,
Nikhils

On Thu, Feb 21, 2013 at 5:29 PM, Mason Sharp <ma...@st...> wrote:
> [...]

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Mason S. <ma...@st...> - 2013-02-21 12:00:02

On Thu, Feb 21, 2013 at 6:49 AM, kushal <kus...@gm...> wrote:
> Forgot to add. This whole setup is tried on one single server.
>
> On 21 February 2013 03:47, kushal <kus...@gm...> wrote:
>> [...]
>> Everything works fine till coord1. In the last step while starting
>> coord2, I get this error:
>>
>> LOG: database system was interrupted; last known up at 2013-02-21 02:50:33 PST
>> FATAL: lock file "/tmp/.s.PGPOOL.6667.lock" already exists

Are you using the same port number in the coordinators for each one's
connection pool?

>> HINT: Is another postmaster (PID 12892) using socket file "/tmp/.s.PGPOOL.6667"?
>> [...]
>> Can anyone please help?

--
Mason Sharp

StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Services
From: kushal <kus...@gm...> - 2013-02-21 11:50:07

Forgot to add. This whole setup is tried on one single server.

On 21 February 2013 03:47, kushal <kus...@gm...> wrote:
> [...]
From: kushal <kus...@gm...> - 2013-02-21 11:47:49

Hi

I am using XC 1.0.2. I am trying to create the whole setup, including the
gtm, 2 gtm proxies, 2 coordinators and 2 datanodes.

Ports Configured:
GTM: 16666
GTM Proxy 1: 16667
GTM Proxy 2: 16668
Data Node 1: 15442
Data Node 2: 15443
Coordinator 1: 15432
Coordinator 2: 15433

Coordinator 1 has its gtm port configured to 16667 and Coordinator 2 to
16668.

Next I try to start all the nodes:
gtm_ctl start -Z gtm -D /var/lib/pgsql/gtm
gtm_ctl start -Z gtm_proxy -D /var/lib/pgsql/gtm_proxy1
gtm_ctl start -Z gtm_proxy -D /var/lib/pgsql/gtm_proxy2
pg_ctl start -Z datanode -D /var/lib/pgsql/data1
pg_ctl start -Z datanode -D /var/lib/pgsql/data2
pg_ctl start -Z coordinator -D /var/lib/pgsql/coord1
pg_ctl start -Z coordinator -D /var/lib/pgsql/coord2

Everything works fine till coord1. In the last step, while starting
coord2, I get this error:

LOG: database system was interrupted; last known up at 2013-02-21 02:50:33 PST
FATAL: lock file "/tmp/.s.PGPOOL.6667.lock" already exists
HINT: Is another postmaster (PID 12892) using socket file "/tmp/.s.PGPOOL.6667"?
LOG: database system was not properly shut down; automatic recovery in progress
LOG: pool manager process (PID 12907) exited with exit code 1
LOG: terminating any other active server processes
LOG: startup process (PID 12906) exited with exit code 2
LOG: aborting startup due to startup process failure

Can anyone please help?
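The FATAL above comes from two coordinators both trying to create the pooler socket for port 6667 under /tmp. A pre-flight check before starting a second coordinator can catch this early; this is our own illustrative helper (`pooler_lock_free` is not part of Postgres-XC), a sketch assuming the default /tmp socket directory:

```shell
# Sketch: warn if a pooler socket lock for a given pooler_port already
# exists, which is the exact symptom hit in this thread.
pooler_lock_free() {
  # $1 = pooler_port value configured for the coordinator
  if [ -e "/tmp/.s.PGPOOL.$1.lock" ]; then
    echo "pooler port $1 already in use"
    return 1
  fi
  echo "pooler port $1 free"
  return 0
}
```

For example, `pooler_lock_free 6667 && pg_ctl start -Z coordinator -D /var/lib/pgsql/coord2` refuses to start the second coordinator while the first one still holds the 6667 lock.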
From: Arni S. <Arn...@md...> - 2013-02-20 04:24:27

Mr. Ashutosh Bapat,

Thank you for your fast response & confirmation that FQS is executed in
parallel. I was going to send you screenshots of the evolution of the
query, but it did run in parallel perfectly. Earlier when I ran the query
it ran "node by node" from the last node to the first node. Is there any
logic that would cause this behavior - perhaps I need rest :)

Best regards,
Arni

From: Ashutosh Bapat [mailto:ash...@en...]
Sent: Tuesday, February 19, 2013 11:08 PM
Subject: Re: [Postgres-xc-general] FQS/RGQ Optimizations
> [...]
From: Ashutosh B. <ash...@en...> - 2013-02-20 04:08:34
|
Hi Arni, What do you mean by "in the execution of the FQS logic -- it seems to query one node at a time instead of in parallel"? What gave you that impression? When a query is fired on multiple nodes, it's done in-parallel or asynchronously. For better understanding of how query is processed in Postgres-XC, please watch following video - http://www.youtube.com/watch?v=g_9CYat8nkY On Wed, Feb 20, 2013 at 8:01 AM, Arni Sumarlidason < Arn...@md...> wrote: > Good Evening all,**** > > ** ** > > Thank you for your support over the last few days, really appreciate it. J > **** > > ** ** > > I have a question regarding query optimization, I have one table > distributed by HASH, and another by REPLICATE. When attempting to compare > values between the two tables[1] I noticed that in the execution of the FQS > logic -- it seems to query one node at a time instead of in parallel. I > google’d FQS and found an article by Michael Paquier and played with > disabling the feature. However disabling it changed the execution method to > _REMOTE_TABLE_QUERY_ -- this also did not seem to be parallel. Is there any > way to optimize for __REMOTE_GROUP_QUERY__?**** > > ** ** > > Thank you again,**** > > ** ** > > *Arni Sumarlidason* **** > > ** ** > > [1] SELECT * FROM data m, world_boundaries b WHERE ((b.cntry_name ILIKE > '%Iceland%')) AND m.point_geom && b.wkb_geometry AND > st_intersects(m.point_geom, b.wkb_geometry);**** > > Data – Hash’d**** > > world_boundaries – Replicated**** > > ** ** > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_feb > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... 
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Arni S. <Arn...@md...> - 2013-02-20 02:31:32

Good Evening all,

Thank you for your support over the last few days, really appreciate it. :)

I have a question regarding query optimization. I have one table
distributed by HASH, and another by REPLICATE. When attempting to compare
values between the two tables [1], I noticed that in the execution of the
FQS logic it seems to query one node at a time instead of in parallel. I
googled FQS and found an article by Michael Paquier and played with
disabling the feature. However, disabling it changed the execution method
to _REMOTE_TABLE_QUERY_; this also did not seem to be parallel. Is there
any way to optimize for __REMOTE_GROUP_QUERY__?

Thank you again,

Arni Sumarlidason

[1] SELECT * FROM data m, world_boundaries b
    WHERE ((b.cntry_name ILIKE '%Iceland%'))
    AND m.point_geom && b.wkb_geometry
    AND st_intersects(m.point_geom, b.wkb_geometry);
data - hash-distributed
world_boundaries - replicated
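The hash/replicated layout Arni describes is declared at table creation time in Postgres-XC with the DISTRIBUTE BY clause. A sketch, assuming the column names from the query above and a PostGIS geometry type; the `id` distribution column is illustrative, not from the thread:

```sql
-- Fact table spread across all datanodes by hash of a key column
CREATE TABLE data (
    id         bigint,
    point_geom geometry
) DISTRIBUTE BY HASH (id);

-- Lookup table copied in full to every datanode, so the spatial join
-- against "data" can be evaluated locally on each node
CREATE TABLE world_boundaries (
    cntry_name   text,
    wkb_geometry geometry
) DISTRIBUTE BY REPLICATION;
```

With this layout the join in [1] is shippable: each datanode joins its own hash slice of `data` against its local copy of `world_boundaries`.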
From: Koichi S. <ko...@in...> - 2013-02-19 12:01:22
|
I found some wrong wording. Please let me correct. On Tue, 19 Feb 2013 17:10:00 +0530 kushal <kus...@gm...> wrote: > Yes. This is what I need and it should allow me get started with my own > setup. Though, I might bother you guys again with more doubts :) > > Thanks. > > On 19 February 2013 16:17, Koichi Suzuki <koi...@gm...> wrote: > > > Postgres-XC accepts CREATE DATABASE statement, same as PostgreSQL. So, > > > > CREATE DATABASE DB1; > > > > creates DB1 database which spans over all the nodes. You can > > similarly create DB2. Then you can connect to either DB1 and DB2 > > and create table1, table2 and table3 for both databases. You can You have to create table1, table2 and table3 on DB1 and DB2 separately. CREATE TABLE TABLE1 ... on DB1 will not propagate to DB2. Just to clarify. > > setup privilege of Customer1 only to DB1 and Customer2 only to DB2 > > respectively. > > > > Schema of these databases are stored in all the nodes. > > > > This is very similar to PostgreSQL's database and user administration. > > > > At present, you can specify which nodes each table should be > > distributed/replicated manually at CREATE TABLE or ALTER TABLE. > > This will help to control what server specific customer's data are > > stored. > > > > I hope this picture meets your needs. > > ---------- > > Koichi Suzuki > > > > 2013/2/18 kushal <kus...@gm...>: > > > But I guess the impact would be restricted to the schema of one > > particular > > > database instance, which in this would only be DB1. The schema for DB2 > > would > > > remain intact. Right? > > > > > > On 18 February 2013 14:36, Ashutosh Bapat < > > ash...@en...> > > > wrote: > > >> > > >> > > >> > > >> On Mon, Feb 18, 2013 at 2:32 PM, kushal <kus...@gm...> wrote: > > >>> > > >>> I think your responses answer my question. So here is how my database > > >>> structure looks like without XC. > > >>> > > >>> There should be one database with table1, table2 and table3 for each > > >>> customer under one postgres server. 
> > >>> > > >>> So If I am not wrong then with XC for coordinators A and B and > > datanodes > > >>> C and D, each would contain database instances DB1, DB2 and so on with > > same > > >>> schema (which in this case is table1, table2 and table3) and > > distribution or > > >>> replication would depend on how I do it while creating tables (table1, > > >>> table2, table3) for each of the individual database instances. > > >> > > >> > > >> Upto this it looks fine. > > >> > > >>> > > >>> And even if the schema changes for DB1 in future, it won't have impact > > on > > >>> DB2 queries or storage. > > >>> > > >> > > >> Any DDL you perform on XC, has to routed through the coordinator and > > hence > > >> it affects all the datanodes. > > >> > > >>> > > >>> --Kushal > > >>> > > >>> > > >>> On 18 February 2013 13:41, Ashutosh Bapat > > >>> <ash...@en...> wrote: > > >>>> > > >>>> Hi Kushal, > > >>>> Thanks for your interest in Postgres-XC. > > >>>> > > >>>> In Postgres-XC, every database/schema is created on all datanodes and > > >>>> coordinators. So, one can not created datanode specific databases. > > The only > > >>>> objects that are distributed are the tables. You can distribute your > > data > > >>>> across datanodes. > > >>>> > > >>>> But you are using term database instances, which is confusing. Do you > > >>>> mean database system instances? > > >>>> > > >>>> May be an example would help to understand your system's architecture. > > >>>> > > >>>> On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> > > wrote: > > >>>>> > > >>>>> Hi > > >>>>> > > >>>>> This is my first post on the postgres xc mailing list. Let me first > > >>>>> just congratulate the whole team for coming up with such a cool > > framework. > > >>>>> > > >>>>> I have few questions around the requirements we have to support our > > >>>>> product. Now it is required to keep multiple database instances, > > lets say > > >>>>> one for each customer, accessible from one app. 
Without postgres-xc, > > I can > > >>>>> do that by just creating a connecting to specific database instance > > for a > > >>>>> particular customer from my app. > > >>>>> > > >>>>> Can the same be done with postgres-xc interface? So basically, I > > should > > >>>>> be able to create different database instances across datanodes > > accessible > > >>>>> through any coordinator. > > >>>>> If yes, how does the distribution/replication work? Is it going to be > > >>>>> somewhat different? > > >>>>> > > >>>>> > > >>>>> Thanks & Regards, > > >>>>> Kushal > > >>>>> > > >>>>> > > >>>>> > > ------------------------------------------------------------------------------ > > >>>>> The Go Parallel Website, sponsored by Intel - in partnership with > > >>>>> Geeknet, > > >>>>> is your hub for all things parallel software development, from weekly > > >>>>> thought > > >>>>> leadership blogs to news, videos, case studies, tutorials, tech docs, > > >>>>> whitepapers, evaluation guides, and opinion stories. Check out the > > most > > >>>>> recent posts - join the conversation now. > > >>>>> http://goparallel.sourceforge.net/ > > >>>>> _______________________________________________ > > >>>>> Postgres-xc-general mailing list > > >>>>> Pos...@li... > > >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > >>>>> > > >>>> > > >>>> > > >>>> > > >>>> -- > > >>>> Best Wishes, > > >>>> Ashutosh Bapat > > >>>> EntepriseDB Corporation > > >>>> The Enterprise Postgres Company > > >>> > > >>> > > >>> > > >>> > > >>> > > ------------------------------------------------------------------------------ > > >>> The Go Parallel Website, sponsored by Intel - in partnership with > > >>> Geeknet, > > >>> is your hub for all things parallel software development, from weekly > > >>> thought > > >>> leadership blogs to news, videos, case studies, tutorials, tech docs, > > >>> whitepapers, evaluation guides, and opinion stories. 
Check out the most > > >>> recent posts - join the conversation now. > > >>> http://goparallel.sourceforge.net/ > > >>> _______________________________________________ > > >>> Postgres-xc-general mailing list > > >>> Pos...@li... > > >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > >>> > > >> > > >> > > >> > > >> -- > > >> Best Wishes, > > >> Ashutosh Bapat > > >> EntepriseDB Corporation > > >> The Enterprise Postgres Company > > > > > > > > > > > > > > ------------------------------------------------------------------------------ > > > The Go Parallel Website, sponsored by Intel - in partnership with > > Geeknet, > > > is your hub for all things parallel software development, from weekly > > > thought > > > leadership blogs to news, videos, case studies, tutorials, tech docs, > > > whitepapers, evaluation guides, and opinion stories. Check out the most > > > recent posts - join the conversation now. > > http://goparallel.sourceforge.net/ > > > _______________________________________________ > > > Postgres-xc-general mailing list > > > Pos...@li... > > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > > |
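Koichi's correction at the top of this message — that `CREATE TABLE` issued in DB1 does not propagate to DB2 — can be sketched concretely. This is a hedged illustration, not output from the thread: the database and table names are the hypothetical ones used in the discussion, and the `DISTRIBUTE BY` clause is assumed to follow Postgres-XC 1.0 syntax.

```sql
-- From any coordinator: CREATE DATABASE spans all nodes.
CREATE DATABASE db1;
CREATE DATABASE db2;

-- DDL does not propagate between databases, so connect to each
-- database in turn and create its tables separately.
\c db1
CREATE TABLE table1 (id int, payload text) DISTRIBUTE BY HASH (id);

\c db2
CREATE TABLE table1 (id int, payload text) DISTRIBUTE BY HASH (id);
```

With per-customer privileges (GRANT on DB1 only to Customer1, DB2 only to Customer2), this gives each customer an isolated schema even though every database exists on all nodes.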
From: kushal <kus...@gm...> - 2013-02-19 11:40:08
|
Yes. This is what I need and it should allow me get started with my own setup. Though, I might bother you guys again with more doubts :) Thanks. On 19 February 2013 16:17, Koichi Suzuki <koi...@gm...> wrote: > Postgres-XC accepts CREATE DATABASE statement, same as PostgreSQL. So, > > CREATE DATABASE DB1; > > creates DB1 database which spans over all the nodes. You can > similarly create DB2. Then you can connect to either DB1 and DB2 > and create table1, table2 and table3 for both databases. You can > setup privilege of Customer1 only to DB1 and Customer2 only to DB2 > respectively. > > Schema of these databases are stored in all the nodes. > > This is very similar to PostgreSQL's database and user administration. > > At present, you can specify which nodes each table should be > distributed/replicated manually at CREATE TABLE or ALTER TABLE. > This will help to control what server specific customer's data are > stored. > > I hope this picture meets your needs. > ---------- > Koichi Suzuki > > 2013/2/18 kushal <kus...@gm...>: > > But I guess the impact would be restricted to the schema of one > particular > > database instance, which in this would only be DB1. The schema for DB2 > would > > remain intact. Right? > > > > On 18 February 2013 14:36, Ashutosh Bapat < > ash...@en...> > > wrote: > >> > >> > >> > >> On Mon, Feb 18, 2013 at 2:32 PM, kushal <kus...@gm...> wrote: > >>> > >>> I think your responses answer my question. So here is how my database > >>> structure looks like without XC. > >>> > >>> There should be one database with table1, table2 and table3 for each > >>> customer under one postgres server. 
> >>> > >>> So If I am not wrong then with XC for coordinators A and B and > datanodes > >>> C and D, each would contain database instances DB1, DB2 and so on with > same > >>> schema (which in this case is table1, table2 and table3) and > distribution or > >>> replication would depend on how I do it while creating tables (table1, > >>> table2, table3) for each of the individual database instances. > >> > >> > >> Upto this it looks fine. > >> > >>> > >>> And even if the schema changes for DB1 in future, it won't have impact > on > >>> DB2 queries or storage. > >>> > >> > >> Any DDL you perform on XC, has to routed through the coordinator and > hence > >> it affects all the datanodes. > >> > >>> > >>> --Kushal > >>> > >>> > >>> On 18 February 2013 13:41, Ashutosh Bapat > >>> <ash...@en...> wrote: > >>>> > >>>> Hi Kushal, > >>>> Thanks for your interest in Postgres-XC. > >>>> > >>>> In Postgres-XC, every database/schema is created on all datanodes and > >>>> coordinators. So, one can not created datanode specific databases. > The only > >>>> objects that are distributed are the tables. You can distribute your > data > >>>> across datanodes. > >>>> > >>>> But you are using term database instances, which is confusing. Do you > >>>> mean database system instances? > >>>> > >>>> May be an example would help to understand your system's architecture. > >>>> > >>>> On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> > wrote: > >>>>> > >>>>> Hi > >>>>> > >>>>> This is my first post on the postgres xc mailing list. Let me first > >>>>> just congratulate the whole team for coming up with such a cool > framework. > >>>>> > >>>>> I have few questions around the requirements we have to support our > >>>>> product. Now it is required to keep multiple database instances, > lets say > >>>>> one for each customer, accessible from one app. 
Without postgres-xc, > I can > >>>>> do that by just creating a connecting to specific database instance > for a > >>>>> particular customer from my app. > >>>>> > >>>>> Can the same be done with postgres-xc interface? So basically, I > should > >>>>> be able to create different database instances across datanodes > accessible > >>>>> through any coordinator. > >>>>> If yes, how does the distribution/replication work? Is it going to be > >>>>> somewhat different? > >>>>> > >>>>> > >>>>> Thanks & Regards, > >>>>> Kushal > >>>>> > >>>>> > >>>>> > ------------------------------------------------------------------------------ > >>>>> The Go Parallel Website, sponsored by Intel - in partnership with > >>>>> Geeknet, > >>>>> is your hub for all things parallel software development, from weekly > >>>>> thought > >>>>> leadership blogs to news, videos, case studies, tutorials, tech docs, > >>>>> whitepapers, evaluation guides, and opinion stories. Check out the > most > >>>>> recent posts - join the conversation now. > >>>>> http://goparallel.sourceforge.net/ > >>>>> _______________________________________________ > >>>>> Postgres-xc-general mailing list > >>>>> Pos...@li... > >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > >>>>> > >>>> > >>>> > >>>> > >>>> -- > >>>> Best Wishes, > >>>> Ashutosh Bapat > >>>> EntepriseDB Corporation > >>>> The Enterprise Postgres Company > >>> > >>> > >>> > >>> > >>> > ------------------------------------------------------------------------------ > >>> The Go Parallel Website, sponsored by Intel - in partnership with > >>> Geeknet, > >>> is your hub for all things parallel software development, from weekly > >>> thought > >>> leadership blogs to news, videos, case studies, tutorials, tech docs, > >>> whitepapers, evaluation guides, and opinion stories. Check out the most > >>> recent posts - join the conversation now. 
> >>> http://goparallel.sourceforge.net/ > >>> _______________________________________________ > >>> Postgres-xc-general mailing list > >>> Pos...@li... > >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > >>> > >> > >> > >> > >> -- > >> Best Wishes, > >> Ashutosh Bapat > >> EntepriseDB Corporation > >> The Enterprise Postgres Company > > > > > > > > > ------------------------------------------------------------------------------ > > The Go Parallel Website, sponsored by Intel - in partnership with > Geeknet, > > is your hub for all things parallel software development, from weekly > > thought > > leadership blogs to news, videos, case studies, tutorials, tech docs, > > whitepapers, evaluation guides, and opinion stories. Check out the most > > recent posts - join the conversation now. > http://goparallel.sourceforge.net/ > > _______________________________________________ > > Postgres-xc-general mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > |
From: Koichi S. <koi...@gm...> - 2013-02-19 10:47:28
|
Postgres-XC accepts CREATE DATABASE statement, same as PostgreSQL. So, CREATE DATABASE DB1; creates DB1 database which spans over all the nodes. You can similarly create DB2. Then you can connect to either DB1 and DB2 and create table1, table2 and table3 for both databases. You can setup privilege of Customer1 only to DB1 and Customer2 only to DB2 respectively. Schema of these databases are stored in all the nodes. This is very similar to PostgreSQL's database and user administration. At present, you can specify which nodes each table should be distributed/replicated manually at CREATE TABLE or ALTER TABLE. This will help to control what server specific customer's data are stored. I hope this picture meets your needs. ---------- Koichi Suzuki 2013/2/18 kushal <kus...@gm...>: > But I guess the impact would be restricted to the schema of one particular > database instance, which in this would only be DB1. The schema for DB2 would > remain intact. Right? > > On 18 February 2013 14:36, Ashutosh Bapat <ash...@en...> > wrote: >> >> >> >> On Mon, Feb 18, 2013 at 2:32 PM, kushal <kus...@gm...> wrote: >>> >>> I think your responses answer my question. So here is how my database >>> structure looks like without XC. >>> >>> There should be one database with table1, table2 and table3 for each >>> customer under one postgres server. >>> >>> So If I am not wrong then with XC for coordinators A and B and datanodes >>> C and D, each would contain database instances DB1, DB2 and so on with same >>> schema (which in this case is table1, table2 and table3) and distribution or >>> replication would depend on how I do it while creating tables (table1, >>> table2, table3) for each of the individual database instances. >> >> >> Upto this it looks fine. >> >>> >>> And even if the schema changes for DB1 in future, it won't have impact on >>> DB2 queries or storage. >>> >> >> Any DDL you perform on XC, has to routed through the coordinator and hence >> it affects all the datanodes. 
>> >>> >>> --Kushal >>> >>> >>> On 18 February 2013 13:41, Ashutosh Bapat >>> <ash...@en...> wrote: >>>> >>>> Hi Kushal, >>>> Thanks for your interest in Postgres-XC. >>>> >>>> In Postgres-XC, every database/schema is created on all datanodes and >>>> coordinators. So, one can not created datanode specific databases. The only >>>> objects that are distributed are the tables. You can distribute your data >>>> across datanodes. >>>> >>>> But you are using term database instances, which is confusing. Do you >>>> mean database system instances? >>>> >>>> May be an example would help to understand your system's architecture. >>>> >>>> On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> wrote: >>>>> >>>>> Hi >>>>> >>>>> This is my first post on the postgres xc mailing list. Let me first >>>>> just congratulate the whole team for coming up with such a cool framework. >>>>> >>>>> I have few questions around the requirements we have to support our >>>>> product. Now it is required to keep multiple database instances, lets say >>>>> one for each customer, accessible from one app. Without postgres-xc, I can >>>>> do that by just creating a connecting to specific database instance for a >>>>> particular customer from my app. >>>>> >>>>> Can the same be done with postgres-xc interface? So basically, I should >>>>> be able to create different database instances across datanodes accessible >>>>> through any coordinator. >>>>> If yes, how does the distribution/replication work? Is it going to be >>>>> somewhat different? >>>>> >>>>> >>>>> Thanks & Regards, >>>>> Kushal >>>>> >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> The Go Parallel Website, sponsored by Intel - in partnership with >>>>> Geeknet, >>>>> is your hub for all things parallel software development, from weekly >>>>> thought >>>>> leadership blogs to news, videos, case studies, tutorials, tech docs, >>>>> whitepapers, evaluation guides, and opinion stories. 
Check out the most >>>>> recent posts - join the conversation now. >>>>> http://goparallel.sourceforge.net/ >>>>> _______________________________________________ >>>>> Postgres-xc-general mailing list >>>>> Pos...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> >>>> >>>> >>>> >>>> -- >>>> Best Wishes, >>>> Ashutosh Bapat >>>> EntepriseDB Corporation >>>> The Enterprise Postgres Company >>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> The Go Parallel Website, sponsored by Intel - in partnership with >>> Geeknet, >>> is your hub for all things parallel software development, from weekly >>> thought >>> leadership blogs to news, videos, case studies, tutorials, tech docs, >>> whitepapers, evaluation guides, and opinion stories. Check out the most >>> recent posts - join the conversation now. >>> http://goparallel.sourceforge.net/ >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company > > > > ------------------------------------------------------------------------------ > The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, > is your hub for all things parallel software development, from weekly > thought > leadership blogs to news, videos, case studies, tutorials, tech docs, > whitepapers, evaluation guides, and opinion stories. Check out the most > recent posts - join the conversation now. http://goparallel.sourceforge.net/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
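Koichi notes above that the nodes holding a table can be chosen manually at CREATE TABLE or ALTER TABLE time. A minimal sketch of that syntax follows; the node names `dn1`/`dn2` are assumptions (matching the two-datanode setup discussed elsewhere on this list), and the exact clauses should be checked against the Postgres-XC release in use.

```sql
-- Replicate a small per-customer lookup table onto one datanode.
CREATE TABLE customer1_config (key text, value text)
    DISTRIBUTE BY REPLICATION TO NODE (dn1);

-- Hash-distribute a large table across a chosen set of datanodes.
CREATE TABLE customer1_events (id bigint, payload text)
    DISTRIBUTE BY HASH (id) TO NODE (dn1, dn2);
```

This is what lets an administrator control which physical servers hold a specific customer's data, even though the databases and schemas themselves exist on every node.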
From: Arni S. <Arn...@md...> - 2013-02-19 02:51:43
|
Ugggh! Thanks for fast response! Of course something silly. Sent from my iPhone On Feb 18, 2013, at 9:48 PM, "Michael Paquier" <mic...@gm...<mailto:mic...@gm...>> wrote: On Tue, Feb 19, 2013 at 11:37 AM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: The issue seems to be related to the size of the cluster. When attempting to initialize 20 nodes. 18, 19, and 20 consistently failed (reproducible). I tried a cluster of 17 with fewer errors. I tried 10 with success. It looks like you did not update max_coordinator and max_datanodes in postgresql.conf. The default value is 16, that would explain why cluster setting is failing with more than 16 nodes. -- Michael |
From: Koichi S. <koi...@gm...> - 2013-02-19 02:51:24
|
This is related to postgresql.conf parameter max_coordinators and max_datanodes. Default value is 16 so you should extend them. They are specific to XC. Regards; ---------- Koichi Suzuki 2013/2/19 Arni Sumarlidason <Arn...@md...>: > The issue seems to be related to the size of the cluster. When attempting to > initialize 20 nodes. 18, 19, and 20 consistently failed (reproducible). I > tried a cluster of 17 with fewer errors. I tried 10 with success. > > > > Best, > > > > From: Arni Sumarlidason > Sent: Monday, February 18, 2013 8:14 PM > To: 'koi...@gm...'; 'Postgres-XC Developers' > Cc: pos...@li...; mic...@gm... > Subject: RE: [Postgres-xc-general] pgxc: snapshot > > > > Mr. Koichi, > > > > You are right, PGADMIN was the source of these warnings. > > > > Do you have any idea what would cause the following error: > > # Cache lookup failed for node when executing CREATE NODE > > > > Lastly, I believe there are typos on lines 789, 797 in the pgxc script. > > > > Best regards, > > > > -----Original Message----- > From: koi...@gm... [mailto:koi...@gm...] On Behalf > Of Koichi Suzuki > Sent: Monday, February 18, 2013 12:52 AM > To: Arni Sumarlidason; Postgres-XC Developers > Subject: Re: [Postgres-xc-general] pgxc: snapshot > > > > I tried the stress test. Autovacuum seems to work well. I also ran > > vacuum analyze verbose from psql directly and it worked without > > problem during this stress test. I ran vacuumdb as well. All > > worked without problem. > > > > I noticed that you used pgAdmin. Unfortunately, pgAdmin has not been > > tuned to work with XC. I know some of pgAdmin features works well > > but others don't. Could you try to run vacuum from psql or > > vacuumdb? If they don't work, please let me know. > > > > Best Regards; > > ---------- > > Koichi Suzuki > > > > > > 2013/2/18 Koichi Suzuki <ko...@in...>: > >> Nice to hear that pgxc_ctl helps. > >> > >> As to the warning, I will try to reproduce the problem and fix it. 
I >> need to find a time for it so please forgive me a bit of time. The test >> will run many small transactions which will cause autovacuum lauched, as >> attached. This test was built as Datanode slave stress test. I think >> this may work as autovacuum lauch test. I will test it with four >> coordinators and four datanodes, and four gtm_proxies as well. Whole test >> will take about a couple of hours with five of six-core Xeon servers (one >> for GTM). > >> > >> Do you think this makes sense to reproduce your problem? > >> > >> I will run it both on master and REL1_0_STABLE. > >> > >> Regards; > >> --- > >> Koichi > >> > >> On Sat, 16 Feb 2013 19:32:11 +0000 > >> Arni Sumarlidason <Arn...@md...> wrote: > >> > >>> Koichi, and others, > >>> > >>> I spun some fresh VMs and ran your script with the identical outcome, GTM >>> Snapshot warnings from the auto vacuum launcher. > >>> Please advise. > >>> > >>> > >>> Thank you for your script, it does make life easier!! > >>> > >>> Best, > >>> > >>> -----Original Message----- > >>> From: Koichi Suzuki [mailto:ko...@in...] > >>> Sent: Friday, February 15, 2013 4:11 AM > >>> To: Arni Sumarlidason > >>> Cc: Michael Paquier; koi...@gm...; > >>> pos...@li... > >>> Subject: Re: [Postgres-xc-general] pgxc: snapshot > >>> > >>> If you're not sure about the configuration, please try pgxc_ctl > >>> available at > >>> > >>> git://github.com/koichi-szk/PGXC-Tools.git > >>> > >>> This is bash script (I'm rewriting into C now) so it will help to >>> understand how to configure XC. > >>> > >>> Regards; > >>> --- > >>> Koichi Suzuki > >>> > >>> On Fri, 15 Feb 2013 04:22:49 +0000 > >>> Arni Sumarlidason <Arn...@md...> wrote: > >>> > >>> > Thank you both for fast response!! > >>> > > >>> > RE: Koichi Suzuki > >>> > I downloaded the git this afternoon. > >>> > > >>> > RE: Michael Paquier > >>> > > >>> > - Confirm it is from the datanode's log. 
> >>> > > >>> > - Both coord & datanode connect via the same gtm_proxy on >>> > localhost > >>> > > >>> > These are my simplified configs, the only change I make on each > >>> > node is the nodename, PG_HBA > >>> > local all all >>> > trust > >>> > host all all 127.0.0.1/32 >>> > trust > >>> > host all all ::1/128 >>> > trust > >>> > host all all 10.100.170.0/24 >>> > trust > >>> > > >>> > COORD > >>> > pgxc_node_name = 'coord01' > >>> > listen_addresses = '*' > >>> > port = 5432 > >>> > max_connections = 200 > >>> > > >>> > gtm_port = 6666 > >>> > gtm_host = 'localhost' > >>> > pooler_port = 6670 > >>> > > >>> > shared_buffers = 32MB > >>> > work_mem = 1MB > >>> > maintenance_work_mem = 16MB > >>> > max_stack_depth = 2MB > >>> > > >>> > log_timezone = 'US/Eastern' > >>> > datestyle = 'iso, mdy' > >>> > timezone = 'US/Eastern' > >>> > lc_messages = 'en_US.UTF-8' > >>> > lc_monetary = 'en_US.UTF-8' > >>> > lc_numeric = 'en_US.UTF-8' > >>> > lc_time = 'en_US.UTF-8' > >>> > default_text_search_config = 'pg_catalog.english' > >>> > > >>> > DATA > >>> > pgxc_node_name = 'data01' > >>> > listen_addresses = '*' > >>> > port = 5433 > >>> > max_connections = 200 > >>> > > >>> > gtm_port = 6666 > >>> > gtm_host = 'localhost' > >>> > > >>> > shared_buffers = 32MB > >>> > work_mem = 1MB > >>> > maintenance_work_mem = 16MB > >>> > max_stack_depth = 2MB > >>> > > >>> > log_timezone = 'US/Eastern' > >>> > datestyle = 'iso, mdy' > >>> > timezone = 'US/Eastern' > >>> > lc_messages = 'en_US.UTF-8' > >>> > lc_monetary = 'en_US.UTF-8' > >>> > lc_numeric = 'en_US.UTF-8' > >>> > lc_time = 'en_US.UTF-8' > >>> > default_text_search_config = 'pg_catalog.english' > >>> > > >>> > PROXY > >>> > Nodename = 'proxy01' > >>> > listen_addresses = '*' > >>> > port = 6666 > >>> > gtm_host = '10.100.170.10' > >>> > gtm_port = 6666 > >>> > > >>> > > >>> > best, > >>> > > >>> > Arni > >>> > > >>> > From: Michael Paquier [mailto:mic...@gm...] 
> >>> > Sent: Thursday, February 14, 2013 11:06 PM > >>> > To: Arni Sumarlidason > >>> > Cc: pos...@li... > >>> > Subject: Re: [Postgres-xc-general] pgxc: snapshot > >>> > > >>> > > >>> > On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason >>> > <Arn...@md...<mailto:Arn...@md...>> wrote: > >>> > Hi Everyone! > >>> > > >>> > I am getting these errors, "Warning: do not have a gtm snapshot >>> > available"[1]. After researching I found posts about the auto vacuum causing >>> > these errors, is this fix or work in progress? Also, I am seeing them >>> > without the CONTEXT: automatic vacuum message too. Is this something to >>> > worry about? Cluster seems to be functioning normally. > >>> > > >>> > Vacuum and analyze from pgadmin looks like this, > >>> > INFO: vacuuming "public.table" > >>> > INFO: "table": found 0 removable, 0 nonremovable row versions in 0 > >>> > pages > >>> > DETAIL: 0 dead row versions cannot be removed yet. > >>> > CPU 0.00s/0.00u sec elapsed 0.00 sec. > >>> > INFO: analyzing "public.table" > >>> > INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 > >>> > dead rows; 0 rows in sample, 0 estimated total rows Total query >>> > runtime: 15273 ms. > >>> > > >>> > Should we use execute direct to perform maintenance? > >>> > No. Isn't this happening on a Datanode? > >>> > Be sure first to set gtm_host and gtm_port in postgresql.conf of all >>> > the nodes, Coordinator and Datanode included. GXID and snapshots are fetched >>> > of course on Coordinator for normal transaction run but also on all the >>> > nodes for autovacuum. > >>> > -- > >>> > Michael > >>> |
From: Michael P. <mic...@gm...> - 2013-02-19 02:48:31
|
On Tue, Feb 19, 2013 at 11:37 AM, Arni Sumarlidason < Arn...@md...> wrote: > The issue seems to be related to the size of the cluster. When > attempting to initialize 20 nodes, nodes 18, 19, and 20 consistently failed > (reproducible). I tried a cluster of 17 with fewer errors. I tried 10 with > success. > It looks like you did not update max_coordinators and max_datanodes in postgresql.conf. The default value is 16, which would explain why cluster setup is failing with more than 16 nodes. -- Michael |
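The fix Michael and Koichi describe is a postgresql.conf change on every coordinator and datanode, followed by a restart. A sketch of the relevant lines for a 20-node cluster (the value 32 is illustrative; any setting at least as large as the actual node counts should work):

```
# postgresql.conf on each coordinator and datanode
max_coordinators = 32    # default is 16; must cover the number of coordinators
max_datanodes    = 32    # default is 16; must cover the number of datanodes
```

These are XC-specific parameters, so they do not appear in stock PostgreSQL configuration files.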
From: Arni S. <Arn...@md...> - 2013-02-19 02:37:41
|
The issue seems to be related to the size of the cluster. When attempting to initialize 20 nodes. 18, 19, and 20 consistently failed (reproducible). I tried a cluster of 17 with fewer errors. I tried 10 with success. Best, From: Arni Sumarlidason Sent: Monday, February 18, 2013 8:14 PM To: 'koi...@gm...'; 'Postgres-XC Developers' Cc: pos...@li...; mic...@gm... Subject: RE: [Postgres-xc-general] pgxc: snapshot Mr. Koichi, You are right, PGADMIN was the source of these warnings. Do you have any idea what would cause the following error: # Cache lookup failed for node when executing CREATE NODE Lastly, I believe there are typos on lines 789, 797 in the pgxc script. Best regards, -----Original Message----- From: koi...@gm...<mailto:koi...@gm...> [mailto:koi...@gm...] On Behalf Of Koichi Suzuki Sent: Monday, February 18, 2013 12:52 AM To: Arni Sumarlidason; Postgres-XC Developers Subject: Re: [Postgres-xc-general] pgxc: snapshot I tried the stress test. Autovacuum seems to work well. I also ran vacuum analyze verbose from psql directly and it worked without problem during this stress test. I ran vacuumdb as well. All worked without problem. I noticed that you used pgAdmin. Unfortunately, pgAdmin has not been tuned to work with XC. I know some of pgAdmin features works well but others don't. Could you try to run vacuum from psql or vacuumdb? If they don't work, please let me know. Best Regards; ---------- Koichi Suzuki 2013/2/18 Koichi Suzuki <ko...@in...<mailto:ko...@in...>>: > Nice to hear that pgxc_ctl helps. > > As to the warning, I will try to reproduce the problem and fix it. I need to find a time for it so please forgive me a bit of time. The test will run many small transactions which will cause autovacuum lauched, as attached. This test was built as Datanode slave stress test. I think this may work as autovacuum lauch test. I will test it with four coordinators and four datanodes, and four gtm_proxies as well. 
Whole test will take about a couple of hours with five of six-core Xeon servers (one for GTM). > > Do you think this makes sense to reproduce your problem? > > I will run it both on master and REL1_0_STABLE. > > Regards; > --- > Koichi > > On Sat, 16 Feb 2013 19:32:11 +0000 > Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: > >> Koichi, and others, >> >> I spun some fresh VMs and ran your script with the identical outcome, GTM Snapshot warnings from the auto vacuum launcher. >> Please advise. >> >> >> Thank you for your script, it does make life easier!! >> >> Best, >> >> -----Original Message----- >> From: Koichi Suzuki [mailto:ko...@in...] >> Sent: Friday, February 15, 2013 4:11 AM >> To: Arni Sumarlidason >> Cc: Michael Paquier; koi...@gm...<mailto:koi...@gm...>; >> pos...@li...<mailto:pos...@li...> >> Subject: Re: [Postgres-xc-general] pgxc: snapshot >> >> If you're not sure about the configuration, please try pgxc_ctl >> available at >> >> git://github.com/koichi-szk/PGXC-Tools.git >> >> This is bash script (I'm rewriting into C now) so it will help to understand how to configure XC. >> >> Regards; >> --- >> Koichi Suzuki >> >> On Fri, 15 Feb 2013 04:22:49 +0000 >> Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: >> >> > Thank you both for fast response!! >> > >> > RE: Koichi Suzuki >> > I downloaded the git this afternoon. >> > >> > RE: Michael Paquier >> > >> > - Confirm it is from the datanode's log. 
>> > >> > - Both coord & datanode connect via the same gtm_proxy on localhost >> > >> > These are my simplified configs, the only change I make on each >> > node is the nodename, PG_HBA >> > local all all trust >> > host all all 127.0.0.1/32 trust >> > host all all ::1/128 trust >> > host all all 10.100.170.0/24 trust >> > >> > COORD >> > pgxc_node_name = 'coord01' >> > listen_addresses = '*' >> > port = 5432 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > pooler_port = 6670 >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > DATA >> > pgxc_node_name = 'data01' >> > listen_addresses = '*' >> > port = 5433 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > PROXY >> > Nodename = 'proxy01' >> > listen_addresses = '*' >> > port = 6666 >> > gtm_host = '10.100.170.10' >> > gtm_port = 6666 >> > >> > >> > best, >> > >> > Arni >> > >> > From: Michael Paquier [mailto:mic...@gm...] 
>> > Sent: Thursday, February 14, 2013 11:06 PM >> > To: Arni Sumarlidason >> > Cc: pos...@li...<mailto:pos...@li...> >> > Subject: Re: [Postgres-xc-general] pgxc: snapshot >> > >> > >> > On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...<mailto:Arn...@md...%3cmailto:Arn...@md...>>> wrote: >> > Hi Everyone! >> > >> > I am getting these errors, "Warning: do not have a gtm snapshot available"[1]. After researching I found posts about the auto vacuum causing these errors, is this fix or work in progress? Also, I am seeing them without the CONTEXT: automatic vacuum message too. Is this something to worry about? Cluster seems to be functioning normally. >> > >> > Vacuum and analyze from pgadmin looks like this, >> > INFO: vacuuming "public.table" >> > INFO: "table": found 0 removable, 0 nonremovable row versions in 0 >> > pages >> > DETAIL: 0 dead row versions cannot be removed yet. >> > CPU 0.00s/0.00u sec elapsed 0.00 sec. >> > INFO: analyzing "public.table" >> > INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 >> > dead rows; 0 rows in sample, 0 estimated total rows Total query runtime: 15273 ms. >> > >> > Should we use execute direct to perform maintenance? >> > No. Isn't this happening on a Datanode? >> > Be sure first to set gtm_host and gtm_port in postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are fetched of course on Coordinator for normal transaction run but also on all the nodes for autovacuum. >> > -- >> > Michael >> |
From: Arni S. <Arn...@md...> - 2013-02-19 01:13:48
|
Mr. Koichi, You are right, PGADMIN was the source of these warnings. Do you have any idea what would cause the following error: # Cache lookup failed for node when executing CREATE NODE Lastly, I believe there are typos on lines 789, 797 in the pgxc script. Best regards, -----Original Message----- From: koi...@gm...<mailto:koi...@gm...> [mailto:koi...@gm...] On Behalf Of Koichi Suzuki Sent: Monday, February 18, 2013 12:52 AM To: Arni Sumarlidason; Postgres-XC Developers Subject: Re: [Postgres-xc-general] pgxc: snapshot I tried the stress test. Autovacuum seems to work well. I also ran vacuum analyze verbose from psql directly and it worked without problem during this stress test. I ran vacuumdb as well. All worked without problem. I noticed that you used pgAdmin. Unfortunately, pgAdmin has not been tuned to work with XC. I know some of pgAdmin features works well but others don't. Could you try to run vacuum from psql or vacuumdb? If they don't work, please let me know. Best Regards; ---------- Koichi Suzuki 2013/2/18 Koichi Suzuki <ko...@in...<mailto:ko...@in...>>: > Nice to hear that pgxc_ctl helps. > > As to the warning, I will try to reproduce the problem and fix it. I need to find a time for it so please forgive me a bit of time. The test will run many small transactions which will cause autovacuum lauched, as attached. This test was built as Datanode slave stress test. I think this may work as autovacuum lauch test. I will test it with four coordinators and four datanodes, and four gtm_proxies as well. Whole test will take about a couple of hours with five of six-core Xeon servers (one for GTM). > > Do you think this makes sense to reproduce your problem? > > I will run it both on master and REL1_0_STABLE. 
> > Regards; > --- > Koichi > > On Sat, 16 Feb 2013 19:32:11 +0000 > Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: > >> Koichi, and others, >> >> I spun some fresh VMs and ran your script with the identical outcome, GTM Snapshot warnings from the auto vacuum launcher. >> Please advise. >> >> >> Thank you for your script, it does make life easier!! >> >> Best, >> >> -----Original Message----- >> From: Koichi Suzuki [mailto:ko...@in...] >> Sent: Friday, February 15, 2013 4:11 AM >> To: Arni Sumarlidason >> Cc: Michael Paquier; koi...@gm...<mailto:koi...@gm...>; >> pos...@li...<mailto:pos...@li...> >> Subject: Re: [Postgres-xc-general] pgxc: snapshot >> >> If you're not sure about the configuration, please try pgxc_ctl >> available at >> >> git://github.com/koichi-szk/PGXC-Tools.git >> >> This is a bash script (I'm rewriting it in C now), so it will help you understand how to configure XC. >> >> Regards; >> --- >> Koichi Suzuki >> >> On Fri, 15 Feb 2013 04:22:49 +0000 >> Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: >> >> > Thank you both for the fast response!! >> > >> > RE: Koichi Suzuki >> > I downloaded the git this afternoon. >> > >> > RE: Michael Paquier >> > >> > - Confirmed, it is from the datanode's log. 
>> > >> > - Both coord & datanode connect via the same gtm_proxy on localhost >> > >> > These are my simplified configs, the only change I make on each >> > node is the nodename, PG_HBA >> > local all all trust >> > host all all 127.0.0.1/32 trust >> > host all all ::1/128 trust >> > host all all 10.100.170.0/24 trust >> > >> > COORD >> > pgxc_node_name = 'coord01' >> > listen_addresses = '*' >> > port = 5432 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > pooler_port = 6670 >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > DATA >> > pgxc_node_name = 'data01' >> > listen_addresses = '*' >> > port = 5433 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > PROXY >> > Nodename = 'proxy01' >> > listen_addresses = '*' >> > port = 6666 >> > gtm_host = '10.100.170.10' >> > gtm_port = 6666 >> > >> > >> > best, >> > >> > Arni >> > >> > From: Michael Paquier [mailto:mic...@gm...] 
>> > Sent: Thursday, February 14, 2013 11:06 PM >> > To: Arni Sumarlidason >> > Cc: pos...@li...<mailto:pos...@li...> >> > Subject: Re: [Postgres-xc-general] pgxc: snapshot >> > >> > >> > On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: >> > Hi Everyone! >> > >> > I am getting these errors, "Warning: do not have a gtm snapshot available"[1]. After researching I found posts about the auto vacuum causing these errors; is this fixed or a work in progress? Also, I am seeing them without the CONTEXT: automatic vacuum message too. Is this something to worry about? The cluster seems to be functioning normally. >> > >> > Vacuum and analyze from pgadmin looks like this, >> > INFO: vacuuming "public.table" >> > INFO: "table": found 0 removable, 0 nonremovable row versions in 0 >> > pages >> > DETAIL: 0 dead row versions cannot be removed yet. >> > CPU 0.00s/0.00u sec elapsed 0.00 sec. >> > INFO: analyzing "public.table" >> > INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 >> > dead rows; 0 rows in sample, 0 estimated total rows Total query runtime: 15273 ms. >> > >> > Should we use execute direct to perform maintenance? >> > No. Isn't this happening on a Datanode? >> > Be sure first to set gtm_host and gtm_port in postgresql.conf of all the nodes, Coordinator and Datanode included. GXIDs and snapshots are of course fetched on the Coordinator for normal transaction runs, but also on all the nodes for autovacuum. >> > -- >> > Michael >> |
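Koichi's advice in the thread above — run maintenance through psql or vacuumdb against a Coordinator rather than through pgAdmin — can be sketched as a dry run. The commands below only print what would be executed; the port and database name are illustrative examples, not values taken from this thread.

```shell
# Dry-run sketch of XC maintenance via psql/vacuumdb instead of pgAdmin.
# COORD_PORT and DB are hypothetical example values.
COORD_PORT=5432
DB=postgres

# Issue VACUUM ANALYZE through a Coordinator with psql:
echo "psql -p ${COORD_PORT} -d ${DB} -c \"VACUUM ANALYZE VERBOSE\""

# Or use the vacuumdb wrapper, which sends the equivalent SQL:
echo "vacuumdb -p ${COORD_PORT} --analyze --verbose ${DB}"
```

Run this per database; as the thread notes, autovacuum covers routine maintenance on its own once gtm_host and gtm_port are set on every node.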
From: Koichi S. <koi...@gm...> - 2013-02-19 00:07:52
|
Hello, I noticed you have an unnecessary space between '$' and 'HOME'. This could be the cause of the problem. Please check your configuration and the variable datanodeMasterDirs. This variable contains the directories for the datanodes. You can invoke pgxc_ctl -v and then type echo ${datanodeMasterDirs[@]} which will tell you if the variable contains what you want. Regards; ---------- Koichi Suzuki 2013/2/19 abel <ab...@me...>: > hello Koichi, I am using your pgxc tools, and I get the following error > >> FATAL: "$ HOME/nodes/Datanode" is not a valid data directory >> DETAIL: File "/$ HOME/nodes/Datanode/PG_VERSION" is missing. > > > Can anyone help me? > > this error occurs when I try to "pgxc -v init" > > Thanks > > > On 11.02.2013 01:27, Koichi Suzuki wrote: >> >> I have my personal project below. >> >> git://github.com/koichi-szk/PGXC-Tools.git >> >> You will find pgxc_ctl, with a bash script and its manual. I hope they >> provide sufficient information to configure your XC cluster. >> >> Good luck. >> ---------- >> Koichi Suzuki >> >> >> 2013/2/11 Ashutosh Bapat <ash...@en...>: >>> >>> Hi Abel, >>> With the information you have provided, it's not possible to find the >>> correct cause. You will need to provide detailed steps. Generally >>> speaking, >>> you might have missed some argument specifying that the node to be booted is >>> a coordinator OR you might be connecting to a datanode expecting it to be a >>> coordinator. There are any number of possibilities. 
>>> >>> On Sat, Feb 9, 2013 at 3:34 AM, abel <ab...@me...> wrote: >>>> >>>> >>>> what are the steps to install more than 2 coordinators and DataNodes on >>>> different machines? I am setting it up like this website: >>>> http://postgresxc.wikia.com/wiki/Real_Server_configuration >>>> >>>> I'm working with debian 6 >>>> >>>> after configuring I receive the following message: >>>> psql: FATAL: Can not Identify itself Coordinator >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Free Next-Gen Firewall Hardware Offer >>>> Buy your Sophos next-gen firewall before the end March 2013 >>>> and get the hardware for free! Learn more. >>>> http://p.sf.net/sfu/sophos-d2d-feb >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Enterprise Postgres Company >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Free Next-Gen Firewall Hardware Offer >>> Buy your Sophos next-gen firewall before the end March 2013 >>> and get the hardware for free! Learn more. >>> http://p.sf.net/sfu/sophos-d2d-feb >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> > |
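Koichi's diagnosis above comes down to a shell expansion rule: a '$' followed by a space is not a variable reference, so '$ HOME' stays literal instead of expanding to the home directory. A minimal demonstration (the path mirrors abel's error message; nothing here touches a real cluster):

```shell
# '$' followed by a space is taken literally by the shell; only '$HOME'
# (no space) expands to the user's home directory.
broken="$ HOME/nodes/Datanode"   # stays literally '$ HOME/nodes/Datanode'
fixed="$HOME/nodes/Datanode"     # expands, e.g. /home/abel/nodes/Datanode
echo "broken: $broken"
echo "fixed:  $fixed"
```

This is why the error reported "$ HOME/nodes/Datanode" as not a valid data directory: the literal, unexpanded string was handed to the datanode as its data path.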
From: abel <ab...@me...> - 2013-02-18 18:39:48
|
hello Koichi, I am using your pgxc tools, and I get the following error > FATAL: "$ HOME/nodes/Datanode" is not a valid data directory > DETAIL: File "/$ HOME/nodes/Datanode/PG_VERSION" is missing. Can anyone help me? this error occurs when I try to "pgxc -v init" Thanks On 11.02.2013 01:27, Koichi Suzuki wrote: > I have my personal project below. > > git://github.com/koichi-szk/PGXC-Tools.git > > You will find pgxc_ctl, with a bash script and its manual. I hope they > provide sufficient information to configure your XC cluster. > > Good luck. > ---------- > Koichi Suzuki > > > 2013/2/11 Ashutosh Bapat <ash...@en...>: >> Hi Abel, >> With the information you have provided, it's not possible to find >> the >> correct cause. You will need to provide detailed steps. Generally >> speaking, >> you might have missed some argument specifying that the node to be booted >> is >> a coordinator OR you might be connecting to a datanode expecting it to >> be a >> coordinator. There are any number of possibilities. >> >> >> On Sat, Feb 9, 2013 at 3:34 AM, abel <ab...@me...> wrote: >>> >>> >>> what are the steps to install more than 2 coordinators and >>> DataNodes on >>> different machines? I am setting it up like this website: >>> http://postgresxc.wikia.com/wiki/Real_Server_configuration >>> >>> I'm working with debian 6 >>> >>> after configuring I receive the following message: >>> psql: FATAL: Can not Identify itself Coordinator >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Free Next-Gen Firewall Hardware Offer >>> Buy your Sophos next-gen firewall before the end March 2013 >>> and get the hardware for free! Learn more. >>> http://p.sf.net/sfu/sophos-d2d-feb >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> >> >> ------------------------------------------------------------------------------ >> Free Next-Gen Firewall Hardware Offer >> Buy your Sophos next-gen firewall before the end March 2013 >> and get the hardware for free! Learn more. >> http://p.sf.net/sfu/sophos-d2d-feb >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> |
From: kushal <kus...@gm...> - 2013-02-18 09:27:13
|
But I guess the impact would be restricted to the schema of one particular database instance, which in this case would only be DB1. The schema for DB2 would remain intact. Right? On 18 February 2013 14:36, Ashutosh Bapat <ash...@en...> wrote: > > > On Mon, Feb 18, 2013 at 2:32 PM, kushal <kus...@gm...> wrote: > >> I think your responses answer my question. So here is how my database >> structure looks without XC. >> >> There should be one database with table1, table2 and table3 for each >> customer under one postgres server. >> >> So if I am not wrong, then with XC, for coordinators A and B and datanodes >> C and D, each would contain database instances DB1, DB2 and so on with the same >> schema (which in this case is table1, table2 and table3), and distribution >> or replication would depend on how I do it while creating tables (table1, >> table2, table3) for each of the individual database instances. >> > > Up to this point it looks fine. > > >> And even if the schema changes for DB1 in future, it won't have an impact on >> DB2 queries or storage. >> >> > Any DDL you perform on XC has to be routed through the coordinator, and hence > it affects all the datanodes. > > >> --Kushal >> >> >> On 18 February 2013 13:41, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> Hi Kushal, >>> Thanks for your interest in Postgres-XC. >>> >>> In Postgres-XC, every database/schema is created on all datanodes and >>> coordinators. So, one cannot create datanode-specific databases. The only >>> objects that are distributed are the tables. You can distribute your data >>> across datanodes. >>> >>> But you are using the term database instances, which is confusing. Do you >>> mean database system instances? >>> >>> Maybe an example would help to understand your system's architecture. >>> >>> On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> wrote: >>> >>>> Hi >>>> >>>> This is my first post on the postgres xc mailing list. 
Let me first >>>> just congratulate the whole team for coming up with such a cool framework. >>>> >>>> I have a few questions around the requirements we have to support our >>>> product. Now it is required to keep multiple database instances, let's say >>>> one for each customer, accessible from one app. Without postgres-xc, I can >>>> do that by just creating a connection to the specific database instance for a >>>> particular customer from my app. >>>> >>>> Can the same be done with the postgres-xc interface? So basically, I should >>>> be able to create different database instances across datanodes accessible >>>> through any coordinator. >>>> If yes, how does the distribution/replication work? Is it going to be >>>> somewhat different? >>>> >>>> >>>> Thanks & Regards, >>>> Kushal >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> The Go Parallel Website, sponsored by Intel - in partnership with >>>> Geeknet, >>>> is your hub for all things parallel software development, from weekly >>>> thought >>>> leadership blogs to news, videos, case studies, tutorials, tech docs, >>>> whitepapers, evaluation guides, and opinion stories. Check out the most >>>> recent posts - join the conversation now. >>>> http://goparallel.sourceforge.net/ >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... 
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Enterprise Postgres Company >>> >> >> >> >> ------------------------------------------------------------------------------ >> The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, >> is your hub for all things parallel software development, from weekly >> thought >> leadership blogs to news, videos, case studies, tutorials, tech docs, >> whitepapers, evaluation guides, and opinion stories. Check out the most >> recent posts - join the conversation now. >> http://goparallel.sourceforge.net/ >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
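Ashutosh's answers in this thread boil down to: databases and schemas exist on every node, while distribution is chosen per table in the CREATE TABLE statement. A dry-run sketch of what the per-customer setup discussed above might look like — the database names, table definitions, and port are hypothetical, and the commands are only echoed, not executed:

```shell
# Dry-run sketch: per-table distribution in Postgres-XC, issued through a
# Coordinator. All names and the port are hypothetical examples.
COORD_PORT=5432
for db in db1 db2; do
  # Spread rows of table1 across datanodes by hashing the id column:
  echo "psql -p ${COORD_PORT} -d ${db} -c \"CREATE TABLE table1 (id int, val text) DISTRIBUTE BY HASH (id)\""
  # Keep a full copy of table2 on every datanode:
  echo "psql -p ${COORD_PORT} -d ${db} -c \"CREATE TABLE table2 (id int, val text) DISTRIBUTE BY REPLICATION\""
done
```

Since DDL is routed through a Coordinator to all datanodes, a later schema change to db1 would still run on every datanode, but only inside db1; db2's tables are separate objects and stay intact.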