From: Ashutosh B. <ash...@en...> - 2013-02-20 04:08:34

Hi Arni,

What do you mean by "in the execution of the FQS logic -- it seems to query one node at a time instead of in parallel"? What gave you that impression? When a query is fired on multiple nodes, it is done in parallel, i.e. asynchronously. For a better understanding of how a query is processed in Postgres-XC, please watch the following video: http://www.youtube.com/watch?v=g_9CYat8nkY

On Wed, Feb 20, 2013 at 8:01 AM, Arni Sumarlidason <Arn...@md...> wrote:
> [...]

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company

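As a hedged illustration of the behavior being discussed (the table and column names follow Arni's query below; `enable_fast_query_shipping` is the GUC associated with FQS in Michael Paquier's article, and the exact plan-node names vary by Postgres-XC version), the planner's choice can be inspected with EXPLAIN on the coordinator:

```sql
-- Inspect how the coordinator ships the join to the datanodes.
EXPLAIN VERBOSE
SELECT *
FROM data m, world_boundaries b
WHERE b.cntry_name ILIKE '%Iceland%'
  AND m.point_geom && b.wkb_geometry
  AND st_intersects(m.point_geom, b.wkb_geometry);

-- Toggling fast query shipping changes the plan shape
-- (e.g. a fully shipped query vs. REMOTE_TABLE_QUERY), but not
-- whether the datanodes are driven asynchronously underneath.
SET enable_fast_query_shipping = off;
EXPLAIN VERBOSE
SELECT *
FROM data m, world_boundaries b
WHERE m.point_geom && b.wkb_geometry;
```
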
From: Arni S. <Arn...@md...> - 2013-02-20 02:31:32

Good Evening all,

Thank you for your support over the last few days, really appreciate it. :)

I have a question regarding query optimization. I have one table distributed by HASH and another by REPLICATION. When comparing values between the two tables [1], I noticed that the FQS logic seems to query one node at a time instead of in parallel. I googled FQS and found an article by Michael Paquier, and played with disabling the feature. However, disabling it changed the execution method to _REMOTE_TABLE_QUERY_, which also did not seem to be parallel. Is there any way to optimize for _REMOTE_GROUP_QUERY_?

Thank you again,

Arni Sumarlidason

[1] SELECT * FROM data m, world_boundaries b
    WHERE b.cntry_name ILIKE '%Iceland%'
      AND m.point_geom && b.wkb_geometry
      AND st_intersects(m.point_geom, b.wkb_geometry);

    data - hashed
    world_boundaries - replicated

From: Koichi S. <ko...@in...> - 2013-02-19 12:01:22

I found some wrong wording; please let me correct it. In my previous mail I wrote that you can connect to either DB1 or DB2 and create table1, table2 and table3 for both databases. Just to clarify: you have to create table1, table2 and table3 on DB1 and DB2 separately. CREATE TABLE TABLE1 ... on DB1 will not propagate to DB2.

On Tue, 19 Feb 2013 17:10:00 +0530, kushal <kus...@gm...> wrote:
> [...]

From: kushal <kus...@gm...> - 2013-02-19 11:40:08

Yes, this is what I need, and it should allow me to get started with my own setup. Though I might bother you guys again with more doubts :)

Thanks.

On 19 February 2013 16:17, Koichi Suzuki <koi...@gm...> wrote:
> [...]

From: Koichi S. <koi...@gm...> - 2013-02-19 10:47:28

Postgres-XC accepts the CREATE DATABASE statement, the same as PostgreSQL. So,

    CREATE DATABASE DB1;

creates a database DB1 which spans all the nodes. You can similarly create DB2. Then you can connect to either DB1 or DB2 and create table1, table2 and table3 for both databases. You can set up privileges so that Customer1 can access only DB1 and Customer2 only DB2, respectively.

The schemas of these databases are stored on all the nodes.

This is very similar to PostgreSQL's database and user administration.

At present, you can manually specify which nodes each table should be distributed/replicated to at CREATE TABLE or ALTER TABLE time. This helps control which servers a specific customer's data is stored on.

I hope this picture meets your needs.
----------
Koichi Suzuki

2013/2/18 kushal <kus...@gm...>:
> But I guess the impact would be restricted to the schema of one particular database, which in this case would only be DB1. The schema for DB2 would remain intact. Right?
>
> On 18 February 2013 14:36, Ashutosh Bapat <ash...@en...> wrote:
>> On Mon, Feb 18, 2013 at 2:32 PM, kushal <kus...@gm...> wrote:
>>> I think your responses answer my question. So here is how my database structure looks without XC.
>>>
>>> There is one database with table1, table2 and table3 for each customer under one postgres server.
>>>
>>> So if I am not wrong, then with XC, for coordinators A and B and datanodes C and D, each would contain databases DB1, DB2 and so on with the same schema (which in this case is table1, table2 and table3), and distribution or replication would depend on how I create the tables (table1, table2, table3) in each of the individual databases.
>>
>> Up to this point it looks fine.
>>
>>> And even if the schema changes for DB1 in the future, it won't have an impact on DB2 queries or storage.
>>
>> Any DDL you perform on XC has to be routed through the coordinator, and hence it affects all the datanodes.
>>
>>> --Kushal
>>>
>>> On 18 February 2013 13:41, Ashutosh Bapat <ash...@en...> wrote:
>>>> Hi Kushal,
>>>> Thanks for your interest in Postgres-XC.
>>>>
>>>> In Postgres-XC, every database/schema is created on all datanodes and coordinators, so one cannot create datanode-specific databases. The only objects that are distributed are the tables. You can distribute your data across datanodes.
>>>>
>>>> But you are using the term "database instances", which is confusing. Do you mean database system instances?
>>>>
>>>> Maybe an example would help us understand your system's architecture.
>>>>
>>>> On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> wrote:
>>>>> Hi,
>>>>>
>>>>> This is my first post on the postgres-xc mailing list. Let me first congratulate the whole team for coming up with such a cool framework.
>>>>>
>>>>> I have a few questions around the requirements we have to support our product. We are required to keep multiple database instances, say one for each customer, accessible from one app. Without postgres-xc, I can do that by just creating a connection to the specific database instance for a particular customer from my app.
>>>>>
>>>>> Can the same be done with the postgres-xc interface? So basically, I should be able to create different database instances across datanodes, accessible through any coordinator. If yes, how does the distribution/replication work? Is it going to be somewhat different?
>>>>>
>>>>> Thanks & Regards,
>>>>> Kushal
>>>>
>>>> --
>>>> Best Wishes,
>>>> Ashutosh Bapat
>>>> EnterpriseDB Corporation
>>>> The Enterprise Postgres Company

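Koichi's per-customer layout can be sketched as DDL (a minimal sketch: the node names `dn1`/`dn2`, the columns, and the role `customer1` are hypothetical; the `DISTRIBUTE BY ... TO NODE` clause follows Postgres-XC's CREATE TABLE syntax, which differs slightly between XC releases):

```sql
-- Each database spans all nodes; create one per customer.
CREATE DATABASE db1;
CREATE DATABASE db2;

-- Connect to db1, then create its tables. This DDL does not
-- propagate to db2; repeat it there separately.
CREATE TABLE table1 (id int, payload text)
    DISTRIBUTE BY HASH (id) TO NODE (dn1, dn2);
CREATE TABLE table2 (id int, payload text)
    DISTRIBUTE BY REPLICATION;

-- Restrict each customer to their own database.
REVOKE CONNECT ON DATABASE db1 FROM PUBLIC;
GRANT CONNECT ON DATABASE db1 TO customer1;
```

The privilege statements are plain PostgreSQL; only the DISTRIBUTE BY clause is XC-specific.
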
From: Arni S. <Arn...@md...> - 2013-02-19 02:51:43

Ugggh! Thanks for the fast response! Of course it was something silly.

Sent from my iPhone

On Feb 18, 2013, at 9:48 PM, Michael Paquier <mic...@gm...> wrote:
> [...]

From: Koichi S. <koi...@gm...> - 2013-02-19 02:51:24

This is related to the postgresql.conf parameters max_coordinators and max_datanodes. The default value is 16, so you should extend them. They are specific to XC.

Regards;
----------
Koichi Suzuki

2013/2/19 Arni Sumarlidason <Arn...@md...>:
> The issue seems to be related to the size of the cluster. When attempting to initialize 20 nodes, nodes 18, 19, and 20 consistently failed (reproducible). I tried a cluster of 17 with fewer errors. I tried 10 with success.
>
> Best,
>
> From: Arni Sumarlidason
> Sent: Monday, February 18, 2013 8:14 PM
> To: 'koi...@gm...'; 'Postgres-XC Developers'
> Cc: pos...@li...; mic...@gm...
> Subject: RE: [Postgres-xc-general] pgxc: snapshot
>
> Mr. Koichi,
>
> You are right, pgAdmin was the source of these warnings.
>
> Do you have any idea what would cause the following error:
> # Cache lookup failed for node when executing CREATE NODE
>
> Lastly, I believe there are typos on lines 789 and 797 in the pgxc script.
>
> Best regards,
>
> -----Original Message-----
> From: koi...@gm... On Behalf Of Koichi Suzuki
> Sent: Monday, February 18, 2013 12:52 AM
> To: Arni Sumarlidason; Postgres-XC Developers
> Subject: Re: [Postgres-xc-general] pgxc: snapshot
>
> I tried the stress test. Autovacuum seems to work well. I also ran vacuum analyze verbose from psql directly, and it worked without problem during this stress test. I ran vacuumdb as well. All worked without problem.
>
> I noticed that you used pgAdmin. Unfortunately, pgAdmin has not been tuned to work with XC. I know some pgAdmin features work well, but others don't. Could you try to run vacuum from psql, or vacuumdb? If they don't work, please let me know.
>
> Best Regards;
> ----------
> Koichi Suzuki
>
> 2013/2/18 Koichi Suzuki <ko...@in...>:
>> Nice to hear that pgxc_ctl helps.
>>
>> As to the warning, I will try to reproduce the problem and fix it. I need to find time for it, so please allow me a bit of time. The test will run many small transactions, which will cause autovacuum to be launched, as attached. This test was built as a Datanode slave stress test; I think it may work as an autovacuum launch test as well. I will test it with four coordinators, four datanodes, and four gtm_proxies. The whole test will take about a couple of hours with five six-core Xeon servers (one for GTM).
>>
>> Do you think this makes sense to reproduce your problem?
>>
>> I will run it both on master and REL1_0_STABLE.
>>
>> Regards;
>> ---
>> Koichi
>>
>> On Sat, 16 Feb 2013 19:32:11 +0000, Arni Sumarlidason <Arn...@md...> wrote:
>>> Koichi, and others,
>>>
>>> I spun up some fresh VMs and ran your script, with the identical outcome: GTM snapshot warnings from the autovacuum launcher. Please advise.
>>>
>>> Thank you for your script, it does make life easier!!
>>>
>>> Best,
>>>
>>> -----Original Message-----
>>> From: Koichi Suzuki [mailto:ko...@in...]
>>> Sent: Friday, February 15, 2013 4:11 AM
>>> To: Arni Sumarlidason
>>> Cc: Michael Paquier; koi...@gm...; pos...@li...
>>> Subject: Re: [Postgres-xc-general] pgxc: snapshot
>>>
>>> If you're not sure about the configuration, please try pgxc_ctl, available at
>>>
>>>     git://github.com/koichi-szk/PGXC-Tools.git
>>>
>>> This is a bash script (I'm rewriting it in C now), so it will help you understand how to configure XC.
>>>
>>> Regards;
>>> ---
>>> Koichi Suzuki
>>>
>>> On Fri, 15 Feb 2013 04:22:49 +0000, Arni Sumarlidason <Arn...@md...> wrote:
>>>> Thank you both for the fast response!!
>>>>
>>>> RE: Koichi Suzuki
>>>> I downloaded the git this afternoon.
>>>>
>>>> RE: Michael Paquier
>>>> - Confirmed it is from the datanode's log.
>>>> - Both coordinator and datanode connect via the same gtm_proxy on localhost.
>>>>
>>>> These are my simplified configs; the only change I make on each node is the node name.
>>>>
>>>> PG_HBA
>>>> local   all   all                     trust
>>>> host    all   all   127.0.0.1/32      trust
>>>> host    all   all   ::1/128           trust
>>>> host    all   all   10.100.170.0/24   trust
>>>>
>>>> COORD
>>>> pgxc_node_name = 'coord01'
>>>> listen_addresses = '*'
>>>> port = 5432
>>>> max_connections = 200
>>>> gtm_port = 6666
>>>> gtm_host = 'localhost'
>>>> pooler_port = 6670
>>>> shared_buffers = 32MB
>>>> work_mem = 1MB
>>>> maintenance_work_mem = 16MB
>>>> max_stack_depth = 2MB
>>>> log_timezone = 'US/Eastern'
>>>> datestyle = 'iso, mdy'
>>>> timezone = 'US/Eastern'
>>>> lc_messages = 'en_US.UTF-8'
>>>> lc_monetary = 'en_US.UTF-8'
>>>> lc_numeric = 'en_US.UTF-8'
>>>> lc_time = 'en_US.UTF-8'
>>>> default_text_search_config = 'pg_catalog.english'
>>>>
>>>> DATA
>>>> pgxc_node_name = 'data01'
>>>> listen_addresses = '*'
>>>> port = 5433
>>>> max_connections = 200
>>>> gtm_port = 6666
>>>> gtm_host = 'localhost'
>>>> shared_buffers = 32MB
>>>> work_mem = 1MB
>>>> maintenance_work_mem = 16MB
>>>> max_stack_depth = 2MB
>>>> log_timezone = 'US/Eastern'
>>>> datestyle = 'iso, mdy'
>>>> timezone = 'US/Eastern'
>>>> lc_messages = 'en_US.UTF-8'
>>>> lc_monetary = 'en_US.UTF-8'
>>>> lc_numeric = 'en_US.UTF-8'
>>>> lc_time = 'en_US.UTF-8'
>>>> default_text_search_config = 'pg_catalog.english'
>>>>
>>>> PROXY
>>>> nodename = 'proxy01'
>>>> listen_addresses = '*'
>>>> port = 6666
>>>> gtm_host = '10.100.170.10'
>>>> gtm_port = 6666
>>>>
>>>> Best,
>>>> Arni
>>>>
>>>> From: Michael Paquier [mailto:mic...@gm...]
>>>> Sent: Thursday, February 14, 2013 11:06 PM
>>>> To: Arni Sumarlidason
>>>> Cc: pos...@li...
>>>> Subject: Re: [Postgres-xc-general] pgxc: snapshot
>>>>
>>>> On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:
>>>>> Hi Everyone!
>>>>>
>>>>> I am getting these errors: "Warning: do not have a gtm snapshot available" [1]. After researching, I found posts about autovacuum causing these errors; is this fixed, or a work in progress? Also, I am seeing them without the "CONTEXT: automatic vacuum" message too. Is this something to worry about? The cluster seems to be functioning normally.
>>>>>
>>>>> Vacuum and analyze from pgAdmin looks like this:
>>>>> INFO: vacuuming "public.table"
>>>>> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
>>>>> DETAIL: 0 dead row versions cannot be removed yet.
>>>>> CPU 0.00s/0.00u sec elapsed 0.00 sec.
>>>>> INFO: analyzing "public.table"
>>>>> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
>>>>> Total query runtime: 15273 ms.
>>>>>
>>>>> Should we use EXECUTE DIRECT to perform maintenance?
>>>>
>>>> No. Isn't this happening on a Datanode? Be sure first to set gtm_host and gtm_port in postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are fetched on the Coordinator for a normal transaction run, of course, but also on all the nodes for autovacuum.
>>>> --
>>>> Michael

From: Michael P. <mic...@gm...> - 2013-02-19 02:48:31
|
On Tue, Feb 19, 2013 at 11:37 AM, Arni Sumarlidason < Arn...@md...> wrote: > The issue seems to be related to the size of the cluster. When > attempting to initialize 20 nodes, nodes 18, 19, and 20 consistently failed > (reproducible). I tried a cluster of 17 with fewer errors. I tried 10 with > success. > It looks like you did not update max_coordinators and max_datanodes in postgresql.conf. The default value is 16, which would explain why cluster setup is failing with more than 16 nodes. -- Michael |
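For reference, a sketch of the change Michael describes, assuming the GUC names are max_coordinators and max_datanodes (the values here are illustrative); it would need to be set in postgresql.conf on every node before initializing a cluster larger than the default limit:

```
# postgresql.conf on each coordinator and datanode (illustrative values)
max_coordinators = 24   # default 16; raise above the planned node count
max_datanodes = 24      # default 16
```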
|
From: Arni S. <Arn...@md...> - 2013-02-19 02:37:41
|
The issue seems to be related to the size of the cluster. When attempting to initialize 20 nodes, nodes 18, 19, and 20 consistently failed (reproducible). I tried a cluster of 17 with fewer errors. I tried 10 with success. Best, From: Arni Sumarlidason Sent: Monday, February 18, 2013 8:14 PM To: 'koi...@gm...'; 'Postgres-XC Developers' Cc: pos...@li...; mic...@gm... Subject: RE: [Postgres-xc-general] pgxc: snapshot Mr. Koichi, You are right, PGADMIN was the source of these warnings. Do you have any idea what would cause the following error: # Cache lookup failed for node when executing CREATE NODE Lastly, I believe there are typos on lines 789, 797 in the pgxc script. Best regards, -----Original Message----- From: koi...@gm...<mailto:koi...@gm...> [mailto:koi...@gm...] On Behalf Of Koichi Suzuki Sent: Monday, February 18, 2013 12:52 AM To: Arni Sumarlidason; Postgres-XC Developers Subject: Re: [Postgres-xc-general] pgxc: snapshot I tried the stress test. Autovacuum seems to work well. I also ran vacuum analyze verbose from psql directly and it worked without problem during this stress test. I ran vacuumdb as well. All worked without problem. I noticed that you used pgAdmin. Unfortunately, pgAdmin has not been tuned to work with XC. I know some pgAdmin features work well but others don't. Could you try to run vacuum from psql or vacuumdb? If they don't work, please let me know. Best Regards; ---------- Koichi Suzuki 2013/2/18 Koichi Suzuki <ko...@in...<mailto:ko...@in...>>: > Nice to hear that pgxc_ctl helps. > > As to the warning, I will try to reproduce the problem and fix it. I need to find time for it, so please give me a bit of time. The test will run many small transactions, which will cause autovacuum to be launched, as attached. This test was built as a Datanode slave stress test. I think this may work as an autovacuum launch test. I will test it with four coordinators and four datanodes, and four gtm_proxies as well. 
The whole test will take about a couple of hours with five six-core Xeon servers (one for GTM). > > Do you think this makes sense to reproduce your problem? > > I will run it both on master and REL1_0_STABLE. > > Regards; > --- > Koichi > > On Sat, 16 Feb 2013 19:32:11 +0000 > Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: > >> Koichi, and others, >> >> I spun some fresh VMs and ran your script with the identical outcome, GTM Snapshot warnings from the auto vacuum launcher. >> Please advise. >> >> >> Thank you for your script, it does make life easier!! >> >> Best, >> >> -----Original Message----- >> From: Koichi Suzuki [mailto:ko...@in...] >> Sent: Friday, February 15, 2013 4:11 AM >> To: Arni Sumarlidason >> Cc: Michael Paquier; koi...@gm...<mailto:koi...@gm...>; >> pos...@li...<mailto:pos...@li...> >> Subject: Re: [Postgres-xc-general] pgxc: snapshot >> >> If you're not sure about the configuration, please try pgxc_ctl >> available at >> >> git://github.com/koichi-szk/PGXC-Tools.git >> >> This is a bash script (I'm rewriting it in C now), so it will help you understand how to configure XC. >> >> Regards; >> --- >> Koichi Suzuki >> >> On Fri, 15 Feb 2013 04:22:49 +0000 >> Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: >> >> > Thank you both for the fast response!! >> > >> > RE: Koichi Suzuki >> > I downloaded the git this afternoon. >> > >> > RE: Michael Paquier >> > >> > - Confirm it is from the datanode's log. 
>> > >> > - Both coord & datanode connect via the same gtm_proxy on localhost >> > >> > These are my simplified configs, the only change I make on each >> > node is the nodename, PG_HBA >> > local all all trust >> > host all all 127.0.0.1/32 trust >> > host all all ::1/128 trust >> > host all all 10.100.170.0/24 trust >> > >> > COORD >> > pgxc_node_name = 'coord01' >> > listen_addresses = '*' >> > port = 5432 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > pooler_port = 6670 >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > DATA >> > pgxc_node_name = 'data01' >> > listen_addresses = '*' >> > port = 5433 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > PROXY >> > Nodename = 'proxy01' >> > listen_addresses = '*' >> > port = 6666 >> > gtm_host = '10.100.170.10' >> > gtm_port = 6666 >> > >> > >> > best, >> > >> > Arni >> > >> > From: Michael Paquier [mailto:mic...@gm...] 
>> > Sent: Thursday, February 14, 2013 11:06 PM >> > To: Arni Sumarlidason >> > Cc: pos...@li...<mailto:pos...@li...> >> > Subject: Re: [Postgres-xc-general] pgxc: snapshot >> > >> > >> > On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...<mailto:Arn...@md...%3cmailto:Arn...@md...>>> wrote: >> > Hi Everyone! >> > >> > I am getting these errors, "Warning: do not have a gtm snapshot available"[1]. After researching I found posts about the auto vacuum causing these errors, is this fix or work in progress? Also, I am seeing them without the CONTEXT: automatic vacuum message too. Is this something to worry about? Cluster seems to be functioning normally. >> > >> > Vacuum and analyze from pgadmin looks like this, >> > INFO: vacuuming "public.table" >> > INFO: "table": found 0 removable, 0 nonremovable row versions in 0 >> > pages >> > DETAIL: 0 dead row versions cannot be removed yet. >> > CPU 0.00s/0.00u sec elapsed 0.00 sec. >> > INFO: analyzing "public.table" >> > INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 >> > dead rows; 0 rows in sample, 0 estimated total rows Total query runtime: 15273 ms. >> > >> > Should we use execute direct to perform maintenance? >> > No. Isn't this happening on a Datanode? >> > Be sure first to set gtm_host and gtm_port in postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are fetched of course on Coordinator for normal transaction run but also on all the nodes for autovacuum. >> > -- >> > Michael >> |
|
From: Arni S. <Arn...@md...> - 2013-02-19 01:13:48
|
Mr. Koichi, You are right, PGADMIN was the source of these warnings. Do you have any idea what would cause the following error: # Cache lookup failed for node when executing CREATE NODE Lastly, I believe there are typos on lines 789, 797 in the pgxc script. Best regards, -----Original Message----- From: koi...@gm...<mailto:koi...@gm...> [mailto:koi...@gm...] On Behalf Of Koichi Suzuki Sent: Monday, February 18, 2013 12:52 AM To: Arni Sumarlidason; Postgres-XC Developers Subject: Re: [Postgres-xc-general] pgxc: snapshot I tried the stress test. Autovacuum seems to work well. I also ran vacuum analyze verbose from psql directly and it worked without problem during this stress test. I ran vacuumdb as well. All worked without problem. I noticed that you used pgAdmin. Unfortunately, pgAdmin has not been tuned to work with XC. I know some pgAdmin features work well but others don't. Could you try to run vacuum from psql or vacuumdb? If they don't work, please let me know. Best Regards; ---------- Koichi Suzuki 2013/2/18 Koichi Suzuki <ko...@in...<mailto:ko...@in...>>: > Nice to hear that pgxc_ctl helps. > > As to the warning, I will try to reproduce the problem and fix it. I need to find time for it, so please give me a bit of time. The test will run many small transactions, which will cause autovacuum to be launched, as attached. This test was built as a Datanode slave stress test. I think this may work as an autovacuum launch test. I will test it with four coordinators and four datanodes, and four gtm_proxies as well. The whole test will take about a couple of hours with five six-core Xeon servers (one for GTM). > > Do you think this makes sense to reproduce your problem? > > I will run it both on master and REL1_0_STABLE. 
> > Regards; > --- > Koichi > > On Sat, 16 Feb 2013 19:32:11 +0000 > Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: > >> Koichi, and others, >> >> I spun some fresh VMs and ran your script with the identical outcome, GTM Snapshot warnings from the auto vacuum launcher. >> Please advise. >> >> >> Thank you for your script, it does make life easier!! >> >> Best, >> >> -----Original Message----- >> From: Koichi Suzuki [mailto:ko...@in...] >> Sent: Friday, February 15, 2013 4:11 AM >> To: Arni Sumarlidason >> Cc: Michael Paquier; koi...@gm...<mailto:koi...@gm...>; >> pos...@li...<mailto:pos...@li...> >> Subject: Re: [Postgres-xc-general] pgxc: snapshot >> >> If you're not sure about the configuration, please try pgxc_ctl >> available at >> >> git://github.com/koichi-szk/PGXC-Tools.git >> >> This is bash script (I'm rewriting into C now) so it will help to understand how to configure XC. >> >> Regards; >> --- >> Koichi Suzuki >> >> On Fri, 15 Feb 2013 04:22:49 +0000 >> Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...>> wrote: >> >> > Thank you both for fast response!! >> > >> > RE: Koichi Suzuki >> > I downloaded the git this afternoon. >> > >> > RE: Michael Paquier >> > >> > - Confirm it is from the datanode's log. 
>> > >> > - Both coord & datanode connect via the same gtm_proxy on localhost >> > >> > These are my simplified configs, the only change I make on each >> > node is the nodename, PG_HBA >> > local all all trust >> > host all all 127.0.0.1/32 trust >> > host all all ::1/128 trust >> > host all all 10.100.170.0/24 trust >> > >> > COORD >> > pgxc_node_name = 'coord01' >> > listen_addresses = '*' >> > port = 5432 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > pooler_port = 6670 >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > DATA >> > pgxc_node_name = 'data01' >> > listen_addresses = '*' >> > port = 5433 >> > max_connections = 200 >> > >> > gtm_port = 6666 >> > gtm_host = 'localhost' >> > >> > shared_buffers = 32MB >> > work_mem = 1MB >> > maintenance_work_mem = 16MB >> > max_stack_depth = 2MB >> > >> > log_timezone = 'US/Eastern' >> > datestyle = 'iso, mdy' >> > timezone = 'US/Eastern' >> > lc_messages = 'en_US.UTF-8' >> > lc_monetary = 'en_US.UTF-8' >> > lc_numeric = 'en_US.UTF-8' >> > lc_time = 'en_US.UTF-8' >> > default_text_search_config = 'pg_catalog.english' >> > >> > PROXY >> > Nodename = 'proxy01' >> > listen_addresses = '*' >> > port = 6666 >> > gtm_host = '10.100.170.10' >> > gtm_port = 6666 >> > >> > >> > best, >> > >> > Arni >> > >> > From: Michael Paquier [mailto:mic...@gm...] 
>> > Sent: Thursday, February 14, 2013 11:06 PM >> > To: Arni Sumarlidason >> > Cc: pos...@li...<mailto:pos...@li...> >> > Subject: Re: [Postgres-xc-general] pgxc: snapshot >> > >> > >> > On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...<mailto:Arn...@md...<mailto:Arn...@md...%3cmailto:Arn...@md...>>> wrote: >> > Hi Everyone! >> > >> > I am getting these errors, "Warning: do not have a gtm snapshot available"[1]. After researching I found posts about the auto vacuum causing these errors, is this fix or work in progress? Also, I am seeing them without the CONTEXT: automatic vacuum message too. Is this something to worry about? Cluster seems to be functioning normally. >> > >> > Vacuum and analyze from pgadmin looks like this, >> > INFO: vacuuming "public.table" >> > INFO: "table": found 0 removable, 0 nonremovable row versions in 0 >> > pages >> > DETAIL: 0 dead row versions cannot be removed yet. >> > CPU 0.00s/0.00u sec elapsed 0.00 sec. >> > INFO: analyzing "public.table" >> > INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 >> > dead rows; 0 rows in sample, 0 estimated total rows Total query runtime: 15273 ms. >> > >> > Should we use execute direct to perform maintenance? >> > No. Isn't this happening on a Datanode? >> > Be sure first to set gtm_host and gtm_port in postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are fetched of course on Coordinator for normal transaction run but also on all the nodes for autovacuum. >> > -- >> > Michael >> |
|
From: Koichi S. <koi...@gm...> - 2013-02-19 00:07:52
|
Hello, I noticed you have an unnecessary space between '$' and 'HOME'.
This could be the cause of the problem. Please check your configuration
and the variable datanodeMasterDirs. This variable contains the
directory values for the datanodes. You can invoke pgxc_ctl -v and
then type echo ${datanodeMasterDirs[@]}, which will tell you whether the
variable contains what you want.
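The expansion failure itself is easy to reproduce in any shell. A quick sketch (the paths are hypothetical): a space after '$' stops the shell from recognizing a variable reference, so the text is kept literally, which matches the FATAL message quoted below.

```shell
dir_ok="$HOME/nodes/Datanode"    # '$HOME' expands to the home directory
dir_bad="$ HOME/nodes/Datanode"  # '$ ' cannot start a variable name, so it stays literal
echo "$dir_ok"
echo "$dir_bad"
```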
Regards;
----------
Koichi Suzuki
2013/2/19 abel <ab...@me...>:
> Hello Koichi, I am using Koichi's pgxc tools, and I get the following error
>
>> FATAL: "$ HOME/nodes/Datanode" is not a valid data directory
>> DETAIL: File "/$ HOME/nodes/Datanode/PG_VERSION" is missing.
>
>
> Can anyone help me?
>
> this error occurs when I try to "pgxc -v init"
>
> Thanks
>
>
> On 11.02.2013 01:27, Koichi Suzuki wrote:
>>
>> I have my personal project below.
>>
>> git://github.com/koichi-szk/PGXC-Tools.git
>>
>> You will find pgxc_ctl, with a bash script and its manual. I hope they
>> provide sufficient information to configure your XC cluster.
>>
>> Good luck.
>> ----------
>> Koichi Suzuki
>>
>>
>> 2013/2/11 Ashutosh Bapat <ash...@en...>:
>>>
>>> Hi Abel,
>>> WIth the information you have provided, it's not possible to find the
>>> correct cause. You will need to provide detailed steps. Generally
>>> speaking,
>>> you might have missed some argument specifying that the node to be booted is
>>> a coordinator, OR you might be connecting to a datanode expecting it to be a
>>> coordinator. There are any number of possibilities.
>>>
>>>
>>> On Sat, Feb 9, 2013 at 3:34 AM, abel <ab...@me...> wrote:
>>>>
>>>>
>>>>
>>>> What are the steps to install more than 2 coordinators and DataNodes on
>>>> different machines? I am setting up like this website
>>>> http://postgresxc.wikia.com/wiki/Real_Server_configuration
>>>>
>>>> I'm working with debian 6
>>>>
>>>> after configuring I receive the following message:
>>>> psql: FATAL: Can not Identify itself Coordinator
>>>>
>>>>
>>>>
>>>>
>>>> ------------------------------------------------------------------------------
>>>> Free Next-Gen Firewall Hardware Offer
>>>> Buy your Sophos next-gen firewall before the end March 2013
>>>> and get the hardware for free! Learn more.
>>>> http://p.sf.net/sfu/sophos-d2d-feb
>>>> _______________________________________________
>>>> Postgres-xc-general mailing list
>>>> Pos...@li...
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Best Wishes,
>>> Ashutosh Bapat
>>> EntepriseDB Corporation
>>> The Enterprise Postgres Company
>>>
>>>
>>>
>>> ------------------------------------------------------------------------------
>>> Free Next-Gen Firewall Hardware Offer
>>> Buy your Sophos next-gen firewall before the end March 2013
>>> and get the hardware for free! Learn more.
>>> http://p.sf.net/sfu/sophos-d2d-feb
>>> _______________________________________________
>>> Postgres-xc-general mailing list
>>> Pos...@li...
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>>>
>
|
|
From: abel <ab...@me...> - 2013-02-18 18:39:48
|
Hello Koichi, I am using Koichi's pgxc tools, and I get the following error > FATAL: "$ HOME/nodes/Datanode" is not a valid data directory > DETAIL: File "/$ HOME/nodes/Datanode/PG_VERSION" is missing. Can anyone help me? This error occurs when I try to "pgxc -v init" Thanks On 11.02.2013 01:27, Koichi Suzuki wrote: > I have my personal project below. > > git://github.com/koichi-szk/PGXC-Tools.git > > You will find pgxc_ctl, with a bash script and its manual. I hope they > provide sufficient information to configure your XC cluster. > > Good luck. > ---------- > Koichi Suzuki > > > 2013/2/11 Ashutosh Bapat <ash...@en...>: >> Hi Abel, >> With the information you have provided, it's not possible to find >> the >> correct cause. You will need to provide detailed steps. Generally >> speaking, >> you might have missed some argument specifying that the node to be booted >> is >> a coordinator, OR you might be connecting to a datanode expecting it to >> be a >> coordinator. There are any number of possibilities. >> >> >> On Sat, Feb 9, 2013 at 3:34 AM, abel <ab...@me...> wrote: >>> >>> >>> What are the steps to install more than 2 coordinators and >>> DataNodes on >>> different machines? I am setting up like this website >>> http://postgresxc.wikia.com/wiki/Real_Server_configuration >>> >>> I'm working with debian 6 >>> >>> after configuring I receive the following message: >>> psql: FATAL: Can not Identify itself Coordinator >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Free Next-Gen Firewall Hardware Offer >>> Buy your Sophos next-gen firewall before the end March 2013 >>> and get the hardware for free! Learn more. >>> http://p.sf.net/sfu/sophos-d2d-feb >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> >> >> ------------------------------------------------------------------------------ >> Free Next-Gen Firewall Hardware Offer >> Buy your Sophos next-gen firewall before the end March 2013 >> and get the hardware for free! Learn more. >> http://p.sf.net/sfu/sophos-d2d-feb >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> |
|
From: kushal <kus...@gm...> - 2013-02-18 09:27:13
|
But I guess the impact would be restricted to the schema of one particular database instance, which in this case would only be DB1. The schema for DB2 would remain intact. Right? On 18 February 2013 14:36, Ashutosh Bapat <ash...@en...>wrote: > > > On Mon, Feb 18, 2013 at 2:32 PM, kushal <kus...@gm...> wrote: > >> I think your responses answer my question. So here is how my database >> structure looks without XC. >> >> There should be one database with table1, table2 and table3 for each >> customer under one postgres server. >> >> So if I am not wrong, then with XC, for coordinators A and B and datanodes >> C and D, each would contain database instances DB1, DB2, and so on with the same >> schema (which in this case is table1, table2 and table3) and distribution >> or replication would depend on how I do it while creating tables (table1, >> table2, table3) for each of the individual database instances. >> > > Up to this point it looks fine. > > >> And even if the schema changes for DB1 in future, it won't have impact on >> DB2 queries or storage. >> >> > Any DDL you perform on XC, has to be routed through the coordinator and hence > it affects all the datanodes. > > >> --Kushal >> >> >> On 18 February 2013 13:41, Ashutosh Bapat < >> ash...@en...> wrote: >> >>> Hi Kushal, >>> Thanks for your interest in Postgres-XC. >>> >>> In Postgres-XC, every database/schema is created on all datanodes and >>> coordinators. So, one cannot create datanode-specific databases. The only >>> objects that are distributed are the tables. You can distribute your data >>> across datanodes. >>> >>> But you are using the term database instances, which is confusing. Do you >>> mean database system instances? >>> >>> Maybe an example would help us understand your system's architecture. >>> >>> On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> wrote: >>> >>>> Hi >>>> >>>> This is my first post on the postgres xc mailing list. 
Let me first >>>> just congratulate the whole team for coming up with such a cool framework. >>>> >>>> I have few questions around the requirements we have to support our >>>> product. Now it is required to keep multiple database instances, lets say >>>> one for each customer, accessible from one app. Without postgres-xc, I can >>>> do that by just creating a connecting to specific database instance for a >>>> particular customer from my app. >>>> >>>> Can the same be done with postgres-xc interface? So basically, I should >>>> be able to create different database instances across datanodes accessible >>>> through any coordinator. >>>> If yes, how does the distribution/replication work? Is it going to be >>>> somewhat different? >>>> >>>> >>>> Thanks & Regards, >>>> Kushal >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> The Go Parallel Website, sponsored by Intel - in partnership with >>>> Geeknet, >>>> is your hub for all things parallel software development, from weekly >>>> thought >>>> leadership blogs to news, videos, case studies, tutorials, tech docs, >>>> whitepapers, evaluation guides, and opinion stories. Check out the most >>>> recent posts - join the conversation now. >>>> http://goparallel.sourceforge.net/ >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... 
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EntepriseDB Corporation >>> The Enterprise Postgres Company >>> >> >> >> >> ------------------------------------------------------------------------------ >> The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, >> is your hub for all things parallel software development, from weekly >> thought >> leadership blogs to news, videos, case studies, tutorials, tech docs, >> whitepapers, evaluation guides, and opinion stories. Check out the most >> recent posts - join the conversation now. >> http://goparallel.sourceforge.net/ >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
|
From: Ashutosh B. <ash...@en...> - 2013-02-18 09:06:49
|
On Mon, Feb 18, 2013 at 2:32 PM, kushal <kus...@gm...> wrote: > I think your responses answer my question. So here is how my database > structure looks without XC. > > There should be one database with table1, table2 and table3 for each > customer under one postgres server. > > So if I am not wrong, then with XC, for coordinators A and B and datanodes C > and D, each would contain database instances DB1, DB2, and so on with the same > schema (which in this case is table1, table2 and table3) and distribution > or replication would depend on how I do it while creating tables (table1, > table2, table3) for each of the individual database instances. Up to this point it looks fine. > And even if the schema changes for DB1 in future, it won't have impact on > DB2 queries or storage. > > Any DDL you perform on XC, has to be routed through the coordinator and hence it affects all the datanodes. > --Kushal > > > On 18 February 2013 13:41, Ashutosh Bapat <ash...@en... > > wrote: > >> Hi Kushal, >> Thanks for your interest in Postgres-XC. >> >> In Postgres-XC, every database/schema is created on all datanodes and >> coordinators. So, one cannot create datanode-specific databases. The only >> objects that are distributed are the tables. You can distribute your data >> across datanodes. >> >> But you are using the term database instances, which is confusing. Do you >> mean database system instances? >> >> Maybe an example would help us understand your system's architecture. >> >> On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> wrote: >> >>> Hi >>> >>> This is my first post on the postgres xc mailing list. Let me first just >>> congratulate the whole team for coming up with such a cool framework. >>> >>> I have a few questions around the requirements we have to support our >>> product. Now it is required to keep multiple database instances, let's say >>> one for each customer, accessible from one app. 
Without postgres-xc, I can >>> do that by just creating a connecting to specific database instance for a >>> particular customer from my app. >>> >>> Can the same be done with postgres-xc interface? So basically, I should >>> be able to create different database instances across datanodes accessible >>> through any coordinator. >>> If yes, how does the distribution/replication work? Is it going to be >>> somewhat different? >>> >>> >>> Thanks & Regards, >>> Kushal >>> >>> >>> ------------------------------------------------------------------------------ >>> The Go Parallel Website, sponsored by Intel - in partnership with >>> Geeknet, >>> is your hub for all things parallel software development, from weekly >>> thought >>> leadership blogs to news, videos, case studies, tutorials, tech docs, >>> whitepapers, evaluation guides, and opinion stories. Check out the most >>> recent posts - join the conversation now. >>> http://goparallel.sourceforge.net/ >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > > > ------------------------------------------------------------------------------ > The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, > is your hub for all things parallel software development, from weekly > thought > leadership blogs to news, videos, case studies, tutorials, tech docs, > whitepapers, evaluation guides, and opinion stories. Check out the most > recent posts - join the conversation now. > http://goparallel.sourceforge.net/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... 
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
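The rule Ashutosh states can be made concrete with a hypothetical sketch (the table and column names are invented, not from the thread): a DDL statement stays scoped to the database it is issued in, but the coordinator replays it on every node hosting that database.

```sql
-- Connected to customer database DB1 through any coordinator:
ALTER TABLE table1 ADD COLUMN created_at timestamp;
-- The coordinator routes this DDL to all coordinators and datanodes,
-- so each node's copy of DB1 is updated; DB2's schema is untouched.
```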
|
From: kushal <kus...@gm...> - 2013-02-18 09:02:11
|
I think your responses answer my question. So here is how my database structure looks without XC. There should be one database with table1, table2 and table3 for each customer under one postgres server. So if I am not wrong, then with XC, for coordinators A and B and datanodes C and D, each would contain database instances DB1, DB2, and so on with the same schema (which in this case is table1, table2 and table3) and distribution or replication would depend on how I do it while creating tables (table1, table2, table3) for each of the individual database instances. And even if the schema changes for DB1 in future, it won't have impact on DB2 queries or storage. --Kushal On 18 February 2013 13:41, Ashutosh Bapat <ash...@en...>wrote: > Hi Kushal, > Thanks for your interest in Postgres-XC. > > In Postgres-XC, every database/schema is created on all datanodes and > coordinators. So, one cannot create datanode-specific databases. The only > objects that are distributed are the tables. You can distribute your data > across datanodes. > > But you are using the term database instances, which is confusing. Do you mean > database system instances? > > Maybe an example would help us understand your system's architecture. > > On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> wrote: > >> Hi >> >> This is my first post on the postgres xc mailing list. Let me first just >> congratulate the whole team for coming up with such a cool framework. >> >> I have a few questions around the requirements we have to support our >> product. Now it is required to keep multiple database instances, let's say >> one for each customer, accessible from one app. Without postgres-xc, I can >> do that by just creating a connection to the specific database instance for a >> particular customer from my app. >> >> Can the same be done with the postgres-xc interface? So basically, I should >> be able to create different database instances across datanodes accessible >> through any coordinator. 
>> If yes, how does the distribution/replication work? Is it going to be >> somewhat different? >> >> >> Thanks & Regards, >> Kushal >> >> >> ------------------------------------------------------------------------------ >> The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, >> is your hub for all things parallel software development, from weekly >> thought >> leadership blogs to news, videos, case studies, tutorials, tech docs, >> whitepapers, evaluation guides, and opinion stories. Check out the most >> recent posts - join the conversation now. >> http://goparallel.sourceforge.net/ >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
|
From: Ashutosh B. <ash...@en...> - 2013-02-18 08:11:31
|
Hi Kushal, Thanks for your interest in Postgres-XC. In Postgres-XC, every database/schema is created on all datanodes and coordinators. So, one cannot create datanode-specific databases. The only objects that are distributed are the tables. You can distribute your data across datanodes. But you are using the term database instances, which is confusing. Do you mean database system instances? Maybe an example would help us understand your system's architecture. On Mon, Feb 18, 2013 at 12:56 PM, kushal <kus...@gm...> wrote: > Hi > > This is my first post on the postgres xc mailing list. Let me first just > congratulate the whole team for coming up with such a cool framework. > > I have a few questions around the requirements we have to support our > product. Now it is required to keep multiple database instances, let's say > one for each customer, accessible from one app. Without postgres-xc, I can > do that by just creating a connection to the specific database instance for a > particular customer from my app. > > Can the same be done with the postgres-xc interface? So basically, I should be > able to create different database instances across datanodes accessible > through any coordinator. > If yes, how does the distribution/replication work? Is it going to be > somewhat different? > > > Thanks & Regards, > Kushal > > > ------------------------------------------------------------------------------ > The Go Parallel Website, sponsored by Intel - in partnership with Geeknet, > is your hub for all things parallel software development, from weekly > thought > leadership blogs to news, videos, case studies, tutorials, tech docs, > whitepapers, evaluation guides, and opinion stories. Check out the most > recent posts - join the conversation now. > http://goparallel.sourceforge.net/ > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... 
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
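[Editor's note] Ashutosh's point (CREATE DATABASE is cluster-wide, while distribution is decided per table) can be sketched in SQL. This is a hedged illustration, not from the thread: the names `customer_a` and `orders` are hypothetical, and the DISTRIBUTE BY syntax should be checked against the Postgres-XC CREATE TABLE documentation for your version.

```sql
-- Run on any Coordinator: the database is created on every Coordinator
-- and every Datanode, so there is no such thing as a datanode-specific
-- database.
CREATE DATABASE customer_a;

-- Per-customer separation still works at the application level: connect
-- to customer_a through any Coordinator. Only tables decide where data
-- physically lives, e.g. hash-distributed across the datanodes:
CREATE TABLE orders (
    customer_id integer,
    amount      numeric
) DISTRIBUTE BY HASH (customer_id);
```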
|
From: Michael P. <mic...@gm...> - 2013-02-18 08:00:04
|
On Mon, Feb 18, 2013 at 4:26 PM, kushal <kus...@gm...> wrote:
> Hi
>
> This is my first post on the postgres xc mailing list. Let me first just
> congratulate the whole team for coming up with such a cool framework.
>
> I have a few questions around the requirements we have to support our
> product. Now it is required to keep multiple database instances, let's say
> one for each customer, accessible from one app. Without postgres-xc, I can
> do that by just creating a connection to a specific database instance for a
> particular customer from my app.
>
> Can the same be done with the postgres-xc interface? So basically, I should
> be able to create different database instances across datanodes accessible
> through any coordinator.
> If yes, how does the distribution/replication work? Is it going to be
> somewhat different?

In the case of XC, when an object is created, it is created on all the nodes. This means that if you run a CREATE DATABASE command, this database will be created on all the Coordinators and all the Datanodes. For tables it is the same: the catalogs of all the nodes are kept in sync for consistency.

Then, replication/distribution is table-based and controlled by a clause extension called DISTRIBUTE BY in the CREATE TABLE query:
http://postgres-xc.sourceforge.net/docs/1_0_2/sql-createtable.html

You can also distribute or replicate your data on a portion of the nodes if necessary, with the clause extension TO NODE. This controls the way tuples are distributed among the nodes. So let's imagine that you create a table replicated on nodes 1 and 3 in a cluster of 4 Datanodes. Datanodes 2 and 4 will keep an empty table, and Datanodes 1 and 3 will have the same data replicated. When running a query on a Coordinator, the XC planner is smart enough to determine the list of nodes to run the query on, so in the case of this table, only one of the two Datanodes 1/3 will be targeted for a read, and both will be targeted in the case of a write.
--
Michael |
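[Editor's note] Michael's replicated-on-a-subset example can be sketched in SQL. This is a hedged illustration: the node names `dn1` and `dn3` are hypothetical (they would already have to be registered in the cluster), and the exact TO NODE syntax should be checked against the CREATE TABLE documentation for your XC version.

```sql
-- Replicated table kept in sync on Datanodes dn1 and dn3 only; the other
-- Datanodes hold an empty table. The planner sends reads to one of
-- dn1/dn3 and writes to both.
CREATE TABLE world_boundaries (
    cntry_name   text,
    wkb_geometry bytea
) DISTRIBUTE BY REPLICATION TO NODE dn1, dn3;
```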
|
From: kushal <kus...@gm...> - 2013-02-18 07:26:19
|
Hi

This is my first post on the postgres xc mailing list. Let me first just congratulate the whole team for coming up with such a cool framework.

I have a few questions around the requirements we have to support our product. Now it is required to keep multiple database instances, let's say one for each customer, accessible from one app. Without postgres-xc, I can do that by just creating a connection to a specific database instance for a particular customer from my app.

Can the same be done with the postgres-xc interface? So basically, I should be able to create different database instances across datanodes accessible through any coordinator. If yes, how does the distribution/replication work? Is it going to be somewhat different?

Thanks & Regards,
Kushal |
|
From: Koichi S. <ko...@in...> - 2013-02-18 01:33:00
|
Nice to hear that pgxc_ctl helps. As to the warning, I will try to reproduce the problem and fix it. I need to find time for it, so please give me a bit of time.

The test will run many small transactions, which will cause autovacuum to be launched, as attached. This test was built as a Datanode slave stress test; I think it may also work as an autovacuum launch test. I will test it with four coordinators, four datanodes, and four gtm_proxies as well. The whole test will take about a couple of hours with five six-core Xeon servers (one for GTM). Do you think this makes sense to reproduce your problem? I will run it both on master and REL1_0_STABLE.

Regards;
---
Koichi

On Sat, 16 Feb 2013 19:32:11 +0000
Arni Sumarlidason <Arn...@md...> wrote:
> Koichi, and others,
>
> I spun some fresh VMs and ran your script with the identical outcome: GTM
> snapshot warnings from the autovacuum launcher. Please advise.
>
> Thank you for your script, it does make life easier!!
>
> Best, |
|
From: Arni S. <Arn...@md...> - 2013-02-16 19:32:31
|
Koichi, and others,

I spun some fresh VMs and ran your script with the identical outcome: GTM snapshot warnings from the autovacuum launcher. Please advise.

Thank you for your script, it does make life easier!!

Best,

-----Original Message-----
From: Koichi Suzuki [mailto:ko...@in...]
Sent: Friday, February 15, 2013 4:11 AM
To: Arni Sumarlidason
Cc: Michael Paquier; koi...@gm...; pos...@li...
Subject: Re: [Postgres-xc-general] pgxc: snapshot

If you're not sure about the configuration, please try pgxc_ctl, available at

git://github.com/koichi-szk/PGXC-Tools.git

This is a bash script (I'm rewriting it into C now), so it will help in understanding how to configure XC.

Regards;
---
Koichi Suzuki |
|
From: Koichi S. <ko...@in...> - 2013-02-15 09:10:37
|
If you're not sure about the configuration, please try pgxc_ctl, available at

git://github.com/koichi-szk/PGXC-Tools.git

This is a bash script (I'm rewriting it into C now), so it will help in understanding how to configure XC.

Regards;
---
Koichi Suzuki

On Fri, 15 Feb 2013 04:22:49 +0000
Arni Sumarlidason <Arn...@md...> wrote:
> Thank you both for fast response!!
>
> RE: Koichi Suzuki
> I downloaded the git this afternoon.
>
> RE: Michael Paquier
> - Confirm it is from the datanode's log.
> - Both coord & datanode connect via the same gtm_proxy on localhost.
>
> These are my simplified configs; the only change I make on each node
> is the nodename. |
|
From: Koichi S. <ko...@in...> - 2013-02-15 09:07:54
|
I fixed this issue happening on the Datanode too. This is included in 1.0.2.

Yeah, it is important to configure gtm correctly for all the datanodes/coordinators. I would think that if the configuration is not correct, these nodes won't start up.

Regards;
---
Koichi

On Fri, 15 Feb 2013 13:06:22 +0900
Michael Paquier <mic...@gm...> wrote:
> On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:
>> Hi Everyone!
>>
>> I am getting these errors, "Warning: do not have a gtm snapshot
>> available" [1]. After researching I found posts about the auto vacuum
>> causing these errors; is this fixed or a work in progress? Also, I am
>> seeing them without the CONTEXT: automatic vacuum message too. Is this
>> something to worry about? The cluster seems to be functioning normally.
>>
>> Vacuum and analyze from pgadmin looks like this:
>> INFO: vacuuming "public.table"
>> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
>> DETAIL: 0 dead row versions cannot be removed yet.
>> CPU 0.00s/0.00u sec elapsed 0.00 sec.
>> INFO: analyzing "public.table"
>> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead
>> rows; 0 rows in sample, 0 estimated total rows
>> Total query runtime: 15273 ms.
>>
>> Should we use execute direct to perform maintenance?
>
> No. Isn't this happening on a Datanode?
> Be sure first to set gtm_host and gtm_port in postgresql.conf of all the
> nodes, Coordinator and Datanode included. GXID and snapshots are fetched
> of course on the Coordinator for a normal transaction run, but also on
> all the nodes for autovacuum.
> --
> Michael |
|
From: Arni S. <Arn...@md...> - 2013-02-15 04:23:05
|
Thank you both for fast response!!

RE: Koichi Suzuki
I downloaded the git this afternoon.

RE: Michael Paquier
- Confirm it is from the datanode's log.
- Both coord & datanode connect via the same gtm_proxy on localhost.

These are my simplified configs; the only change I make on each node is the nodename.

PG_HBA
local   all   all                    trust
host    all   all   127.0.0.1/32     trust
host    all   all   ::1/128          trust
host    all   all   10.100.170.0/24  trust

COORD
pgxc_node_name = 'coord01'
listen_addresses = '*'
port = 5432
max_connections = 200
gtm_port = 6666
gtm_host = 'localhost'
pooler_port = 6670
shared_buffers = 32MB
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB
log_timezone = 'US/Eastern'
datestyle = 'iso, mdy'
timezone = 'US/Eastern'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'

DATA
pgxc_node_name = 'data01'
listen_addresses = '*'
port = 5433
max_connections = 200
gtm_port = 6666
gtm_host = 'localhost'
shared_buffers = 32MB
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB
log_timezone = 'US/Eastern'
datestyle = 'iso, mdy'
timezone = 'US/Eastern'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'

PROXY
nodename = 'proxy01'
listen_addresses = '*'
port = 6666
gtm_host = '10.100.170.10'
gtm_port = 6666

best,

Arni

From: Michael Paquier [mailto:mic...@gm...]
Sent: Thursday, February 14, 2013 11:06 PM
To: Arni Sumarlidason
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] pgxc: snapshot

On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:
Hi Everyone!

I am getting these errors, "Warning: do not have a gtm snapshot available" [1]. After researching I found posts about the auto vacuum causing these errors; is this fixed or a work in progress? Also, I am seeing them without the CONTEXT: automatic vacuum message too. Is this something to worry about? The cluster seems to be functioning normally.

Vacuum and analyze from pgadmin looks like this:
INFO: vacuuming "public.table"
INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
DETAIL: 0 dead row versions cannot be removed yet.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: analyzing "public.table"
INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
Total query runtime: 15273 ms.

Should we use execute direct to perform maintenance?

No. Isn't this happening on a Datanode?
Be sure first to set gtm_host and gtm_port in postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are fetched of course on the Coordinator for a normal transaction run, but also on all the nodes for autovacuum.
--
Michael |
|
From: Michael P. <mic...@gm...> - 2013-02-15 04:06:34
|
On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:
> Hi Everyone!
>
> I am getting these errors, "Warning: do not have a gtm snapshot
> available" [1]. After researching I found posts about the auto vacuum
> causing these errors; is this fixed or a work in progress? Also, I am
> seeing them without the CONTEXT: automatic vacuum message too. Is this
> something to worry about? The cluster seems to be functioning normally.
>
> Vacuum and analyze from pgadmin looks like this:
> INFO: vacuuming "public.table"
> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO: analyzing "public.table"
> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead
> rows; 0 rows in sample, 0 estimated total rows
> Total query runtime: 15273 ms.
>
> Should we use execute direct to perform maintenance?

No. Isn't this happening on a Datanode?
Be sure first to set gtm_host and gtm_port in postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are fetched of course on the Coordinator for a normal transaction run, but also on all the nodes for autovacuum.
--
Michael |
|
From: Koichi S. <ko...@in...> - 2013-02-15 04:04:59
|
You don't have to use EXECUTE DIRECT in this case. I found a similar issue last December and made a fix both for REL1_0_STABLE and master. I believe it is included in 1.0.2 and hope it fixes your issue. Could you try the latest one? If you still have the same problem, please let me know.

Best;
---
Koichi Suzuki

On Fri, 15 Feb 2013 03:57:38 +0000
Arni Sumarlidason <Arn...@md...> wrote:
> Hi Everyone!
>
> I am getting these errors, "Warning: do not have a gtm snapshot
> available" [1]. After researching I found posts about the auto vacuum
> causing these errors; is this fixed or a work in progress? Also, I am
> seeing them without the CONTEXT: automatic vacuum message too. Is this
> something to worry about? The cluster seems to be functioning normally.
>
> Vacuum and analyze from pgadmin looks like this:
> INFO: vacuuming "public.table"
> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO: analyzing "public.table"
> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead
> rows; 0 rows in sample, 0 estimated total rows
> Total query runtime: 15273 ms.
>
> Should we use execute direct to perform maintenance?
>
> Arni Sumarlidason | Software Engineer, Information Technology
>
> [1] [cid:image001.png@01CE0B04.7BC38930] |