From: Ashutosh B. <ash...@en...> - 2013-02-18 09:06:49

On Mon, Feb 18, 2013 at 2:32 PM, kushal <kus...@gm...> wrote:

> I think your responses answer my question. So here is how my database
> structure looks without XC. There should be one database with table1,
> table2 and table3 for each customer under one postgres server.
>
> So if I am not wrong, then with XC, for coordinators A and B and datanodes
> C and D, each would contain databases DB1, DB2 and so on with the same
> schema (in this case table1, table2 and table3), and distribution or
> replication would depend on how I do it while creating the tables (table1,
> table2, table3) in each individual database.

Up to this point it looks fine.

> And even if the schema changes for DB1 in the future, it won't have an
> impact on DB2 queries or storage.

Any DDL you perform on XC has to be routed through a coordinator, and hence it affects all the datanodes.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: kushal <kus...@gm...> - 2013-02-18 09:02:11

I think your responses answer my question. So here is how my database structure looks without XC: there should be one database with table1, table2 and table3 for each customer under one postgres server.

So if I am not wrong, then with XC, for coordinators A and B and datanodes C and D, each would contain databases DB1, DB2 and so on with the same schema (in this case table1, table2 and table3), and distribution or replication would depend on how I do it while creating the tables (table1, table2, table3) in each individual database. And even if the schema changes for DB1 in the future, it won't have an impact on DB2 queries or storage.

--Kushal
From: Ashutosh B. <ash...@en...> - 2013-02-18 08:11:31

Hi Kushal,
Thanks for your interest in Postgres-XC.

In Postgres-XC, every database/schema is created on all datanodes and coordinators, so one cannot create datanode-specific databases. The only objects that are distributed are the tables. You can distribute your data across datanodes.

But you are using the term "database instances", which is confusing. Do you mean database system instances?

Maybe an example would help to understand your system's architecture.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Michael P. <mic...@gm...> - 2013-02-18 08:00:04

On Mon, Feb 18, 2013 at 4:26 PM, kushal <kus...@gm...> wrote:

> Can the same be done with the postgres-xc interface? So basically, I should
> be able to create different database instances across datanodes, accessible
> through any coordinator.
> If yes, how does the distribution/replication work? Is it going to be
> somewhat different?

In the case of XC, when an object is created, it is created on all the nodes. This means that if you run a CREATE DATABASE command, this database will be created on all the Coordinators and all the Datanodes. For tables it is the same: the catalogs of all the nodes are kept in sync for consistency.

Then, replication/distribution is table-based and controlled by a clause extension called DISTRIBUTE BY in CREATE TABLE:
http://postgres-xc.sourceforge.net/docs/1_0_2/sql-createtable.html

You can also distribute or replicate your data on a portion of the nodes if necessary, with the clause extension TO NODE. This controls the way tuples are distributed among the nodes. So let's imagine that you create a table replicated on nodes 1 and 3 in a cluster of 4 Datanodes. Datanodes 2 and 4 will keep an empty table, and Datanodes 1 and 3 will have the same data replicated. When running a query on a Coordinator, the XC planner is smart enough to determine the list of nodes to run the query on, so in the case of such a table, only one of the two Datanodes 1/3 will be targeted for a read, and both will be targeted in the case of a write.

--
Michael
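[The DDL Michael describes can be sketched roughly as follows. This is an illustrative sketch only: the table and node names (customer_db, table1, table2, dn1-dn4) are hypothetical, and the exact DISTRIBUTE BY / TO NODE grammar should be checked against the CREATE TABLE page of your Postgres-XC version's documentation.]

```sql
-- Run on a Coordinator; XC propagates the DDL so the catalogs of all
-- Coordinators and Datanodes stay in sync.
CREATE DATABASE customer_db;

-- Distributed table: rows are spread across the Datanodes by hash of id.
CREATE TABLE table1 (
    id      integer,
    payload text
) DISTRIBUTE BY HASH (id);

-- Replicated table restricted to two of four Datanodes (hypothetical
-- node names dn1 and dn3): dn2 and dn4 keep an empty table, while dn1
-- and dn3 hold identical copies of the data.
CREATE TABLE table2 (
    id      integer,
    payload text
) DISTRIBUTE BY REPLICATION TO NODE (dn1, dn3);
```

[As described in the message above, reads against table2 would then be planned against just one of dn1/dn3, while writes go to both.]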
From: kushal <kus...@gm...> - 2013-02-18 07:26:19

Hi,

This is my first post on the postgres-xc mailing list. Let me first just congratulate the whole team for coming up with such a cool framework.

I have a few questions around the requirements we have to support our product. It is required to keep multiple database instances, let's say one for each customer, accessible from one app. Without postgres-xc, I can do that by just creating a connection to the specific database instance for a particular customer from my app.

Can the same be done with the postgres-xc interface? So basically, I should be able to create different database instances across datanodes, accessible through any coordinator. If yes, how does the distribution/replication work? Is it going to be somewhat different?

Thanks & Regards,
Kushal
From: Koichi S. <ko...@in...> - 2013-02-18 01:33:00

Nice to hear that pgxc_ctl helps.

As to the warning, I will try to reproduce the problem and fix it. I need to find time for it, so please allow me a bit of time.

The test will run many small transactions, which will cause autovacuum to be launched, as attached. This test was built as a Datanode slave stress test; I think it may work as an autovacuum launch test as well. I will test it with four coordinators, four datanodes, and four gtm_proxies. The whole test will take about a couple of hours with five six-core Xeon servers (one for GTM). Do you think this makes sense to reproduce your problem? I will run it both on master and REL1_0_STABLE.

Regards;
---
Koichi

On Sat, 16 Feb 2013 19:32:11 +0000, Arni Sumarlidason <Arn...@md...> wrote:

> Koichi, and others,
>
> I spun some fresh VMs and ran your script with the identical outcome: GTM
> snapshot warnings from the autovacuum launcher. Please advise.
>
> Thank you for your script, it does make life easier!!
From: Arni S. <Arn...@md...> - 2013-02-16 19:32:31

Koichi, and others,

I spun some fresh VMs and ran your script with the identical outcome: GTM snapshot warnings from the autovacuum launcher. Please advise.

Thank you for your script, it does make life easier!!

Best,
Arni
From: Koichi S. <ko...@in...> - 2013-02-15 09:10:37

If you're not sure about the configuration, please try pgxc_ctl, available at

git://github.com/koichi-szk/PGXC-Tools.git

This is a bash script (I'm rewriting it in C now), so it will help to understand how to configure XC.

Regards;
---
Koichi Suzuki
From: Koichi S. <ko...@in...> - 2013-02-15 09:07:54

I fixed this issue happening on the Datanode too. This is included in 1.0.2.

Yeah, it is important to configure the GTM correctly for all the datanodes/coordinators. I wonder, though: if the configuration is not correct, won't these nodes fail to start up at all?

Regards;
---
Koichi
From: Arni S. <Arn...@md...> - 2013-02-15 04:23:05

Thank you both for the fast response!!

RE: Koichi Suzuki
I downloaded the git this afternoon.

RE: Michael Paquier
- Confirmed: it is from the datanode's log.
- Both coordinator & datanode connect via the same gtm_proxy on localhost.

These are my simplified configs; the only change I make on each node is the node name.

PG_HBA
local  all  all                  trust
host   all  all  127.0.0.1/32    trust
host   all  all  ::1/128         trust
host   all  all  10.100.170.0/24 trust

COORD
pgxc_node_name = 'coord01'
listen_addresses = '*'
port = 5432
max_connections = 200

gtm_port = 6666
gtm_host = 'localhost'
pooler_port = 6670

shared_buffers = 32MB
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB

log_timezone = 'US/Eastern'
datestyle = 'iso, mdy'
timezone = 'US/Eastern'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'

DATA
pgxc_node_name = 'data01'
listen_addresses = '*'
port = 5433
max_connections = 200

gtm_port = 6666
gtm_host = 'localhost'

shared_buffers = 32MB
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB

log_timezone = 'US/Eastern'
datestyle = 'iso, mdy'
timezone = 'US/Eastern'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'

PROXY
nodename = 'proxy01'
listen_addresses = '*'
port = 6666
gtm_host = '10.100.170.10'
gtm_port = 6666

best,
Arni
From: Michael P. <mic...@gm...> - 2013-02-15 04:06:34

On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:

> I am getting these errors: "Warning: do not have a gtm snapshot
> available" [1]. After researching, I found posts about autovacuum causing
> these errors; is this fixed, or a work in progress? Also, I am seeing them
> without the "CONTEXT: automatic vacuum" message too. Is this something to
> worry about? The cluster seems to be functioning normally.
>
> Vacuum and analyze from pgadmin look like this:
>
> INFO: vacuuming "public.table"
> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO: analyzing "public.table"
> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead
> rows; 0 rows in sample, 0 estimated total rows
> Total query runtime: 15273 ms.
>
> Should we use execute direct to perform maintenance?

No. Isn't this happening on a Datanode?
Be sure first to set gtm_host and gtm_port in postgresql.conf on all the nodes, Coordinator and Datanode included. GXID and snapshots are of course fetched on the Coordinator for a normal transaction run, but also on all the nodes for autovacuum.
--
Michael
From: Koichi S. <ko...@in...> - 2013-02-15 04:04:59

You don't have to do EXECUTE DIRECT in this case. I found a similar issue last December and made a fix both for REL1_0_STABLE and master. I believe it is included in 1.0.2 and hope it fixes your issue. Could you try the latest one? If you still have the same problem, please let me know.

Best;
---
Koichi Suzuki

On Fri, 15 Feb 2013 03:57:38 +0000, Arni Sumarlidason <Arn...@md...> wrote:

> Hi Everyone!
>
> I am getting these errors: "Warning: do not have a gtm snapshot
> available" [1]. After researching, I found posts about autovacuum causing
> these errors; is this fixed, or a work in progress? Also, I am seeing them
> without the "CONTEXT: automatic vacuum" message too. Is this something to
> worry about? The cluster seems to be functioning normally.
>
> Vacuum and analyze from pgadmin look like this:
>
> INFO: vacuuming "public.table"
> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO: analyzing "public.table"
> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead
> rows; 0 rows in sample, 0 estimated total rows
> Total query runtime: 15273 ms.
>
> Should we use execute direct to perform maintenance?
>
> Arni Sumarlidason | Software Engineer, Information Technology
From: Koichi S. <koi...@gm...> - 2013-02-13 05:29:40
|
Thanks for the very nice and interesting topic.
----------
Koichi Suzuki

2013/2/13 Zenaan Harkness <ze...@fr...>:
> [SOLVED]
>
> On 2/13/13, Michael Paquier <mic...@gm...> wrote:
>> On Wed, Feb 13, 2013 at 1:28 PM, Zenaan Harkness <ze...@fr...> wrote:
>>> On 2/13/13, Zenaan Harkness <ze...@fr...> wrote:
>>> > # this resulted in only 8.6MiB download just now,
>>> > # on an up-to-date pg.git repo.
>>
>> This quantity of data looks correct, half of it being due to the data in doc-xc/.
>>
>>> $ git remote set-branches --add master pgxc/master
>>> fatal: No such remote 'master'
>>
>> I am not a GIT specialist, but in order to get all the branches, do only that:
>> git remote add -f pgxc git://postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc
>>
>> You will be able to see all the remote branches available with "git branch -a"
>
> Bingo! Thank you very much. I ran git remote rm pgxc (removing my previous remote attempt), then ran your command.
> It appears that for some reason the --tags option (possibly when combined with -m) causes a problem with fetching remote branches.
>
> Much appreciated,
> Zenaan
>
> ------------------------------------------------------------------------------
> Free Next-Gen Firewall Hardware Offer
> Buy your Sophos next-gen firewall before the end March 2013
> and get the hardware for free! Learn more.
> http://p.sf.net/sfu/sophos-d2d-feb
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Zenaan H. <ze...@fr...> - 2013-02-13 05:24:15
|
[SOLVED]

On 2/13/13, Michael Paquier <mic...@gm...> wrote:
> On Wed, Feb 13, 2013 at 1:28 PM, Zenaan Harkness <ze...@fr...> wrote:
>> On 2/13/13, Zenaan Harkness <ze...@fr...> wrote:
>> > # this resulted in only 8.6MiB download just now,
>> > # on an up-to-date pg.git repo.
>
> This quantity of data looks correct, half of it being due to the data in doc-xc/.
>
>> $ git remote set-branches --add master pgxc/master
>> fatal: No such remote 'master'
>
> I am not a GIT specialist, but in order to get all the branches, do only that:
> git remote add -f pgxc git://postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc
>
> You will be able to see all the remote branches available with "git branch -a"

Bingo! Thank you very much. I ran git remote rm pgxc (removing my previous remote attempt), then ran your command.
It appears that for some reason the --tags option (possibly when combined with -m) causes a problem with fetching remote branches.

Much appreciated,
Zenaan |
From: Michael P. <mic...@gm...> - 2013-02-13 05:09:56
|
On Wed, Feb 13, 2013 at 1:28 PM, Zenaan Harkness <ze...@fr...> wrote:
> On 2/13/13, Zenaan Harkness <ze...@fr...> wrote:
> > # this resulted in only 8.6MiB download just now,
> > # on an up-to-date pg.git repo.

This quantity of data looks correct, half of it being due to the data in doc-xc/.

> $ git remote set-branches --add master pgxc/master
> fatal: No such remote 'master'

I am not a GIT specialist, but in order to get all the branches, do only that:

git remote add -f pgxc git://postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc

You will be able to see all the remote branches available with "git branch -a". In this case, remote branches will be listed as remotes/pgxc/master, remotes/pgxc/REL1_0_STABLE, or whatever. Then check out a branch, here master, with that:

git branch --track pgxc-master pgxc/master
git checkout pgxc-master

This will create a branch called pgxc-master set to track the remote PGXC master branch when doing a git pull on this branch. Replace pgxc-master with the name you wish.
--
Michael |
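Michael's recipe can be exercised end to end without network access. The sketch below substitutes a throwaway local repository for the real pgxc URL; the paths, identities, and branch names are illustrative, not taken from this thread:

```shell
set -eu
tmp=$(mktemp -d)

# Throwaway "upstream" repository standing in for the real pgxc remote.
git -c init.defaultBranch=master init -q "$tmp/pgxc-upstream"
git -C "$tmp/pgxc-upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "pgxc initial commit"

# Working repository standing in for an existing postgresql.git checkout.
git -c init.defaultBranch=master init -q "$tmp/work"
git -C "$tmp/work" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "pg initial commit"
cd "$tmp/work"

# Step 1: add the remote; -f fetches its branches immediately.
git remote add -f pgxc "$tmp/pgxc-upstream"

# Step 2: inspect what was fetched (remotes/pgxc/master, etc.).
git branch -a

# Step 3: create a local tracking branch and switch to it.
git branch --track pgxc-master pgxc/master
git checkout pgxc-master
```

The same three steps apply against the real git://postgres-xc.git.sourceforge.net URL; only the remote address changes.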
From: Zenaan H. <ze...@fr...> - 2013-02-13 04:28:57
|
On 2/13/13, Zenaan Harkness <ze...@fr...> wrote:
> Here's what I just tried:
>
> cd postgresql.git/
> git remote add -f --tags -m master pgxc
> git://postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc

Please note, the above two lines should be one; I forgot to manually format them, sorry.

> # this resulted in only 8.6MiB download just now,
> # on an up-to-date pg.git repo.
>
> # view results:
> git remote -v
> git branch -a
>
> In my case, remotes/pgxc/HEAD is the only pgxc 'branch', so it appears I'm not quite doing something right. Looking at the repo summary page at:
> http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=summary
> suggests that there are a number of 'heads', which I assume are branches.
> My ~/.gitconfig only has user name and email, so nothing there should be stopping the branches from being downloaded.
>
> I guess I'm missing something in my command above. From my reading of man git-remote it should do the trick...
>
> Anyone know what's wrong with it?

I just tried the following:

$ git remote set-branches --add master pgxc/master
fatal: No such remote 'master'

?? |
From: Zenaan H. <ze...@fr...> - 2013-02-13 04:20:53
|
Here's what I just tried:

cd postgresql.git/
git remote add -f --tags -m master pgxc git://postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc

# this resulted in only 8.6MiB download just now,
# on an up-to-date pg.git repo.

# view results:
git remote -v
git branch -a

In my case, remotes/pgxc/HEAD is the only pgxc 'branch', so it appears I'm not quite doing something right. Looking at the repo summary page at http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=summary suggests that there are a number of 'heads', which I assume are branches. My ~/.gitconfig only has user name and email, so nothing there should be stopping the branches from being downloaded.

I guess I'm missing something in the command above. From my reading of man git-remote it should do the trick...

Anyone know what's wrong with it?

TIA
Zenaan |
From: Zenaan H. <ze...@fr...> - 2013-02-13 04:08:10
|
fyi..

---------- Forwarded message ----------
From: Michael Paquier
Date: Wed, 13 Feb 2013 12:59:54 +0900

On Tue, Feb 12, 2013 at 2:36 PM, Zenaan Harkness <ze...@fr...> wrote:
> Does someone know the likely object overlap between the pg and pgxc repositories?
>
> I ask because I could just git clone pgxc, or I could add a remote for pgxc to my pg git clone, make sure the branches are added, and fetch that remote.
> In this way, common files/objects are properly shared in one repo, rather than duplicated.

PG and PGXC share 99.9% of a common commit history, so yes, simply adding a remote for PGXC in your existing PG folder is good. This is how I do my own dev, and I believe that most of the people in this project do the same. Doing that is particularly helpful when you want to merge PG code directly into XC, or when you need to have a look at the code diffs between the two projects.
--
Michael |
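The cross-project diffing Michael mentions can be sketched without network access. Below, two branches in a scratch repository play the roles of the PG and PGXC histories; all file names, identities, and branch names are illustrative:

```shell
set -eu
tmp=$(mktemp -d)
git -c init.defaultBranch=master init -q "$tmp/repo"
cd "$tmp/repo"

# Shared history: a file both "projects" start from.
echo "common code" > core.c
git add core.c
git -c user.name=demo -c user.email=demo@example.com commit -q -m "shared base"

# A branch standing in for the pgxc remote's master, diverging from PG.
git checkout -q -b pgxc-master
echo "XC-specific change" >> core.c
git -c user.name=demo -c user.email=demo@example.com commit -q -am "XC change"
git checkout -q master

# With a real pgxc remote fetched, the equivalent would be e.g.
#   git diff master pgxc/master -- src/
git diff --stat master pgxc-master
```

Because the histories share almost all objects, such diffs are cheap, which is the practical payoff of keeping both projects in one repository.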
From: Zenaan H. <ze...@fr...> - 2013-02-13 03:35:14
|
---------- Forwarded message ----------
From: Koichi Suzuki <koi...@gm...>
Date: Wed, 13 Feb 2013 10:49:51 +0900
Subject: Re: [GENERAL] cloning postgres-xc
Cc: pgs...@po...

Yes, this ML is not the right place to discuss this. Could you raise this issue on the postgres-xc-general ML?

Regards;
----------
Koichi Suzuki

2013/2/12 Pavan Deolasee <pav...@gm...>:
> This may not be the best place to ask these questions; you could have considered using the postgres-xc-general mailing list from the Postgres-XC project site. Anyway, see my comments below.
>
> On Tue, Feb 12, 2013 at 11:06 AM, Zenaan Harkness <ze...@fr...> wrote:
>> Does someone know the likely object overlap between the pg and pgxc repositories?
>
> There is quite a lot of overlap. Even though Postgres-XC has changed many files and added many others, there is still plenty of common code.
>
>> I ask because I could just git clone pgxc, or I could add a remote for pgxc to my pg git clone, make sure the branches are added, and fetch that remote.
>> In this way, common files/objects are properly shared in one repo, rather than duplicated.
>>
>> Thoughts?
>
> ISTM that's the right way, especially if you're interested in keeping the PG code as well. This way, you will avoid a lot of duplicates and can also quickly do a "git diff" between files of the two projects. I find that very convenient at times.
>
> Thanks,
> Pavan
>
> --
> Pavan Deolasee
> http://www.linkedin.com/in/pavandeolasee
>
> --
> Sent via pgsql-general mailing list (pgs...@po...)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general |
From: Koichi S. <koi...@gm...> - 2013-02-11 05:27:14
|
I have my personal project below.

git://github.com/koichi-szk/PGXC-Tools.git

You will find pgx_ctl there, with a bash script and its manual. I hope they provide sufficient information to configure your XC cluster. Good luck.
----------
Koichi Suzuki

2013/2/11 Ashutosh Bapat <ash...@en...>:
> Hi Abel,
> With the information you have provided, it's not possible to find the correct cause. You will need to provide detailed steps. Generally speaking, you might have missed some argument specifying that the node to be booted is a coordinator, OR you might be connecting to a datanode expecting it to be a coordinator. There are any number of possibilities.
>
> On Sat, Feb 9, 2013 at 3:34 AM, abel <ab...@me...> wrote:
>> What are the steps to install more than 2 coordinators and datanodes on different machines? I am setting up like this website:
>> http://postgresxc.wikia.com/wiki/Real_Server_configuration
>>
>> I'm working with Debian 6.
>>
>> After configuring, I receive the following message:
>> psql: FATAL: Can not Identify itself Coordinator |
From: Ashutosh B. <ash...@en...> - 2013-02-11 04:39:19
|
Hi Abel,
With the information you have provided, it's not possible to find the correct cause. You will need to provide detailed steps. Generally speaking, you might have missed some argument specifying that the node to be booted is a coordinator, OR you might be connecting to a datanode expecting it to be a coordinator. There are any number of possibilities.

On Sat, Feb 9, 2013 at 3:34 AM, abel <ab...@me...> wrote:
> What are the steps to install more than 2 coordinators and datanodes on different machines? I am setting up like this website:
> http://postgresxc.wikia.com/wiki/Real_Server_configuration
>
> I'm working with Debian 6.
>
> After configuring, I receive the following message:
> psql: FATAL: Can not Identify itself Coordinator

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company |
From: abel <ab...@me...> - 2013-02-08 22:20:21
|
What are the steps to install more than 2 coordinators and datanodes on different machines? I am setting up like this website:
http://postgresxc.wikia.com/wiki/Real_Server_configuration

I'm working with Debian 6.

After configuring, I receive the following message:
psql: FATAL: Can not Identify itself Coordinator |
From: Michael P. <mic...@gm...> - 2013-02-07 11:48:15
|
On Thu, Feb 7, 2013 at 6:37 PM, Paulo Pires <pj...@ub...> wrote:
> Hi all,
>
> Is there a changelog? I wanted to know if RETURNING support is merged into this version.

This is a minor release with only bug fixes. You need to wait for the next major release, normally 1.1, to get this support. By the way, the feature has been committed on the master branch, so you can already test it by fetching the latest development code.
--
Michael |
From: Paulo P. <pj...@ub...> - 2013-02-07 11:35:26
|
Hi all,

Is there a changelog? I wanted to know if RETURNING support is merged into this version.

Thanks,
PP

On 07-02-2013 07:36, Koichi Suzuki wrote:
> The Postgres-XC development group releases postgres-xc 1.0.2 with a bunch of fixes/improvements, as well as an upgrade of the base code to PostgreSQL 9.1.7.
>
> You will find resources at:
> http://postgres-xc.sourceforge.net/
> http://postgres-xc.sourceforge.net/docs/
> https://sourceforge.net/projects/postgres-xc/files/Version_1.0/
>
> Upgrading the base PostgreSQL version to 9.2 will be included in the next major release.
>
> ----
> Postgres-XC is a symmetric PostgreSQL cluster which provides both read and write scalability using a mixture of table sharding and replication.
>
> Each Postgres-XC cluster node provides a single database view, where an application can connect to any cluster node and run any transactions. Results of transactions are visible from all the cluster nodes without delay.
> ----------
> Koichi Suzuki |
From: Satoshi N. <sn...@up...> - 2013-02-07 08:15:34
|
(2013/02/07 16:54), Michael Paquier wrote:
> On Thu, Feb 7, 2013 at 4:36 PM, Koichi Suzuki <koi...@gm...> wrote:
>
>> Postgres-XC development group release postgres-xc 1.0.2 with a bunch of fixes/improvement as well as upgrade the base code to PostgreSQL 9.1.7.
>
> There is no 1.0.2 tag in the GIT repository. I am just going to correct that.

Ah, sorry. I just added the tag. Also, I apologize that I pushed many old tags onto the branch, which generated lots of commit mails.

Regards,
--
Satoshi Nagayasu <sn...@up...>
Uptime Technologies, LLC. http://www.uptime.jp |