From: Koichi S. <ko...@in...> - 2013-02-15 09:10:37

If you're not sure about the configuration, please try pgxc_ctl, available at git://github.com/koichi-szk/PGXC-Tools.git. It is a bash script (I'm rewriting it in C now), so it will help you understand how to configure XC.

Regards;
---
Koichi Suzuki

On Fri, 15 Feb 2013 04:22:49 +0000, Arni Sumarlidason <Arn...@md...> wrote:
> Thank you both for the fast response!!
>
> RE: Koichi Suzuki
> I downloaded the git this afternoon.
>
> RE: Michael Paquier
> - Confirmed: it is from the datanode's log.
> - Both coordinator & datanode connect via the same gtm_proxy on localhost.
>
> These are my simplified configs; the only change I make on each node is the node name.
>
> PG_HBA
> local all all trust
> host all all 127.0.0.1/32 trust
> host all all ::1/128 trust
> host all all 10.100.170.0/24 trust
>
> COORD
> pgxc_node_name = 'coord01'
> listen_addresses = '*'
> port = 5432
> max_connections = 200
>
> gtm_port = 6666
> gtm_host = 'localhost'
> pooler_port = 6670
>
> shared_buffers = 32MB
> work_mem = 1MB
> maintenance_work_mem = 16MB
> max_stack_depth = 2MB
>
> log_timezone = 'US/Eastern'
> datestyle = 'iso, mdy'
> timezone = 'US/Eastern'
> lc_messages = 'en_US.UTF-8'
> lc_monetary = 'en_US.UTF-8'
> lc_numeric = 'en_US.UTF-8'
> lc_time = 'en_US.UTF-8'
> default_text_search_config = 'pg_catalog.english'
>
> DATA
> pgxc_node_name = 'data01'
> listen_addresses = '*'
> port = 5433
> max_connections = 200
>
> gtm_port = 6666
> gtm_host = 'localhost'
>
> shared_buffers = 32MB
> work_mem = 1MB
> maintenance_work_mem = 16MB
> max_stack_depth = 2MB
>
> log_timezone = 'US/Eastern'
> datestyle = 'iso, mdy'
> timezone = 'US/Eastern'
> lc_messages = 'en_US.UTF-8'
> lc_monetary = 'en_US.UTF-8'
> lc_numeric = 'en_US.UTF-8'
> lc_time = 'en_US.UTF-8'
> default_text_search_config = 'pg_catalog.english'
>
> PROXY
> nodename = 'proxy01'
> listen_addresses = '*'
> port = 6666
> gtm_host = '10.100.170.10'
> gtm_port = 6666
>
> best,
>
> Arni
>
> From: Michael Paquier [mailto:mic...@gm...]
> Sent: Thursday, February 14, 2013 11:06 PM
> To: Arni Sumarlidason
> Cc: pos...@li...
> Subject: Re: [Postgres-xc-general] pgxc: snapshot
>
> On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:
>> Hi Everyone!
>>
>> I am getting these errors: "Warning: do not have a gtm snapshot available" [1]. After researching, I found posts about autovacuum causing these errors; is this fixed, or a work in progress? Also, I am seeing them without the "CONTEXT: automatic vacuum" message too. Is this something to worry about? The cluster seems to be functioning normally.
>>
>> Vacuum and analyze from pgAdmin look like this:
>>
>> INFO: vacuuming "public.table"
>> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
>> DETAIL: 0 dead row versions cannot be removed yet.
>> CPU 0.00s/0.00u sec elapsed 0.00 sec.
>> INFO: analyzing "public.table"
>> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
>> Total query runtime: 15273 ms.
>>
>> Should we use EXECUTE DIRECT to perform maintenance?
>
> No. Isn't this happening on a Datanode?
> Be sure first to set gtm_host and gtm_port in the postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are of course fetched on the Coordinator for normal transaction runs, but also on all the nodes for autovacuum.
> --
> Michael
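The topology implied by Arni's configs is that the coordinator and datanode both point gtm_host at a gtm_proxy on the same machine, and only the proxy knows the real GTM address. A minimal sketch of that relationship, with ports and addresses copied from the configs quoted above (treat them as illustrative):

```
# coordinator/postgresql.conf and datanode/postgresql.conf
# (both talk to the local gtm_proxy, not to the GTM directly)
gtm_host = 'localhost'
gtm_port = 6666              # port the local gtm_proxy listens on

# gtm_proxy.conf (the only process configured with the real GTM address)
nodename = 'proxy01'
port     = 6666              # must match gtm_port on the local nodes
gtm_host = '10.100.170.10'   # the actual GTM server
gtm_port = 6666              # port the GTM itself listens on
```

If the proxy is down or its port does not match the nodes' gtm_port, the nodes cannot obtain GXIDs or snapshots even though their own configuration files look self-consistent.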
From: Koichi S. <ko...@in...> - 2013-02-15 09:07:54

I fixed this issue happening on the Datanode too; the fix is included in 1.0.2. Yeah, it is important to configure GTM correctly for all the datanodes/coordinators. I wonder about that, though: if the configuration were not correct, these nodes wouldn't start up at all.

Regards;
---
Koichi

On Fri, 15 Feb 2013 13:06:22 +0900, Michael Paquier <mic...@gm...> wrote:
> On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:
>> Hi Everyone!
>>
>> I am getting these errors: "Warning: do not have a gtm snapshot available" [1]. [...]
>>
>> Should we use EXECUTE DIRECT to perform maintenance?
>
> No. Isn't this happening on a Datanode?
> Be sure first to set gtm_host and gtm_port in the postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are of course fetched on the Coordinator for normal transaction runs, but also on all the nodes for autovacuum.
> --
> Michael
From: Arni S. <Arn...@md...> - 2013-02-15 04:23:05

Thank you both for the fast response!!

RE: Koichi Suzuki
I downloaded the git this afternoon.

RE: Michael Paquier
- Confirmed: it is from the datanode's log.
- Both coordinator & datanode connect via the same gtm_proxy on localhost.

These are my simplified configs; the only change I make on each node is the node name.

PG_HBA
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
host all all 10.100.170.0/24 trust

COORD
pgxc_node_name = 'coord01'
listen_addresses = '*'
port = 5432
max_connections = 200
gtm_port = 6666
gtm_host = 'localhost'
pooler_port = 6670
shared_buffers = 32MB
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB
log_timezone = 'US/Eastern'
datestyle = 'iso, mdy'
timezone = 'US/Eastern'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'

DATA
pgxc_node_name = 'data01'
listen_addresses = '*'
port = 5433
max_connections = 200
gtm_port = 6666
gtm_host = 'localhost'
shared_buffers = 32MB
work_mem = 1MB
maintenance_work_mem = 16MB
max_stack_depth = 2MB
log_timezone = 'US/Eastern'
datestyle = 'iso, mdy'
timezone = 'US/Eastern'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'

PROXY
nodename = 'proxy01'
listen_addresses = '*'
port = 6666
gtm_host = '10.100.170.10'
gtm_port = 6666

best,

Arni

From: Michael Paquier [mailto:mic...@gm...]
Sent: Thursday, February 14, 2013 11:06 PM
To: Arni Sumarlidason
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] pgxc: snapshot

On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:
> Hi Everyone!
>
> I am getting these errors: "Warning: do not have a gtm snapshot available" [1]. [...]
>
> Should we use EXECUTE DIRECT to perform maintenance?

No. Isn't this happening on a Datanode?
Be sure first to set gtm_host and gtm_port in the postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are of course fetched on the Coordinator for normal transaction runs, but also on all the nodes for autovacuum.
--
Michael
From: Michael P. <mic...@gm...> - 2013-02-15 04:06:34

On Fri, Feb 15, 2013 at 12:57 PM, Arni Sumarlidason <Arn...@md...> wrote:
> Hi Everyone!
>
> I am getting these errors: "Warning: do not have a gtm snapshot available" [1]. After researching, I found posts about autovacuum causing these errors; is this fixed, or a work in progress? Also, I am seeing them without the "CONTEXT: automatic vacuum" message too. Is this something to worry about? The cluster seems to be functioning normally.
>
> Vacuum and analyze from pgAdmin look like this:
>
> INFO: vacuuming "public.table"
> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO: analyzing "public.table"
> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
> Total query runtime: 15273 ms.
>
> Should we use EXECUTE DIRECT to perform maintenance?

No. Isn't this happening on a Datanode?
Be sure first to set gtm_host and gtm_port in the postgresql.conf of all the nodes, Coordinator and Datanode included. GXID and snapshots are of course fetched on the Coordinator for normal transaction runs, but also on all the nodes for autovacuum.
--
Michael
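Michael's checklist boils down to this: every node, Coordinator and Datanode alike, must be able to reach the GTM (or its proxy). Before digging into snapshot internals, a quick connectivity probe run on each node can rule out the network layer. This is a hypothetical helper, not part of Postgres-XC; the host and port are whatever your postgresql.conf's gtm_host/gtm_port say:

```python
# Hypothetical helper: check that this machine can open a TCP connection
# to the GTM (or gtm_proxy) endpoint configured in postgresql.conf.
# It verifies reachability only, not the GTM protocol itself.
import socket

def gtm_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example, using the values discussed in this thread:
# gtm_reachable("localhost", 6666)   # coordinator/datanode -> local gtm_proxy
```

If this returns False on any node, fixing the address, port, or firewall is the first step; the "do not have a gtm snapshot available" warning cannot go away while a node cannot reach GTM at all.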
From: Koichi S. <ko...@in...> - 2013-02-15 04:04:59

You don't have to use EXECUTE DIRECT in this case. I found a similar issue last December and made a fix for both REL1_0_STABLE and master. I believe it is included in 1.0.2 and hope it fixes your issue. Could you try the latest one? If you still have the same problem, please let me know.

Best;
---
Koichi Suzuki

On Fri, 15 Feb 2013 03:57:38 +0000, Arni Sumarlidason <Arn...@md...> wrote:
> Hi Everyone!
>
> I am getting these errors: "Warning: do not have a gtm snapshot available" [1]. After researching, I found posts about autovacuum causing these errors; is this fixed, or a work in progress? Also, I am seeing them without the "CONTEXT: automatic vacuum" message too. Is this something to worry about? The cluster seems to be functioning normally.
>
> Vacuum and analyze from pgAdmin look like this:
>
> INFO: vacuuming "public.table"
> INFO: "table": found 0 removable, 0 nonremovable row versions in 0 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO: analyzing "public.table"
> INFO: "table": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rows
> Total query runtime: 15273 ms.
>
> Should we use EXECUTE DIRECT to perform maintenance?
>
> Arni Sumarlidason | Software Engineer, Information Technology
>
> [1] [cid:image001.png@01CE0B04.7BC38930]