From: Aaron J. <aja...@re...> - 2014-04-09 15:53:00
So, I upgraded to 1.2.1 and rebuilt the databases from scratch to ensure there were no residual issues from a previous install. Again, the configuration is coord_1, data_node_1, data_node_2 and gtm on a single instance host. After rebuilding with initdb and initgtm and restarting the server, I have clean logs with just a notification that the database is ready to accept connections and the autovacuum launcher starting.

I connected to the coordinator and created two data nodes - no problems. Then I created the database as follows:

CREATE DATABASE demo;

This comes back from the coordinator successfully. However, the logs appear to have generated the following errors, in the sequence shown.

==> data_node_1/pg_log/postgresql-2014-04-09_152532.log <==
LOG: failed to find proc 0x7fabf3d9d7c0 in ProcArray
STATEMENT: COMMIT PREPARED 'T10008'

==> data_node_2/pg_log/postgresql-2014-04-09_152532.log <==
LOG: failed to find proc 0x7f7aa44ac7c0 in ProcArray
STATEMENT: COMMIT PREPARED 'T10008'

==> data_coord/pg_log/postgresql-2014-04-09_152532.log <==
LOG: failed to find proc 0x7fadfaa18240 in ProcArray
STATEMENT: CREATE DATABASE demo

==> data_node_2/pg_log/postgresql-2014-04-09_152532.log <==
LOG: failed to find proc 0x7f7aa4499700 in ProcArray

==> data_node_1/pg_log/postgresql-2014-04-09_152532.log <==
LOG: failed to find proc 0x7fabf3d8a700 in ProcArray
LOG: failed to find proc 0x7fabf3d8a700 in ProcArray

As before, subsequent operations eventually fail, and I have to believe it's related to this failure during database creation. I can't see anything special in the configurations - they seem incredibly simple - but clearly there is something I've overlooked here.

Aaron

________________________________
From: 鈴木 幸市 [ko...@in...]
Sent: Tuesday, April 08, 2014 8:30 PM
To: Aaron Jackson
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Newbie Question

The 1.2 beta has an issue in adding nodes; 1.2.1 fixes this problem, so please simply replace the binaries with 1.2.1 and try again. Because of the error, there could be some inconsistency among the datanodes, so if possible, dump all the data and restore it.

Regards;
---
Koichi Suzuki

On 2014/04/09 4:47, Aaron Jackson <aja...@re...> wrote:

My apologies if this seems far too simple. I'm looking at Postgres-XC 1.2beta to build out a datastore. I've been through the documentation several times and built out what I believed was a reasonable first step: a GTM, a single coordinator and two data nodes on an Amazon i2 instance. I start all four instances (gtm, coord, data_node_1 and data_node_2), add the nodes to the coordinator and build my schema - to this point, everything is scripted.

At this point, I already see failures to find procs in the database logs. The failures match the transactions that are coming in and are as follows:

STATEMENT: COMMIT PREPARED 'T10010'
LOG: failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT: COMMIT PREPARED 'T10012'
LOG: failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT: COMMIT PREPARED 'T10014'
LOG: failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT: COMMIT PREPARED 'T10016'

Although these appear to be mostly benign (I'm sure they're not), I begin building the tables in my schema, and this is usually about the point where I begin to experience a breakdown, usually resulting in the coordinator reporting that the database is in recovery mode.

There is nothing special about the DDL - for example, it can be as simple as the following:

DROP TABLE IF EXISTS Foo.Bar;
CREATE TABLE Foo.Bar(
    Foo int NOT NULL,
    Bar smallint NOT NULL
) DISTRIBUTE BY HASH( Foo );

The first message to come back is "PANIC: sorry, too many clients already", followed shortly thereafter by "FATAL: the database system is in recovery mode". The configurations were built using initdb or initgtm directly.

gtm/gtm.conf
----------------------------------------
nodename = 'one'
port = 6666

data_coord/postgresql.conf
----------------------------------------
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 20
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 20
max_pool_size = 200
pgxc_node_name = 'coord_1'

data_node_1/postgresql.conf
----------------------------------------
port = 15432
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 100
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 1
max_pool_size = 100
pgxc_node_name = 'node_1'

data_node_2/postgresql.conf
----------------------------------------
port = 15433
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 100
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 1
max_pool_size = 100
pgxc_node_name = 'node_2'

Any insight would be helpful in understanding what I've done wrong here.

Thank you,
Aaron
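For anyone retracing these steps, the node registration and a quick health check from the coordinator would look roughly like the sketch below. The node names, host and ports are assumptions taken from the configuration files above (node_1 on 15432, node_2 on 15433, everything on one host), and querying pg_prepared_xacts is only one way to look for orphaned two-phase transactions such as 'T10008'; the thread does not confirm it as a fix.

    -- Sketch only: register the two datanodes on the coordinator and reload its pool
    -- (names, host and ports assumed from the postgresql.conf files quoted above).
    CREATE NODE node_1 WITH (TYPE = 'datanode', HOST = 'localhost', PORT = 15432);
    CREATE NODE node_2 WITH (TYPE = 'datanode', HOST = 'localhost', PORT = 15433);
    SELECT pgxc_pool_reload();

    -- Confirm how the coordinator sees the cluster.
    SELECT * FROM pgxc_node;

    -- After errors like COMMIT PREPARED 'T10008', check for orphaned two-phase
    -- transactions on the coordinator and on each datanode.
    SELECT gid, prepared, owner, database FROM pg_prepared_xacts;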
From: 鈴木 幸市 <ko...@in...> - 2014-04-09 08:43:23
Please try pg_dumpall and see if it works.

Regards;
---
Koichi Suzuki

On 2014/04/09 17:31, Juned Khan <jkh...@gm...> wrote:

Hi All,

I recently installed pgxc-1.2.1 on one of my production servers, but when I tried to take a backup of all the data it gave me the errors below.

PGXC pg_dump mydatabase > mydatabase.sql
pg_dump: [archiver (db)] query failed: ERROR: cannot execute nextval() in a read-only transaction
pg_dump: [archiver (db)] query was: SELECT pg_catalog.nextval('account_package_id_seq');

With some googling I found this: http://www.postgresql.org/message-id/4DC...@po...

How do I solve this issue? Will even pg_dumpall work? Please suggest.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
From: Juned K. <jkh...@gm...> - 2014-04-09 08:31:46
Hi All,

I recently installed pgxc-1.2.1 on one of my production servers, but when I tried to take a backup of all the data it gave me the errors below.

PGXC pg_dump mydatabase > mydatabase.sql
pg_dump: [archiver (db)] query failed: ERROR: cannot execute nextval() in a read-only transaction
pg_dump: [archiver (db)] query was: SELECT pg_catalog.nextval('account_package_id_seq');

With some googling I found this: http://www.postgresql.org/message-id/4DC...@po...

How do I solve this issue? Will even pg_dumpall work? Please suggest.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
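The failure appears to come from the read-only transaction that pg_dump opens before it reads sequence values. A minimal way to reproduce the same error outside of pg_dump, reusing the sequence name from the error message above, would be something like:

    -- Sketch only: mimic pg_dump's read-only transaction and call nextval()
    -- on the sequence named in the error above; it should fail the same way.
    BEGIN TRANSACTION READ ONLY;
    SELECT pg_catalog.nextval('account_package_id_seq');
    ROLLBACK;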
From: 鈴木 幸市 <ko...@in...> - 2014-04-09 01:30:41
The 1.2 beta has an issue in adding nodes; 1.2.1 fixes this problem, so please simply replace the binaries with 1.2.1 and try again. Because of the error, there could be some inconsistency among the datanodes, so if possible, dump all the data and restore it.

Regards;
---
Koichi Suzuki

On 2014/04/09 4:47, Aaron Jackson <aja...@re...> wrote:

My apologies if this seems far too simple. I'm looking at Postgres-XC 1.2beta to build out a datastore. I've been through the documentation several times and built out what I believed was a reasonable first step: a GTM, a single coordinator and two data nodes on an Amazon i2 instance. I start all four instances (gtm, coord, data_node_1 and data_node_2), add the nodes to the coordinator and build my schema - to this point, everything is scripted.

At this point, I already see failures to find procs in the database logs. The failures match the transactions that are coming in and are as follows:

STATEMENT: COMMIT PREPARED 'T10010'
LOG: failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT: COMMIT PREPARED 'T10012'
LOG: failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT: COMMIT PREPARED 'T10014'
LOG: failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT: COMMIT PREPARED 'T10016'

Although these appear to be mostly benign (I'm sure they're not), I begin building the tables in my schema, and this is usually about the point where I begin to experience a breakdown, usually resulting in the coordinator reporting that the database is in recovery mode.

There is nothing special about the DDL - for example, it can be as simple as the following:

DROP TABLE IF EXISTS Foo.Bar;
CREATE TABLE Foo.Bar(
    Foo int NOT NULL,
    Bar smallint NOT NULL
) DISTRIBUTE BY HASH( Foo );

The first message to come back is "PANIC: sorry, too many clients already", followed shortly thereafter by "FATAL: the database system is in recovery mode". The configurations were built using initdb or initgtm directly.

gtm/gtm.conf
----------------------------------------
nodename = 'one'
port = 6666

data_coord/postgresql.conf
----------------------------------------
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 20
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 20
max_pool_size = 200
pgxc_node_name = 'coord_1'

data_node_1/postgresql.conf
----------------------------------------
port = 15432
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 100
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 1
max_pool_size = 100
pgxc_node_name = 'node_1'

data_node_2/postgresql.conf
----------------------------------------
port = 15433
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 100
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 1
max_pool_size = 100
pgxc_node_name = 'node_2'

Any insight would be helpful in understanding what I've done wrong here.

Thank you,
Aaron
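If the data is dumped and restored as suggested, one way to sanity-check afterwards that rows actually reached both datanodes is to count them per node from the coordinator. This is only a sketch; the table Foo.Bar and the node names node_1/node_2 are taken from the quoted message and configurations above.

    -- Sketch only: compare the cluster-wide count with per-datanode counts.
    SELECT count(*) FROM Foo.Bar;
    EXECUTE DIRECT ON (node_1) 'SELECT count(*) FROM Foo.Bar';
    EXECUTE DIRECT ON (node_2) 'SELECT count(*) FROM Foo.Bar';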
From: 鈴木 幸市 <ko...@in...> - 2014-04-09 01:27:34
Could you let me know which release you used? I suppose 1.2.1 and 1.1.1 have this fix. Also, let me know how you dropped the node: manually, or with pgxc_ctl?

---
Koichi Suzuki

On 2014/04/09 1:46, Sergio Sinuco <ser...@da...> wrote:

Hi. I decided to modify my configuration like this:

Node A: 1 gtm master
Node B: 1 gtm_proxy, 1 coordinator master, 1 coordinator slave (for node C coordinator), 1 datanode master, 1 datanode slave (for node C data node)
Node C: 1 gtm_proxy, 1 coordinator master, 1 coordinator slave (for node B coordinator), 1 datanode master, 1 datanode slave (for node B data node)
Node D: 1 gtm slave

I added the following lines to postgresql.conf on each data node slave; the pgxc_ctl script didn't add them.

archive_mode = off
archive_command = ''
max_wal_senders = 0
wal_level = minimal

In normal operation, everything worked fine. First I killed the data node master in Node B and promoted the corresponding data node slave to master with pgxc_ctl. I got the following error:

ERROR: PGXC node datanode1: two nodes cannot be primary
 pgxc_pool_reload
------------------
 t
(1 row)
ERROR: PGXC node datanode1: two nodes cannot be primary
 pgxc_pool_reload
------------------
 t
(1 row)

In both master coordinators, I queried pgxc_node and the failed datanode still pointed to node B. I had to execute ALTER NODE to change the host and primary properties. After that, everything worked fine.

Then I killed the coordinator master in Node B and promoted the corresponding coordinator slave to master. I got the following results:

ALTER NODE
 pgxc_pool_reload
------------------
 t
(1 row)
ERROR: Failed to get pooled connections
CONTEXT: SQL statement "EXECUTE DIRECT ON (coord1) 'SELECT pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'"
 pgxc_pool_reload
------------------
 t
(1 row)

So in the new master coordinator, when I queried pgxc_node, the failed coordinator pointed correctly to node C. But in the original master coordinator of node C, when I queried pgxc_node, the failed coordinator still pointed to node B. In this coordinator I tried to modify the failed coordinator with ALTER NODE, but I got the same error, Failed to get pooled connections.

Finally, when I executed ALTER TABLE … DELETE NODE for a table, it executed OK in the new master coordinator, but in the original master coordinator of node C I got the same error, Failed to get pooled connections.

Is this behaviour normal? What can I do?

On 2014-04-02 23:19 GMT-05:00, 鈴木 幸市 <ko...@in...> wrote:

Before removing a datanode, you should drop the datanode from your tables before you drop the datanode itself. You can do this with ALTER TABLE … DELETE NODE as described in http://postgres-xc.sourceforge.net/docs/1_1/sql-altertable.html. This will extract all the rows from the node and redistribute them (as well as the others) to the new set of nodes. The list of nodes where a table is distributed/replicated is in pgxc_class.

Regards;
---
Koichi Suzuki

On 2014/04/02 15:06, Sergio Sinuco <ser...@da...> wrote:

I use pgxc_ctl. My configuration is:

Node A: 1 gtm master
Node B: 1 gtm_proxy, 1 coordinator, 1 datanode
Node C: 1 gtm_proxy, 1 coordinator, 1 datanode
Node D: 1 gtm slave

First I stopped the coordinator and data node of Node C. Then I executed pgxc_ctl remove datanode.... and pgxc_ctl remove coordinator...; both were OK. But I got a "Failed to get pooled connections" message when I tried an insert or update on Node B. I also ran pgxc_pool_reload and restarted the cluster. If I run "SELECT * FROM pgxc_node" on Node B, I only have one coordinator and one data node.

The insert and update commands were run on a DISTRIBUTE BY REPLICATION table.

On 2014-04-02 0:32 GMT-05:00, 鈴木 幸市 <ko...@in...> wrote:

Did you change your cluster configuration? Are you using pgxc_ctl, or doing configuration/operation manually? That will help to see what's going on.

Thank you;
---
Koichi Suzuki

On 2014/04/02 14:02, Sergio Sinuco <ser...@da...> wrote:

Hi. I dropped a coordinator node and a data node from a cluster, but when I try to make an insert or update I get a "Failed to get pooled connections" message. I executed pgxc_pool_reload() but it didn't work. What can I do? I use pgxc 1.1.

--
Sergio E. Sinuco Leon
Solutions Architect
Datatraffic S.A.S.
Mobile: (57) 310 884 26 50
Landline: (+571) 7426160 Ext 115
Calle 93 # 15-27 Ofc. 502
Calle 29 # 6 - 94 Ofc. 601
Bogotá, Colombia.
www.datatraffic.com.co
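For reference, the sequence Koichi describes (empty the datanode of table data before dropping it, then refresh the connection pools) would look roughly like this on a coordinator. The table and node names below are placeholders rather than names from this thread, and pgxc_pool_reload() would need to be run on each coordinator after the node definitions change.

    -- Sketch only: redistribute a table away from a datanode before dropping it
    -- (placeholder names; repeat the ALTER TABLE for each table listed in pgxc_class).
    ALTER TABLE my_schema.my_table DELETE NODE (datanode2);
    DROP NODE datanode2;
    -- Run on every coordinator so its pooled connections are refreshed.
    SELECT pgxc_pool_reload();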