From: Aaron J. <aja...@re...> - 2014-04-08 20:02:19
My apologies if this seems far too simple. I'm looking at Postgres-XC 1.2beta to build out a datastore. I've been through the documentation several times and built out what I believed was a reasonable first step: a GTM, a single coordinator and two datanodes on an Amazon i2 instance. I start all four instances (gtm, coord, data_node_1 and data_node_2), add the nodes to the coordinator and build my schema - to this point, everything is scripted.

At this point I already see failures to find procs in the database logs. The failures match transactions that are coming in and look like this:

STATEMENT:  COMMIT PREPARED 'T10010'
LOG:  failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT:  COMMIT PREPARED 'T10012'
LOG:  failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT:  COMMIT PREPARED 'T10014'
LOG:  failed to find proc 0x7f6bcbc0a7c0 in ProcArray
STATEMENT:  COMMIT PREPARED 'T10016'

These appear to be mostly benign (though I'm sure they're not). I then begin building the tables in my schema, and this is usually about the point where things break down, typically ending with the coordinator reporting that the database is in recovery mode. There is nothing special about the DDL - for example, it can be as simple as the following:

DROP TABLE IF EXISTS Foo.Bar;
CREATE TABLE Foo.Bar(
    Foo int NOT NULL,
    Bar smallint NOT NULL
) DISTRIBUTE BY HASH( Foo );

The first message to come back is "PANIC: sorry, too many clients already", followed shortly thereafter by "FATAL: the database system is in recovery mode".

The configurations were built using initdb or initgtm directly.

gtm/gtm.conf
----------------------------------------
nodename = 'one'
port = 6666

data_coord/postgresql.conf
----------------------------------------
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 20
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 20
max_pool_size = 200
pgxc_node_name = 'coord_1'

data_node_1/postgresql.conf
----------------------------------------
port = 15432
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 100
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 1
max_pool_size = 100
pgxc_node_name = 'node_1'

data_node_2/postgresql.conf
----------------------------------------
port = 15433
max_connections = 100
shared_buffers = 128MB
max_prepared_transactions = 100
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_timezone = 'UTC'
datestyle = 'iso, mdy'
timezone = 'UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
min_pool_size = 1
max_pool_size = 100
pgxc_node_name = 'node_2'

Any insight would be helpful in understanding what I've done wrong here.

Thank you,
Aaron
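For reference, the "add the nodes to the coordinator" step Aaron mentions is normally a pair of CREATE NODE statements run on the coordinator, followed by a pool reload. A minimal sketch, with the node names and ports taken from the configs above; the host value is an assumption, not taken from Aaron's script:

-- Run on the coordinator (coord_1); host is an assumption, ports are from the configs above.
CREATE NODE node_1 WITH (TYPE = 'datanode', HOST = 'localhost', PORT = 15432);
CREATE NODE node_2 WITH (TYPE = 'datanode', HOST = 'localhost', PORT = 15433);
-- Make the connection pooler pick up the new node definitions.
SELECT pgxc_pool_reload();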
From: Sergio S. <ser...@da...> - 2014-04-08 17:14:36
Hi. I decided to modify my configuration like this:

Node A: 1 gtm master
Node B: 1 gtm_proxy, 1 coordinator master, 1 coordinator slave (for the node C coordinator), 1 datanode master, 1 datanode slave (for the node C datanode)
Node C: 1 gtm_proxy, 1 coordinator master, 1 coordinator slave (for the node B coordinator), 1 datanode master, 1 datanode slave (for the node B datanode)
Node D: 1 gtm slave

I added the following lines to postgresql.conf on each datanode slave; the pgxc_ctl script didn't add them:

archive_mode = off
archive_command = ''
max_wal_senders = 0
wal_level = minimal

In normal operation, everything worked fine.

First I killed the datanode master on node B and promoted the corresponding datanode slave to master with pgxc_ctl. I got the following error:

ERROR:  PGXC node datanode1: two nodes cannot be primary
 pgxc_pool_reload
------------------
 t
(1 row)

ERROR:  PGXC node datanode1: two nodes cannot be primary
 pgxc_pool_reload
------------------
 t
(1 row)

On both master coordinators I queried pgxc_node, and the failed datanode still pointed to node B. I had to execute ALTER NODE to change the host and primary properties. After that, everything worked fine.

Then I killed the coordinator master on node B and promoted the corresponding coordinator slave to master. I got the following results:

ALTER NODE
 pgxc_pool_reload
------------------
 t
(1 row)

ERROR:  Failed to get pooled connections
CONTEXT:  SQL statement "EXECUTE DIRECT ON (coord1) 'SELECT pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'"
 pgxc_pool_reload
------------------
 t
(1 row)

So on the new master coordinator, when I queried pgxc_node, the failed coordinator pointed correctly to node C. But on the original master coordinator of node C, when I queried pgxc_node, the failed coordinator still pointed to node B. On that coordinator I tried to modify the failed coordinator with ALTER NODE, but I got the same "Failed to get pooled connections" error. Finally, when I executed ALTER TABLE ... DELETE NODE for a table, it ran fine on the new master coordinator, but on the original master coordinator of node C I got the same "Failed to get pooled connections" error.

Is this behaviour normal? What can I do?

2014-04-02 23:19 GMT-05:00 鈴木 幸市 <ko...@in...>:

> Before removing a datanode, you should drop the datanode from its tables
> before you drop the datanode itself. You can do this with ALTER TABLE ... DELETE
> NODE as described in
> http://postgres-xc.sourceforge.net/docs/1_1/sql-altertable.html. This
> will extract all the rows from the node and redistribute them (as well as
> the others) to the new set of nodes.
>
> The list of nodes where a table is distributed/replicated is in pgxc_class.
>
> Regards;
> ---
> Koichi Suzuki
>
> On 2014/04/02 15:06, Sergio Sinuco <ser...@da...> wrote:
>
> I use pgxc_ctl.
>
> My configuration is:
>
> Node A: 1 gtm master
> Node B: 1 gtm_proxy, 1 coordinator, 1 datanode
> Node C: 1 gtm_proxy, 1 coordinator, 1 datanode
> Node D: 1 gtm slave
>
> First I stopped the coordinator and datanode on Node C. Then I executed
> pgxc_ctl remove datanode... and pgxc_ctl remove coordinator..., and both were
> ok. But I got a "Failed to get pooled connections" message when I tried an
> insert or update on Node B. I also ran pgxc_pool_reload and restarted the
> cluster. If I run "SELECT * FROM pgxc_node" on Node B I only have one
> coordinator and one datanode.
>
> The insert and update commands were run against a DISTRIBUTE BY REPLICATION
> table.
>
> 2014-04-02 0:32 GMT-05:00 鈴木 幸市 <ko...@in...>:
>
>> Did you change your cluster configuration? Are you using pgxc_ctl, or
>> doing configuration/operation manually?
>>
>> That would help to see what's going on.
>>
>> Thank you;
>> ---
>> Koichi Suzuki
>>
>> On 2014/04/02 14:02, Sergio Sinuco <ser...@da...> wrote:
>>
>> Hi. I dropped a coordinator node and a datanode from a cluster, but
>> when I try to make an insert or update I get a "Failed to get pooled
>> connections" message. I executed pgxc_pool_reload() but it didn't work.
>> What can I do? I use pgxc 1.1.

--
Sergio E. Sinuco Leon
Arquitecto de soluciones
Datatraffic S.A.S.
Móvil: (57) 310 884 26 50
Fijo (+571) 7426160 Ext 115
Calle 93 # 15-27 Ofc. 502
Calle 29 # 6 - 94 Ofc. 601
Bogotá, Colombia.
www.datatraffic.com.co
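For reference, repointing the cluster metadata at a promoted slave, as Sergio had to do, is an ALTER NODE on each surviving coordinator followed by a pool reload. A minimal sketch, where the node name matches the error above but the host and port are illustrative; the PRIMARY and PREFERRED flags can be set in the same WITH clause:

-- Run on each surviving coordinator; host and port are illustrative assumptions.
ALTER NODE datanode1 WITH (HOST = 'nodeC.example.com', PORT = 15432);
-- Make the connection pooler pick up the changed definition.
SELECT pgxc_pool_reload();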
From: Mason S. <ms...@tr...> - 2014-04-08 11:59:21
Hi Tim,

On Tue, Apr 8, 2014 at 12:28 AM, Tim Uckun <tim...@gm...> wrote:
> It would be ideal if shards could be distributed amongst the data nodes in
> a redundant fashion. Perhaps with limits like at least three nodes, no more
> than five nodes, etc.

As mentioned, there is no built-in HA. While some tables can be "replicated", there are no redundant copies of the other tables that are "distributed" (partitioned/sharded). The way to achieve this is analogous to HA in PostgreSQL: have a redundant copy of each node. In addition, Postgres-XC adds a GTM Standby.

My company has used Corosync/Pacemaker for failing over the components for our StormDB branch of Postgres-XC. What we do is have a replica of node1 on node2 and node3, have a replica of node2 on node3 and node4, etc.

What you are describing above (shard redundancy policies) is actually what my employer, TransLattice, offers in its multi-master TED product. It decouples sharding from the nodes so that it is not a 1-1 mapping. It is not open source, however.

Good luck!

Regards,

--
Mason Sharp
TransLattice - http://www.translattice.com
Distributed and Clustered Database Solutions
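To make Mason's replicated-versus-distributed distinction concrete, the choice is made per table in the DDL. A minimal sketch; the table and column names are made up:

-- A replicated table: a full copy is kept on every datanode.
CREATE TABLE lookup_codes (
    code   int PRIMARY KEY,
    label  text NOT NULL
) DISTRIBUTE BY REPLICATION;

-- A distributed (sharded) table: rows are hash-partitioned across datanodes,
-- with no built-in redundant copy of each shard.
CREATE TABLE events (
    event_id  bigint NOT NULL,
    code      int NOT NULL,
    payload   text
) DISTRIBUTE BY HASH (event_id);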
From: Tim U. <tim...@gm...> - 2014-04-08 04:28:42
It would be ideal if shards could be distributed amongst the data nodes in a redundant fashion. Perhaps with limits like at least three nodes, no more than five nodes, etc.

Thanks.

On Tue, Apr 8, 2014 at 12:59 PM, 鈴木 幸市 <ko...@in...> wrote:
> There is no thorough document covering everything on this. So far, an HA configuration
> needs integration with other software such as Pacemaker, and has far too many
> configuration variants to describe in a simple document.
>
> To configure XC with slaves and to fail over the nodes, the following
> material will be helpful:
>
> pgxc_ctl, the Postgres-XC configuration and operation tool. The reference
> manual is at
> http://postgres-xc.sourceforge.net/docs/1_2_1/pgxc-ctl.html
> A pgxc_ctl sample configuration and sample operation scenario are at
> https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013
>
> Regards;
> ---
> Koichi Suzuki
>
> On 2014/04/08 8:20, Tim Uckun <tim...@gm...> wrote:
>
> Is there a document someplace which explains how to achieve high
> availability with PGXC? For example, do you set up two clusters and
> replicate between them? Do you replicate every data node separately? Is
> there a way to "raid" the data so that it exists on multiple nodes so you
> can safely lose a data node?
>
> I know you can specify that a table be present on all the nodes, but I am
> guessing it won't really get you much if you choose every table to be on
> every node.
>
> Thanks.
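On Tim's "do you replicate every data node separately?" question: yes, each datanode (and coordinator) gets its own slave via ordinary PostgreSQL streaming replication, which pgxc_ctl can set up for you. A rough sketch of the master-side settings involved; the values and the archive path are examples only, and the pgxc_ctl manual linked above describes the real mechanism:

# Illustrative postgresql.conf settings on a datanode master so a streaming
# standby can attach (values are examples; pgxc_ctl normally writes these).
wal_level = hot_standby
max_wal_senders = 5
archive_mode = on
archive_command = 'cp %p /path/to/archive/%f'   # archive path is a placeholder
# On the standby side, set hot_standby = on and point recovery.conf at the master.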
From: 鈴木 幸市 <ko...@in...> - 2014-04-08 00:59:16
There is no thorough document covering everything on this. So far, an HA configuration needs integration with other software such as Pacemaker, and has far too many configuration variants to describe in a simple document.

To configure XC with slaves and to fail over the nodes, the following material will be helpful:

pgxc_ctl, the Postgres-XC configuration and operation tool. The reference manual is at
http://postgres-xc.sourceforge.net/docs/1_2_1/pgxc-ctl.html
A pgxc_ctl sample configuration and sample operation scenario are at
https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013

Regards;
---
Koichi Suzuki

On 2014/04/08 8:20, Tim Uckun <tim...@gm...> wrote:

Is there a document someplace which explains how to achieve high availability with PGXC? For example, do you set up two clusters and replicate between them? Do you replicate every data node separately? Is there a way to "raid" the data so that it exists on multiple nodes so you can safely lose a data node?

I know you can specify that a table be present on all the nodes, but I am guessing it won't really get you much if you choose every table to be on every node.

Thanks.
From: 鈴木 幸市 <ko...@in...> - 2014-04-08 00:46:35
So far we don't have any performance test data for a setup where coordinators/datanodes are geographically distributed from the GTM. Any attempt will be helpful for future improvement.

Regards;
---
Koichi Suzuki

On 2014/04/07 23:40, Attila Berenyi <att...@se...> wrote:

Hi,

Thanks for the lightning fast reply.

2. Postgres-XC

The overall structure of the replication looks pretty convincing, but I have a few questions:

- I read somewhere/saw in a slide deck (unfortunately I cannot find it any more) that it creates a performance bottleneck to have the coordinator and the datanode on the same machine (VMs in my test case). Is that still valid?

No, it is not valid. Instead, for simpler load balancing between coordinator and datanode, we advise configuring both on the same machine. This can also exploit data locality for better performance.

That is good to know.

- Is it possible to add 'slave' datanodes to the DB cluster, e.g. an external web server?

Yes. pgxc_ctl from the contrib module will help much. You will find the reference at
http://postgres-xc.sourceforge.net/docs/1_2_1/pgxc-ctl.html
You will find a sample configuration and a pgxc_ctl demo scenario at
https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013

I will definitely check it out.

- Approximate overhead/latency? I guess if the user connects to the local (i.e. the user's office) coordinator and commits some changes, the user still has to wait for the data to be synced with all the datanodes in the cluster, right? What if the DB cluster includes coordinators/datanodes from all over the globe?

We've not assumed the use case where coordinators/datanodes are geographically distributed, not because of the latency but to provide full-featured transaction ACID capability over the cluster. In a local configuration, this latency is negligible.

I'll test this anyway; I am kind of curious what the performance will be like.

Cheers,
Attila
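Since Attila intends to measure the latency himself, one quick and admittedly crude way to get per-commit numbers from a given coordinator is psql's \timing; the probe table below is made up for the test:

-- Run in psql connected to the coordinator under test; the table name is illustrative.
\timing on
CREATE TABLE latency_probe (ts timestamptz) DISTRIBUTE BY ROUNDROBIN;
BEGIN;
INSERT INTO latency_probe VALUES (now());
COMMIT;   -- the reported time includes the round trips to the datanode and the GTM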