From: Michael P. <mic...@gm...> - 2013-04-06 13:00:53
On Sat, Apr 6, 2013 at 4:34 PM, Theodotos Andreou <th...@ub...> wrote:
> Guys hi,
>
> I finally managed, with the help of pgxc_ctl, to set up an HA postgres-xc
> cluster. I have 4 nodes (coord/datanode) and 2 GTMs (active/failover) as
> per the default setup of pgxc_ctl.
> Now I have some questions to get me started.
>
> 1) When using the standard CREATE TABLE sql commands, are the tables
> distributed or replicated? I intend to copy the db schema from my live
> postgresql system and I want to know what the default behavior is.

You can specify the distribution type with the DISTRIBUTE BY clause of
CREATE TABLE. The TO NODE/GROUP clause can be used to specify the list of
nodes where the data is distributed. Documentation is your friend:
http://postgres-xc.sourceforge.net/docs/1_0_2/sql-createtable.html

> 2) Should there be an automatic failover mechanism when one of the
> machines fails? What do you recommend? (heartbeat/pacemaker, keepalived,
> monit, ...)

This is really up to you. Heartbeat would be fine, even Corosync/Pacemaker.
Why not use the one you are the most familiar with?

> 3) The default setup creates 4 coordinators: node1:20004, node2:20005,
> node3:20004, node4:20005. Is it better to use a TCP/IP load balancer to
> accept traffic at port 5432 and load balance to the 4 nodes, or to use an
> application load balancer instead (pgpool, pgbouncer, ...)?

I recall that an instance of pgbouncer can only pool connections to one
node. So pgpool?

> 4) How about optimizations? Shared memory and such? Should it be split
> by 4 since there are 4 instances on each server?

Those are huge questions. First, you should set up a Datanode as you would
set up a normal PG server. For Coordinators, shared_buffers can be reduced
to a quantity just large enough to cache the catalog data, as a Coordinator
does not store any relation data. Depending on your application, you might
not need a high value of work_mem if you are able to push down sort
operations to the Datanodes.
--
Michael
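For reference, a minimal sketch of the CREATE TABLE syntax Michael refers
to. The table, column, and node names (dn1, dn2) are hypothetical, and the
exact TO NODE spelling can differ slightly between XC releases, so check
the linked documentation for your version:

    -- hash-distribute rows across two datanodes
    CREATE TABLE orders (order_id int PRIMARY KEY, customer text)
        DISTRIBUTE BY HASH (order_id)
        TO NODE (dn1, dn2);

    -- keep a full copy of a small lookup table on every datanode
    CREATE TABLE countries (code char(2) PRIMARY KEY, name text)
        DISTRIBUTE BY REPLICATION;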
From: Andrei M. <and...@gm...> - 2013-04-06 08:22:11
PRIMARY was introduced to avoid distributed deadlocks when updating
replicated tables. To better understand the problem, imagine two
transactions, A and B, updating the same tuple of a replicated table
concurrently. Normally the Coordinator sends the same commands to all
Datanodes at the same time, and if on some node A updates the tuple first,
B will wait for the end of transaction A. If on another node B wins the
race, both transactions end up waiting for each other. Such a deadlock is
hard to detect because the information about locks is not sent across the
network, but it can be avoided. The idea is to set one Datanode as the
primary and execute a distributed update on the primary node first, going
on with the other nodes only if the operation succeeds on the primary.
With this approach the row lock on the primary stops concurrent
transactions from taking row locks on other nodes that could prevent
command completion.

So, to have this working properly you should 1) set only one Datanode as
the primary; 2) if you have multiple Coordinators, set the same Datanode
as the primary on each of them. The obvious drawback of the approach is
the doubled execution time of replicated updates. Note: "update" means any
write access.

Hope this answers 1)-3).

Regarding 4), the query

    SELECT nodeoids
    FROM pg_class, pgxc_class
    WHERE pg_class.oid = pcrelid
      AND relname = '<your table name>';

returns the list of nodes where the specified table is distributed. I
guess there are 7 of them.

2013/4/5 Paul Jones <pb...@cm...>
>
> We are experimenting with an 8-datanode, 3-coordinator cluster and we
> have some questions about the use of PRIMARY and a problem.
>
> The manual explains what PRIMARY means but does not provide much detail
> about when you would use it or not.
>
> 1) Can PRIMARY apply to coordinators and, if so, when would you
> want it or not?
>
> 2) When would you use PRIMARY for datanodes or not, and would you
> ever want more than one datanode to be a primary?
>
> 3) Does a pgxc_node datanode entry on its own server have to be
> the FQDN server name or can it be 'localhost'?
>
> 4) We have a table that is defined as DISTRIBUTE BY REPLICATION.
> It only loads on the first 7 nodes. It will just not load on
> node 8. There are a lot of FK references from other tables to it,
> but it itself only has a simple CHAR(11) PK, one constraint,
> and 3 indices.
>
> Has anyone seen anything like this before?
>
> Thanks,
> Paul Jones

--
Andrei Martsinchyk
StormDB - http://www.stormdb.com
The Database Cloud
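A hedged sketch of how a primary Datanode is usually declared in the node
catalog, following what Andrei describes. The node name, host, and port
are hypothetical; the statement has to be issued on every Coordinator
(followed by a pool reload) so that they all agree on the same primary:

    -- run on each Coordinator
    CREATE NODE dn1 WITH (TYPE = 'datanode', HOST = 'dn1.example.com',
                          PORT = 15432, PRIMARY, PREFERRED);
    -- or, if the node entry already exists:
    ALTER NODE dn1 WITH (PRIMARY);
    SELECT pgxc_pool_reload();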
From: Theodotos A. <th...@ub...> - 2013-04-06 07:34:19
Guys hi,

I finally managed, with the help of pgxc_ctl, to set up an HA postgres-xc
cluster. I have 4 nodes (coord/datanode) and 2 GTMs (active/failover) as
per the default setup of pgxc_ctl. Now I have some questions to get me
started.

1) When using the standard CREATE TABLE sql commands, are the tables
distributed or replicated? I intend to copy the db schema from my live
postgresql system and I want to know what the default behavior is.

2) Should there be an automatic failover mechanism when one of the
machines fails? What do you recommend? (heartbeat/pacemaker, keepalived,
monit, ...)

3) The default setup creates 4 coordinators: node1:20004, node2:20005,
node3:20004, node4:20005. Is it better to use a TCP/IP load balancer to
accept traffic at port 5432 and load balance to the 4 nodes, or to use an
application load balancer instead (pgpool, pgbouncer, ...)?

4) How about optimizations? Shared memory and such? Should it be split
by 4 since there are 4 instances on each server?
From: Ashutosh B. <ash...@en...> - 2013-04-06 04:19:12
Can you please elaborate more on what kind of cascade failure you are
seeing? What are the symptoms? Is it a crash? How bad is it? Does it
affect only the running transaction, or does the entire system need a
reboot, or do the data clusters need to be rebuilt?

On Fri, Apr 5, 2013 at 7:12 PM, Arni Sumarlidason <Arn...@md...> wrote:
> Does that also explain the cascade failure?
>
> Is there a cursor limit per coordinator?
>
> From: Ashutosh Bapat [mailto:ash...@en...]
> Sent: Friday, April 05, 2013 1:37 AM
> To: Arni Sumarlidason
> Cc: pos...@li...
> Subject: Re: [Postgres-xc-general] Heavy update utilization
>
> Inheritance is a problem here. Otherwise, we try to send DMLs involving
> a single table directly to the datanode, without any planning at the
> coordinator. Unfortunately, since inheritance is involved in this case,
> each row is modified first on the coordinator and then changed at the
> datanode.
>
> If you are going to use inheritance a lot, maybe you want to take up
> optimizations for the inheritance cases (this is in the context of the
> other thread, where you asked for a todo list).
>
> On Fri, Apr 5, 2013 at 10:18 AM, Arni Sumarlidason <Arn...@md...> wrote:
>> Schema same as before; we left our schema inherited because we don't
>> want to reload our data when support is pushed out.
>>
>> Think inheritance is also causing this issue as well?
>>
>> Statement: UPDATE table SET col = 'stuff' WHERE id = 447938
>>
>> Also might be worth mentioning we are using the psycopg2 python
>> connector.
>>
>> Thank you for your advice.
>>
>> Sent from my iPhone
>>
>> On Apr 4, 2013, at 11:57 PM, "Ashutosh Bapat" <ash...@en...> wrote:
>>> Can you please post your update query here with all the relevant
>>> definitions, like the table schema etc.?
>>>
>>> On Fri, Apr 5, 2013 at 3:15 AM, Arni Sumarlidason <Arn...@md...> wrote:
>>>> General,
>>>>
>>>> We had a problem doing updates on our database today; it seems like
>>>> updates are a pretty heavy operation for the coordinator. They tax
>>>> our machines at 60-70% utilization [1] with 5 update cursors, and for
>>>> reasons unknown a 6th connection causes a cascade failure across all
>>>> coordinators.
>>>>
>>>> Do you know what would cause this?
>>>>
>>>> [1] http://www.sumarlidason.com/perm/130404/load8.PNG - the four
>>>> machines at the bottom of the screen are coordinators, which
>>>> correspond to the thick utilization in the graph.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
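This thread hinges on whether the UPDATE can be shipped whole to the
Datanodes or has to be evaluated row by row on the Coordinator. One way to
check from psql is to look at the plan; the statement below is only a
sketch (the table and column names are hypothetical), and the exact plan
node names vary between XC versions, but a fully pushed-down DML generally
appears as a single remote step against the target Datanode(s) rather than
a Coordinator-side update driving per-row remote queries:

    EXPLAIN VERBOSE
    UPDATE measurements SET col = 'stuff' WHERE id = 447938;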