From: Chris B. <cb...@in...> - 2013-03-18 20:47:13

Hi all, I just read through wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf and I wanted to know how PG-XC solves the problem of split brain.

For example, let's say there is a pair:

Box 1: datanode1-master, datanode2-slave, coordinator master, gtm master
Box 2: datanode1-slave, datanode2-master, coordinator slave, gtm slave

and communication is severed between the two boxes.

Another example:

Box 1: datanode1-master, datanode2-slave, coordinator master, gtm proxy
Box 2: datanode1-slave, datanode2-master, coordinator slave, gtm proxy
Box 3: gtm (no slave exists for simplicity)

and communication is severed between box 1 and 2, but both can still communicate with box 3. (Or another network outage scenario - there are obviously several.)

Chris...
From: Koichi S. <koi...@gm...> - 2013-03-18 08:48:00

A circular configuration will not make any difference in the recovery. The good thing is that this configuration can also apply to an odd number of datanodes/coordinators. A pair configuration may help to build a failover system with Pacemaker/corosync, because it only has to take care of the resources on just two servers. However, I'm not sure whether such a Pacemaker/corosync configuration can be applied to GTM/GTM-slave too.

Regards;
----------
Koichi Suzuki

2013/3/18 Theodotos Andreou <th...@ub...>:
> Thanks. It gets more clear now.
>
> On each node I have a master and a standby coordinator in a pair configuration, just like the example in your slides. Can I use the catalog of the standby server (which is a copy of the master coord on the active node) to restore the failed master coord? In this way I will not have to take the coordinator offline.
>
> Also would it make recovery easier if I use a circular configuration with 4 nodes?
>
> Regards
>
> Theo
>
> On 03/18/2013 07:15 AM, Koichi Suzuki wrote:
>> Yes, you're correct. Because coordinator is essentially identical, you can leave failed coordinators as is and use remaining ones. If the failed server is back, you may have to stop the whole cluster, copy one of the coordinators to the new (restored) server, change the node name etc., to failback the coordinator. If you keep running two, when the failed server is back, you can build new slave to the restored server and then failover to it.
>>
>> It is not reasonable to keep using the same port. Instead, (in my case) I assign different port to coordinators at different server. For example, if you have the server1 and server2 and run coordinator master and slave, I'd assign the port as follows:
>>
>> 5432: coord1, master at server1 and slave at server2,
>> 5433: coord2, master at server2 and slave at server1.
>>
>> You don't have to worry about port number after the failover.
>>
>> We're now implementing on-line node addition/removal feature. This will make things much simpler. You can use only remaining coordinators and later add new one to the restored server.
>>
>> Regards;
>> ----------
>> Koichi Suzuki
>>
>> 2013/3/18 Nikhil Sontakke <ni...@st...>:
>>> Hi Theodotos,
>>>
>>>> Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node, at two different ports, or is this taken care of by the master node on port 5432?
>>>
>>> Why do you want to run two coordinators on one single node? I do not see any benefits to that. Also, to answer your question, the master node on port 5432 does not automatically load balance between itself and the other local coordinator.
>>>
>>> Regards,
>>> Nikhils
>>>
>>>> Expect more stupid questions as I move along the guide! :)
>>>>
>>>> Thanks
>>>>
>>>> Theo
>>>>
>>>> ------------------------------------------------------------------------------
>>>> Everyone hates slow websites. So do we.
>>>> Make your web apps faster with AppDynamics
>>>> Download AppDynamics Lite for free today:
>>>> http://p.sf.net/sfu/appdyn_d2d_mar
>>>> _______________________________________________
>>>> Postgres-xc-general mailing list
>>>> Pos...@li...
>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>>>
>>> --
>>> StormDB - http://www.stormdb.com
>>> The Database Cloud
>>> Postgres-XC Support and Service
From: Theodotos A. <th...@ub...> - 2013-03-18 05:45:00

Thanks. It gets more clear now.

On each node I have a master and a standby coordinator in a pair configuration, just like the example in your slides. Can I use the catalog of the standby server (which is a copy of the master coord on the active node) to restore the failed master coord? In this way I will not have to take the coordinator offline.

Also, would it make recovery easier if I use a circular configuration with 4 nodes?

Regards

Theo

On 03/18/2013 07:15 AM, Koichi Suzuki wrote:
> Yes, you're correct. Because coordinator is essentially identical, you can leave failed coordinators as is and use remaining ones. If the failed server is back, you may have to stop the whole cluster, copy one of the coordinators to the new (restored) server, change the node name etc., to failback the coordinator. If you keep running two, when the failed server is back, you can build new slave to the restored server and then failover to it.
>
> It is not reasonable to keep using the same port. Instead, (in my case) I assign different port to coordinators at different server. For example, if you have the server1 and server2 and run coordinator master and slave, I'd assign the port as follows:
>
> 5432: coord1, master at server1 and slave at server2,
> 5433: coord2, master at server2 and slave at server1.
>
> You don't have to worry about port number after the failover.
>
> We're now implementing on-line node addition/removal feature. This will make things much simpler. You can use only remaining coordinators and later add new one to the restored server.
>
> Regards;
> ----------
> Koichi Suzuki
>
> 2013/3/18 Nikhil Sontakke <ni...@st...>:
>> Hi Theodotos,
>>
>>> Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node, at two different ports, or is this taken care of by the master node on port 5432?
>>
>> Why do you want to run two coordinators on one single node? I do not see any benefits to that. Also, to answer your question, the master node on port 5432 does not automatically load balance between itself and the other local coordinator.
>>
>> Regards,
>> Nikhils
>>
>>> Expect more stupid questions as I move along the guide! :)
>>>
>>> Thanks
>>>
>>> Theo
>>
>> --
>> StormDB - http://www.stormdb.com
>> The Database Cloud
>> Postgres-XC Support and Service
From: Koichi S. <koi...@gm...> - 2013-03-18 05:39:06

Please see my previous reply about running the failed-over node on the same server. The following steps are not tested yet:

1. Stop the whole cluster.
2. Copy all the material from a remaining coordinator to the new coordinator.
3. Edit the new coordinator's postgresql.conf and give a new value for pgxc_node_name.
4. Edit the new coordinator's postgresql.conf to configure gtm/gtm_proxy and the pooler.
5. Edit other postgresql.conf settings if you feel it is necessary.
6. Edit pg_hba.conf if it is different from the original.
7. Start the new coordinator.

I'm not sure if step 3 works fine. Currently initdb initializes its pgxc_node catalog, and I'm afraid step 3 may not update the catalog automatically. In this case, you may have to update pgxc_node using CREATE/ALTER/DROP NODE. Please understand that this is not tested yet either. So, at present, the safest way is to run two coordinators on the same server until the failed server is back, and then switch over to the new server.

Regards;
----------
Koichi Suzuki

2013/3/18 Theodotos Andreou <th...@ub...>:
> On 03/18/2013 07:07 AM, Nikhil Sontakke wrote:
>> Hi Theodotos,
>>
>>> Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node, at two different ports, or is this taken care of by the master node on port 5432?
>>
>> Why do you want to run two coordinators on one single node? I do not see any benefits to that. Also, to answer your question, the master node on port 5432 does not automatically load balance between itself and the other local coordinator.
>>
>> Regards,
>> Nikhils
>
> Hi Nikhils,
>
> I am not sure if there is a benefit running two coordinators on a single node. I was just following Koichi's HA guide for a "pair configuration". Maybe he can enlighten us. :)
>
> I was wondering, since all coordinators have a copy of the same catalog, is there a real need for a standby coordinator?
>
> Also I would like to know the steps I need to take to restore the failed coordinator because I am preparing automatic failover and restore scripts.
>
>>> Expect more stupid questions as I move along the guide! :)
>>>
>>> Thanks
>>>
>>> Theo
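[As a rough illustration only: the seven steps above might look like the following shell session. The data directories, hostnames, ports, and the coordinator name coord_new are all placeholders, and like the steps themselves this sketch is untested.]

```shell
# 1. Stop the whole cluster (repeat for every coordinator/datanode/gtm).
pg_ctl stop -D /data/coord_remaining -m fast

# 2. Copy a remaining coordinator's data directory to the restored server.
rsync -a /data/coord_remaining/ restored-server:/data/coord_new/

# 3. Give the copy its own node name.
#    (If this alone does not update the pgxc_node catalog, fix it with
#    CREATE/ALTER/DROP NODE once the coordinator is up.)
echo "pgxc_node_name = 'coord_new'" >> /data/coord_new/postgresql.conf

# 4. Point it at the local gtm/gtm_proxy and give the pooler its own port.
echo "gtm_host = 'localhost'" >> /data/coord_new/postgresql.conf
echo "gtm_port = 6666"        >> /data/coord_new/postgresql.conf
echo "pooler_port = 6667"     >> /data/coord_new/postgresql.conf

# 5./6. Adjust any other postgresql.conf settings and pg_hba.conf as needed.

# 7. Start the new coordinator.
pg_ctl start -Z coordinator -D /data/coord_new
```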
From: Theodotos A. <th...@ub...> - 2013-03-18 05:21:03

On 03/18/2013 07:07 AM, Nikhil Sontakke wrote:
> Hi Theodotos,
>
>> Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node, at two different ports, or is this taken care of by the master node on port 5432?
>
> Why do you want to run two coordinators on one single node? I do not see any benefits to that. Also, to answer your question, the master node on port 5432 does not automatically load balance between itself and the other local coordinator.
>
> Regards,
> Nikhils

Hi Nikhils,

I am not sure if there is a benefit to running two coordinators on a single node. I was just following Koichi's HA guide for a "pair configuration". Maybe he can enlighten us. :)

I was wondering, since all coordinators have a copy of the same catalog, is there a real need for a standby coordinator?

Also I would like to know the steps I need to take to restore the failed coordinator, because I am preparing automatic failover and restore scripts.

>> Expect more stupid questions as I move along the guide! :)
>>
>> Thanks
>>
>> Theo
From: Koichi S. <koi...@gm...> - 2013-03-18 05:16:04

Yes, you're correct. Because the coordinators are essentially identical, you can leave a failed coordinator as is and use the remaining ones. When the failed server is back, you may have to stop the whole cluster, copy one of the coordinators to the new (restored) server, change the node name etc. to fail back the coordinator. If you keep running two, then when the failed server is back you can build a new slave on the restored server and then fail over to it.

It is not reasonable to keep using the same port. Instead, (in my case) I assign a different port to the coordinators on different servers. For example, if you have server1 and server2 and run coordinator masters and slaves, I'd assign the ports as follows:

5432: coord1, master at server1 and slave at server2,
5433: coord2, master at server2 and slave at server1.

You don't have to worry about the port number after the failover.

We're now implementing an on-line node addition/removal feature. This will make things much simpler. You can use only the remaining coordinators and later add a new one on the restored server.

Regards;
----------
Koichi Suzuki

2013/3/18 Nikhil Sontakke <ni...@st...>:
> Hi Theodotos,
>
>> Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node, at two different ports, or is this taken care of by the master node on port 5432?
>
> Why do you want to run two coordinators on one single node? I do not see any benefits to that. Also, to answer your question, the master node on port 5432 does not automatically load balance between itself and the other local coordinator.
>
> Regards,
> Nikhils
>
>> Expect more stupid questions as I move along the guide! :)
>>
>> Thanks
>>
>> Theo
>
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service
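[The cross-wired port scheme in the message above fits in a tiny lookup; a minimal sketch, assuming the two-server layout and the coordinator names coord1/coord2 used there. The point is that a client always uses the same port for a given coordinator, whichever server currently holds its master.]

```shell
# port 5432 -> coord1: master on server1, slave on server2
# port 5433 -> coord2: master on server2, slave on server1
# After a failover the master moves, but the port stays with the coordinator,
# so client connection strings never change.
coord_port() {
  case "$1" in
    coord1) echo 5432 ;;
    coord2) echo 5433 ;;
  esac
}
coord_port coord1   # prints 5432
coord_port coord2   # prints 5433
```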
From: Theodotos A. <th...@ub...> - 2013-03-18 05:12:10

"All the coordinators are essentially the same copy", right? But the failed coordinator is not the same copy as the others. What steps do I need to take in order to restore the failed coordinator?

The guide says:

"Failed coordinator can be restored offline
● Backup/restore
● Copy catalogue from a remaining coordinator"

How is the above statement translated into commands?

On 03/18/2013 04:25 AM, Koichi Suzuki wrote:
> Because each coordinator is equivalent, both will bring the same logical result (the same database contents). I've not tested yet which is better (use just one coordinator, or both).
>
> Regards;
> ----------
> Koichi Suzuki
>
> 2013/3/17 Theodotos Andreou <th...@ub...>:
>> Hi guys,
>>
>> I am using Koichi's HA guide to set up a two node pgxc cluster:
>>
>> wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf
>>
>> I am using a pair configuration according to page 36.
>>
>> I have two nodes (node1 and node2). The master coordinators run on port 5432 and the slave coordinators on port 5433. I have a TCP/IP loadbalancer (IPVS) that load balances (round robin) the traffic on both nodes on port 5432.
>>
>> When one of the nodes fails, the standby on the other is promoted to master. Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node, at two different ports, or is this taken care of by the master node on port 5432?
>>
>> Expect more stupid questions as I move along the guide! :)
>>
>> Thanks
>>
>> Theo
From: Nikhil S. <ni...@st...> - 2013-03-18 05:08:20

Hi Theodotos,

> Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node, at two different ports, or is this taken care of by the master node on port 5432?

Why do you want to run two coordinators on one single node? I do not see any benefits to that. Also, to answer your question, the master node on port 5432 does not automatically load balance between itself and the other local coordinator.

Regards,
Nikhils

> Expect more stupid questions as I move along the guide! :)
>
> Thanks
>
> Theo

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Koichi S. <koi...@gm...> - 2013-03-18 02:25:54

Because each coordinator is equivalent, both will bring the same logical result (the same database contents). I've not tested yet which is better (use just one coordinator, or both).

Regards;
----------
Koichi Suzuki

2013/3/17 Theodotos Andreou <th...@ub...>:
> Hi guys,
>
> I am using Koichi's HA guide to set up a two node pgxc cluster:
>
> wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf
>
> I am using a pair configuration according to page 36.
>
> I have two nodes (node1 and node2). The master coordinators run on port 5432 and the slave coordinators on port 5433. I have a TCP/IP loadbalancer (IPVS) that load balances (round robin) the traffic on both nodes on port 5432.
>
> When one of the nodes fails, the standby on the other is promoted to master. Now I have two master coordinators running on the same node at different ports (5432 and 5433). Does that mean I will have to reconfigure the load balancer to round robin the traffic on the working node, at two different ports, or is this taken care of by the master node on port 5432?
>
> Expect more stupid questions as I move along the guide! :)
>
> Thanks
>
> Theo
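[For reference, the IPVS round-robin setup described in the original question might be configured along these lines; the virtual IP 10.0.0.100 and the node addresses are placeholders. Note that after a failover the surviving node's promoted coordinator listens on 5433, and as discussed in this thread the coordinator on 5432 will not balance to it automatically, so the real-server entries would need adjusting.]

```shell
# Round-robin virtual service on the coordinator port (placeholder VIP).
ipvsadm -A -t 10.0.0.100:5432 -s rr

# One real server per node, each answering on its local master coordinator.
ipvsadm -a -t 10.0.0.100:5432 -r node1:5432 -m
ipvsadm -a -t 10.0.0.100:5432 -r node2:5432 -m
```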