From: Aaron J. <aja...@re...> - 2014-05-22 16:28:27
Given my past experience with compiler issues, I'm a little hesitant to even report this. That said, I have a three-node cluster, each node running a coordinator, a data node and a GTM proxy, plus a standalone GTM instance without a slave. Often, when I come in after the servers have been up for a while, I'm greeted with a variety of issues. There are several warnings in the coordinator and data node logs that read "Do not have a GTM snapshot available" - I've discarded these as mostly benign for the moment. The coordinator is much worse:

30770 | 2014-05-22 15:53:06 UTC | ERROR: current transaction is aborted, commands ignored until end of transaction block
30770 | 2014-05-22 15:53:06 UTC | STATEMENT: DISCARD ALL
4560 | 2014-05-22 15:54:30 UTC | LOG: failed to connect to Datanode
4560 | 2014-05-22 15:54:30 UTC | LOG: failed to connect to Datanode
4560 | 2014-05-22 15:54:30 UTC | WARNING: can not connect to node 16390
30808 | 2014-05-22 15:54:30 UTC | LOG: failed to acquire connections

Usually, I reset the coordinator and data node and the world is happy again. However, it makes me somewhat concerned that I'm seeing these kinds of failures on a daily basis. I wouldn't rule out the compiler again, as it's been the reason for previous failures, but has anyone else seen anything like this?

Aaron
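When a coordinator reports "failed to acquire connections", a first check is whether its node registration and connection pool still look sane. A minimal diagnostic sketch, assuming the standard Postgres-XC pgxc_node catalog columns and pooler function from the 1.x series:

  -- On the coordinator: list the nodes it currently knows about.
  SELECT node_name, node_type, node_host, node_port FROM pgxc_node;

  -- Reload the pooler's connection information after a data node restart,
  -- so stale pooled connections are discarded.
  SELECT pgxc_pool_reload();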
From: Koichi S. <koi...@gm...> - 2014-05-22 13:21:35
Hello;

Thanks a lot for the idea. Please find my comments inline. I hope you consider them and move forward to make your goal more feasible.

Regards;
---
Koichi Suzuki

2014-05-22 4:19 GMT-04:00 ZhangJulian <jul...@ou...>:
> Hi All,
>
> I plan to implement it as the below idea.
> 1. Add a new GUC to the coordinator configuration, which controls whether the
> READ/WRITE separation feature is ON or OFF.
> 2. Extend the catalog table pgxc_node by adding new columns: slave1_host,
> slave1_port, slave1_id, slave2_host, slave2_port, slave2_id. Suppose at most
> two slaves are supported.

I don't think this is a good idea. If we have this info in the catalog, it will all go to the slaves by WAL shipping and will be used when a slave is promoted. This information is not valid once the master is gone and one of the slaves has been promoted.

> 3. A read-only transaction, or the leading read-only part of a transaction, will
> be routed to a slave node for execution.

With current WAL shipping, we have to expect some difference in when a transaction's or statement's update becomes visible on the slave. At the least, even with synchronous replication, there is a slight delay after a WAL record is received before it is replayed and becomes visible to hot standby. There is even a chance that such an update becomes visible on the slave before it is visible at the master. Therefore, use cases for current hot standby have to allow for such differences, and I don't think your example allows for these WAL-shipping replication characteristics.

Moreover, the current hot standby implementation assumes the slave receives every XID used in updates. It does not allow for missing XIDs, and this assumption is used to generate the snapshots that enforce update visibility. In XC, because of the nature of GXIDs, some GXIDs may be missing at a given slave. At present, because we didn't have sufficient resources, snapshot generation is disabled. In addition, a local snapshot may not work; we need the global XID (GXID) to get consistent results. For these reasons, it is not simple to provide a consistent database view from the slaves. I discussed this at the PGCon cluster summit this Tuesday, and I'm afraid it needs much more analysis, research and design.

> For example,
> begin;
> select ....; ==> go to slave node
> select ....; ==> go to slave node
> insert ....; ==> go to master node
> select ....; ==> go to master node, since it may visit the row inserted by
> the previous insert statement.
> end;
>
> By this, in a cluster,
> some coordinators can be configured to support the OLTP system, with queries
> routed to the master data nodes;
> other coordinators can be configured to support the report system, with queries
> routed to the slave data nodes;
> the different workloads are then applied to different coordinators and data
> nodes, so they can be isolated.
>
> Do you think it is valuable? Do you have any advice?
>
> Thanks
> Julian
>
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
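The visibility gap described above can be observed even with plain PostgreSQL streaming replication, independent of XC. A minimal sketch using standard PostgreSQL monitoring functions (nothing XC-specific): a query routed to a hot standby only sees what the standby has replayed so far.

  -- Run on the standby: confirm it is in recovery and estimate replay lag.
  -- A SELECT routed here may miss commits made on the master within this window.
  SELECT pg_is_in_recovery()                     AS in_recovery,
         now() - pg_last_xact_replay_timestamp() AS approx_replay_lag;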
From: ZhangJulian <jul...@ou...> - 2014-05-22 08:19:43
Hi All,

I plan to implement it as the below idea.

1. Add a new GUC to the coordinator configuration, which controls whether the READ/WRITE separation feature is ON or OFF.
2. Extend the catalog table pgxc_node by adding new columns: slave1_host, slave1_port, slave1_id, slave2_host, slave2_port, slave2_id. Suppose at most two slaves are supported.
3. A read-only transaction, or the leading read-only part of a transaction, will be routed to a slave node for execution.

For example:

begin;
select ....; ==> go to slave node
select ....; ==> go to slave node
insert ....; ==> go to master node
select ....; ==> go to master node, since it may visit the row inserted by the previous insert statement
end;

By this, in a cluster, some coordinators can be configured to support the OLTP system, with queries routed to the master data nodes, while other coordinators can be configured to support the report system, with queries routed to the slave data nodes. The different workloads are then applied to different coordinators and data nodes, so they can be isolated.

Do you think it is valuable? Do you have any advice?

Thanks
Julian
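To make the proposal concrete, a rough sketch of how the per-coordinator setup might look. Only the CREATE NODE statement reflects existing Postgres-XC DDL; the slave-related options, the GUC name, and the node/host names are hypothetical, taken from the proposal above rather than from any released version.

  -- Existing Postgres-XC DDL: register a data node master on a coordinator
  -- (node and host names are made up for illustration).
  CREATE NODE dn1 WITH (TYPE = 'datanode', HOST = 'dn1-master', PORT = 15432);

  -- Hypothetical extension per point 2: attach slave connection info to pgxc_node.
  -- ALTER NODE dn1 WITH (SLAVE1_HOST = 'dn1-slave', SLAVE1_PORT = 15432);

  -- Hypothetical GUC per point 1: enable read/write separation on a reporting coordinator.
  -- SET read_write_separation = on;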