From: Koichi S. <koi...@gm...> - 2011-02-22 08:04:10
Hi,

Please find my response inline...
----------
Koichi Suzuki

2011/2/22 Suzuki Hironobu <hir...@in...>:
> Thanks for your quick response.
>
>>> I'd like to know more details.
>>> (1) How is the data replicated between gtm and the gtm-standby(s)?
>>
>> It is now under development (too early to publish) and is similar to
>> streaming replication. GTM-ACT sends its updates (each transaction
>> status change, typically) to SBY. GTM-SBY can connect to GTM-ACT at
>> any time. GTM-ACT accepts the GTM-SBY connection and begins to ship
>> each transaction status update.
>>
> That should show good performance.
>
> Is the data transmission between gtm instances synchronous or
> asynchronous? And if it is asynchronous, is there a mechanism that
> resynchronizes the gtm-standbys when the master gtm crashes?

Data transmission is synchronous. GTM-Standby has threads which
correspond to the GTM-ACT threads. Because GTM-ACT threads correspond
to GTM-Proxy worker threads, GTM-ACT basically just copies each message
from GTM-Proxy to GTM-Standby, and GTM-Standby can recreate the
transaction status. In fact, GTM-Standby's binary will be the same as
GTM-ACT's.

So far, we don't back up the GTM-SBY status to stable storage. When
GTM-SBY crashes, we can connect another GTM-SBY to GTM-ACT. Cascaded
GTM-SBY and multiple GTM-SBY could be options for the future.

>>> (2) What failures do you assume? Only crash failures, or more?
>>> And what kind of failure detector does xcwatcher have?
>>> Theoretically, is it eventually strong? Eventually perfect?
>>
>> Hardware crashes and software crashes. Xcwatcher is a bit traditional:
>> network communication and process monitoring. The difference is that
>> any Postgres-XC component
>> (GTM/GTM-SBY/GTM-PXY/Coordinator/Datanode/Mirror) can report the
>> failure of other components it communicates with to xcwatcher through
>> XCM. Xcwatcher distributes this update (not only failures, but also
>> start/stop, promotion to ACT, etc.) to all the servers. On the other
>> hand, XCM evaluates the failure and advises xcwatcher what to do, as
>> written in the document.
>>
> I understand.
> I wanted to know how you think about failures, because the words
> "timeout" and/or "omission" are not found in the document.

Component (not server hardware) failure is detected by issuing a
"monitoring" command and checking the response. Postgres-XC components
are also allowed to report other components' failures to xcwatcher
through the XCM module.

Hardware failure detection is still primitive; it is based on a
response timeout. As I wrote, we may want to combine this with other
hardware monitoring provided by general-purpose HA middleware.

>>> (3) As a possibility, I think the Postgres-XC components could be
>>> partitioned into several groups by a network failure etc. Is that correct?
>>
>> When xcwatcher fails to monitor servers or components, it treats them
>> as failed and tries to stop them, just in case. When not enough
>> components are recognized by xcwatcher, it will stop the whole
>> cluster to enforce data integrity among the datanodes/mirrors.
>>
>> All these actions can be monitored through the xcwatcher log by
>> external tools.
>>
>> When xcwatcher itself fails, Postgres-XC can continue to run for a
>> while. Operators can restart xcwatcher, even on a different server.
>> In that case, xcwatcher collects the current cluster status through
>> XCM (from all the servers involved) to rebuild the global cluster
>> status.
>>
>> I think they can be combined with the hardware monitoring capabilities
>> provided by many HA middleware packages.
>>
> I was misunderstanding it a little; I thought we could construct a
> dependable system with XCM alone.
>
> I think configuring the HA middleware may become difficult when it has
> to handle difficult situations (for example, intermittent network
> failures). However, it might be easy when assuming (perfect) crash
> failures only.

In the case of intermittent failures, typically in the network, we can
expect many things. Some transactions may fail while others succeed. I
think 2PC will maintain integrity across the whole database, and I
believe this is what we should enforce.

One thing we should be careful about, for example, is the case where
different coordinators observe different (intermittent) failures for
different datanode mirrors. We have to be careful to keep the "primary"
datanode mirror consistent to maintain data integrity between mirrors.

> I am looking forward to trying XCM.

It's very simple. Enjoy.
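To make the synchronous shipping described above concrete, here is a
minimal C sketch; it is not the actual GTM code, and the function name,
MSG_ACK value, and use of a raw socket descriptor per standby are
assumptions made for illustration. The active GTM copies one
status-change message to the standby and blocks for an acknowledgement
before answering the proxy.

#include <stdint.h>
#include <unistd.h>

#define MSG_ACK 0x01    /* invented acknowledgement byte */

/* Forward one serialized status-change message to the standby and block
 * until it acknowledges, so the standby never lags behind the active GTM. */
int
gtm_forward_to_standby(int standby_fd, const void *msg, size_t len)
{
    uint8_t ack;

    if (write(standby_fd, msg, len) != (ssize_t) len)
        return -1;          /* standby connection lost or short write */

    if (read(standby_fd, &ack, 1) != 1 || ack != MSG_ACK)
        return -1;          /* no (or unexpected) acknowledgement */

    return 0;               /* now safe to answer the GTM-Proxy */
}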
From: Suzuki H. <hir...@in...> - 2011-02-22 05:35:19
Thanks for your quick response.

>> I'd like to know more details.
>> (1) How is the data replicated between gtm and the gtm-standby(s)?
>
> It is now under development (too early to publish) and is similar to
> streaming replication. GTM-ACT sends its updates (each transaction
> status change, typically) to SBY. GTM-SBY can connect to GTM-ACT at
> any time. GTM-ACT accepts the GTM-SBY connection and begins to ship
> each transaction status update.
>

That should show good performance.

Is the data transmission between gtm instances synchronous or
asynchronous? And if it is asynchronous, is there a mechanism that
resynchronizes the gtm-standbys when the master gtm crashes?

>> (2) What failures do you assume? Only crash failures, or more?
>> And what kind of failure detector does xcwatcher have?
>> Theoretically, is it eventually strong? Eventually perfect?
>
> Hardware crashes and software crashes. Xcwatcher is a bit traditional:
> network communication and process monitoring. The difference is that
> any Postgres-XC component
> (GTM/GTM-SBY/GTM-PXY/Coordinator/Datanode/Mirror) can report the
> failure of other components it communicates with to xcwatcher through
> XCM. Xcwatcher distributes this update (not only failures, but also
> start/stop, promotion to ACT, etc.) to all the servers. On the other
> hand, XCM evaluates the failure and advises xcwatcher what to do, as
> written in the document.
>

I understand.
I wanted to know how you think about failures, because the words
"timeout" and/or "omission" are not found in the document.

>> (3) As a possibility, I think the Postgres-XC components could be
>> partitioned into several groups by a network failure etc. Is that correct?
>
> When xcwatcher fails to monitor servers or components, it treats them
> as failed and tries to stop them, just in case. When not enough
> components are recognized by xcwatcher, it will stop the whole
> cluster to enforce data integrity among the datanodes/mirrors.
>
> All these actions can be monitored through the xcwatcher log by
> external tools.
>
> When xcwatcher itself fails, Postgres-XC can continue to run for a
> while. Operators can restart xcwatcher, even on a different server.
> In that case, xcwatcher collects the current cluster status through
> XCM (from all the servers involved) to rebuild the global cluster
> status.
>
> I think they can be combined with the hardware monitoring capabilities
> provided by many HA middleware packages.
>

I was misunderstanding it a little; I thought we could construct a
dependable system with XCM alone.

I think configuring the HA middleware may become difficult when it has
to handle difficult situations (for example, intermittent network
failures). However, it might be easy when assuming (perfect) crash
failures only.

I am looking forward to trying XCM.
From: Michael P. <mic...@gm...> - 2011-02-22 04:07:23
Hi,

Here is a little bit of feedback about the rule crash. I fixed an issue
I found with rules myself this morning, so based on that I ran a couple
of tests with your patch.

1) Case "do instead nothing": works well

dbt1=# create table aa (a int, b int);
CREATE TABLE
dbt1=# create table bb (a int, b int) distribute by replication;
CREATE TABLE
dbt1=# create rule aa_ins as on insert to aa do instead nothing;
CREATE RULE
dbt1=# insert into aa values (1,2),(2,3);
INSERT 0 0
dbt1=# select * from bb;
 a | b
---+---
(0 rows)
dbt1=# select * from aa;
 a | b
---+---
(0 rows)

This case works well.

2) With an insert rule: "do also"

dbt1=# create table aa (a int, b int);
CREATE TABLE
dbt1=# create table bb (a int, b int) distribute by replication;
CREATE TABLE
dbt1=# create rule bb_ins as on insert to aa do also insert into bb values (new.a,new.b);
CREATE RULE
dbt1=# insert into aa values (1,2),(2,3);
dbt1=# execute direct on node 1 'select * from aa';
 a | b
---+---
 1 | 2
 1 | 2
 2 | 3
 1 | 2
 2 | 3
(5 rows)
dbt1=# execute direct on node 2 'select * from aa';
 a | b
---+---
 2 | 3
 1 | 2
 2 | 3
 1 | 2
 2 | 3
(5 rows)

It looks like the query is not being run on the right table. In
RewriteInsertStmt, only one piece of locator information is used when
rewriting the query: only the locator information of the table the rule
is applied to is taken into account. For example, in my case the
queries are rewritten only for table aa and not for table bb. It may be
possible to also take into account the table bb defined in the rule
when building the lists of values. If others have ideas about how this
could be done smoothly, they are welcome.

I think you should modify RewriteInsertStmt to also take into account
the rules that have been fired on this query. I suppose this
information is visible in the parse tree, as it works well for a single
INSERT value.

I attach a modified version of the patch you sent. It does exactly the
same thing as your first version.

Regards,
--
Michael Paquier
http://michaelpq.users.sourceforge.net
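For illustration only, a sketch of the per-relation lookup suggested
above. Every name here (QueryInfo, get_locator_for, distribute_values)
is hypothetical and not part of the Postgres-XC source; the point is
simply that each product query produced by the rewriter should be
distributed using its own target table's locator information rather
than that of the table the rule was defined on.

typedef struct QueryInfo
{
    int     target_relid;   /* table this product query inserts into */
    /* ... values lists, distribution column, etc. ... */
} QueryInfo;

/* Hypothetical helpers, standing in for whatever the real rewriter uses. */
extern void *get_locator_for(int relid);
extern void  distribute_values(QueryInfo *q, void *locator);

void
rewrite_insert_products(QueryInfo **products, int nproducts)
{
    for (int i = 0; i < nproducts; i++)
    {
        /* Look up the distribution info per product query, so the INSERT
         * fired on the hash-distributed table (aa) and the rule's
         * replicated target (bb) are each routed by their own rules. */
        void *locator = get_locator_for(products[i]->target_relid);

        distribute_values(products[i], locator);
    }
}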
From: Koichi S. <ko...@in...> - 2011-02-22 00:24:28
Thanks for the quick response.

(2011-02-22 06:50), Suzuki Hironobu wrote:
> Hi,
>
>> The XCM module has been added to Postgres-XC (the ha_support branch so
>> far). I added the following file to the sourceforge development web site.
>>
>> XCM_Module_Document_20110221.pdf
>
> I have finished reading this document.
> I think that XCM is a great idea.
>
>> Misc is created to store temporary materials intended to be a part of
>> further releases.
>
> I'd like to know more details.
> (1) How is the data replicated between gtm and the gtm-standby(s)?

It is now under development (too early to publish) and is similar to
streaming replication. GTM-ACT sends its updates (each transaction
status change, typically) to SBY. GTM-SBY can connect to GTM-ACT at any
time. GTM-ACT accepts the GTM-SBY connection and begins to ship each
transaction status update.

> (2) What failures do you assume? Only crash failures, or more?
> And what kind of failure detector does xcwatcher have?
> Theoretically, is it eventually strong? Eventually perfect?

Hardware crashes and software crashes. Xcwatcher is a bit traditional:
network communication and process monitoring. The difference is that
any Postgres-XC component
(GTM/GTM-SBY/GTM-PXY/Coordinator/Datanode/Mirror) can report the
failure of other components it communicates with to xcwatcher through
XCM. Xcwatcher distributes this update (not only failures, but also
start/stop, promotion to ACT, etc.) to all the servers. On the other
hand, XCM evaluates the failure and advises xcwatcher what to do, as
written in the document.

> (3) As a possibility, I think the Postgres-XC components could be
> partitioned into several groups by a network failure etc. Is that correct?

When xcwatcher fails to monitor servers or components, it treats them
as failed and tries to stop them, just in case. When not enough
components are recognized by xcwatcher, it will stop the whole cluster
to enforce data integrity among the datanodes/mirrors.

All these actions can be monitored through the xcwatcher log by
external tools.

When xcwatcher itself fails, Postgres-XC can continue to run for a
while. Operators can restart xcwatcher, even on a different server. In
that case, xcwatcher collects the current cluster status through XCM
(from all the servers involved) to rebuild the global cluster status.

I think they can be combined with the hardware monitoring capabilities
provided by many HA middleware packages.

Thank you very much for your interest and involvement.
---
Koichi

> ----
> I cannot attend the conference on Friday.
> Please let me know if there is time.
>
> Regards,
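A minimal sketch of the kind of monitoring loop described in this
thread: xcwatcher issues a "monitoring" command to each component,
treats a missing or late response as a failure, and reports it through
XCM so the component can be stopped "just in case". This is
hypothetical code, not the real xcwatcher; the helper names, the
timeout value, and the integer component identifiers are all
assumptions.

#include <stdbool.h>

#define MONITOR_TIMEOUT_SEC 5   /* assumed value, for illustration only */

/* Hypothetical helpers; the real xcwatcher/XCM interfaces may differ. */
extern bool send_monitor_cmd(int component_id);
extern bool wait_response(int component_id, int timeout_sec);
extern void xcm_report_failure(int component_id);

/* Poll every registered component; anything that does not answer the
 * "monitoring" command within the timeout is reported as failed so that
 * xcwatcher can stop it and tell the other servers through XCM. */
void
monitor_components(const int *component_ids, int ncomponents)
{
    for (int i = 0; i < ncomponents; i++)
    {
        int id = component_ids[i];

        if (!send_monitor_cmd(id) || !wait_response(id, MONITOR_TIMEOUT_SEC))
            xcm_report_failure(id);
    }
}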