From: xiong w. <wan...@gm...> - 2011-02-28 06:15:43
Hi Michael,

The enclosures include the following files:
1. The patch that fixes the bugs about rules you reported.
2. The test cases for multi-row INSERT.
3. The expected output files for insert.sql.
4. The test cases for rules on multi-row INSERT.

There are still bugs with rules in PGXC, therefore I only attached the test cases on rules without the corresponding expected-output file.

Regards,
Benny

2011/2/22 Michael Paquier <mic...@gm...>:
> Hi,
>
> Here is a little bit of feedback about the rule crash.
> I fixed an issue I found with rules myself this morning.
>
> Based on that, I ran a couple of tests with your patch.
>
> 1) "do instead nothing" case: works well
> dbt1=# create table aa (a int, b int);
> CREATE TABLE
> dbt1=# create table bb (a int, b int) distribute by replication;
> CREATE TABLE
> dbt1=# create rule aa_ins as on insert to aa do instead nothing;
> CREATE RULE
> dbt1=# insert into aa values (1,2),(2,3);
> INSERT 0 0
> dbt1=# select * from bb;
>  a | b
> ---+---
> (0 rows)
> dbt1=# select * from aa;
>  a | b
> ---+---
> (0 rows)
> This case works well.
>
> 2) With a "do also" insert rule:
> dbt1=# create table aa (a int, b int);
> CREATE TABLE
> dbt1=# create table bb (a int, b int) distribute by replication;
> CREATE TABLE
> dbt1=# create rule bb_ins as on insert to aa do also insert into bb values (new.a,new.b);
> CREATE RULE
> dbt1=# insert into aa values (1,2),(2,3);
> dbt1=# execute direct on node 1 'select * from aa';
>  a | b
> ---+---
>  1 | 2
>  1 | 2
>  2 | 3
>  1 | 2
>  2 | 3
> (5 rows)
> dbt1=# execute direct on node 2 'select * from aa';
>  a | b
> ---+---
>  2 | 3
>  1 | 2
>  2 | 3
>  1 | 2
>  2 | 3
> (5 rows)
>
> It looks like the query is not run against the right table.
> In RewriteInsertStmt, only one piece of locator information is used when
> rewriting the query: only the locator information of the table the rule is
> applied to is taken into account.
>
> For example, in my case queries are rewritten only for table aa, not for
> table bb. It may be possible to also take into account the table bb defined
> in the rule when building the lists of values.
>
> If anyone has ideas about how this could be done smoothly, they are welcome.
>
> I think you should modify RewriteInsertStmt to also take into account the
> rules that have been fired on this query. I suppose this information is
> visible in the parse tree, since it works well for a single INSERT value.
>
> I attach a modified version of the patch you sent.
> It does exactly the same thing as your first version.
>
> Regards,
> --
> Michael Paquier
> http://michaelpq.users.sourceforge.net
From: xabc1000 <xab...@16...> - 2011-02-25 01:08:04
Hi,

When a table with a foreign key is created, a crash is encountered. After debugging, I found that the pointer "cxt->rel" in the function "checkLocalFKConstraints" was NULL.

What is the purpose of "checkLocalFKConstraints", and what is the purpose of the following code?

    foreach(attritem, fkconstraint->fk_attrs)
    {
        char *attrname = (char *) strVal(lfirst(attritem));

        if (strcmp(cxt->rel->rd_locator_info->partAttrName, attrname) == 0)
        {
            /* Found the ordinal position in constraint */
            break;
        }
        pos++;
    }

If someone can help me, I would be very grateful.

Yours,
xcix

2011-02-25
xabc1000
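For illustration, the crash described above is consistent with a plain NULL-pointer dereference: cxt->rel is used without a check. Below is a minimal, self-contained C sketch of the kind of defensive guard that would avoid it. This is not the actual Postgres-XC source; the type and field names (rel, rd_locator_info, partAttrName) are stand-ins that merely mirror the snippet quoted in the mail.

    #include <stdio.h>
    #include <string.h>

    /* Stand-in types mirroring the names in the quoted snippet; the real
     * Postgres-XC structures are different and much larger. */
    typedef struct RelationLocInfo { const char *partAttrName; } RelationLocInfo;
    typedef struct Relation        { RelationLocInfo *rd_locator_info; } Relation;
    typedef struct CreateStmtContext { Relation *rel; } CreateStmtContext;

    /* Return the 0-based position of the FK attribute matching the
     * distribution (partitioning) column, or -1 if there is nothing to check. */
    static int
    find_partition_attr_pos(CreateStmtContext *cxt, const char **fk_attrs, int nattrs)
    {
        int pos;

        /* Guard: if the referencing relation or its locator info is not
         * available, there is no distribution column to compare against. */
        if (cxt == NULL || cxt->rel == NULL ||
            cxt->rel->rd_locator_info == NULL ||
            cxt->rel->rd_locator_info->partAttrName == NULL)
            return -1;

        for (pos = 0; pos < nattrs; pos++)
            if (strcmp(cxt->rel->rd_locator_info->partAttrName, fk_attrs[pos]) == 0)
                return pos;    /* found the ordinal position in the constraint */

        return -1;
    }

    int
    main(void)
    {
        CreateStmtContext cxt = { NULL };          /* the situation hit in the reported crash */
        const char *fk_attrs[] = { "a", "b" };

        printf("pos = %d\n", find_partition_attr_pos(&cxt, fk_attrs, 2));   /* -1, no crash */
        return 0;
    }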
From: Suzuki H. <hir...@in...> - 2011-02-24 14:58:33
Thanks for your kind answer.

> I have not yet studied the latest docs for GTM Standby in detail, but
> I think something could be done if message ids are assigned, and
> recent messages are buffered, such that when a GTM-Standby is
> promoted, it can compare the last message id with the others and they
> can sync up any missing messages. Perhaps this process can be done in
> such a way that cluster-wide it is known in which order the
> GTM-standbys will be contacted (ignoring the fact that broadcasting
> with acknowledgements could also be coded one day).

This is the answer I expected. I think this is a general solution: "... message ids are assigned, and recent messages are buffered ...". I have used the same idea in some systems I have built.

I asked because I could not find any API for this purpose in the XCM document. (Actually, Koichi-san said before, "gtm must move faster. I want to avoid context switches as much as possible, etc." I thought he was very concerned about speed, so I wanted to know which method the HA GTM uses.)

> Anyway, whatever the exact technique that is chosen, I do not think
> this is an insurmountable issue.

Yes, I think so. It's not difficult.

> Also, for more background, code is currently being written such that
> even if there is no GTM standby, a restarted GTM can rebuild its state
> from information from the other nodes, but failover could be quicker
> with a dedicated GTM standby.

Great. I think that is a big technical challenge.

Thanks a lot.
From: Mason S. <mas...@gm...> - 2011-02-24 13:33:25
On Thu, Feb 24, 2011 at 2:15 AM, Suzuki Hironobu <hir...@in...> wrote:
> Thank you for your kind response.
>
>> We're assuring every message has been received by waiting for responses,
>> except in very few cases. One of them is reporting a failure to
>> xcwatcher. Here, because the failure will be reported from another
>> source sooner or later, we don't care whether each report reaches
>> xcwatcher. In a very critical case, xcwatcher will find no connection
>> to the monitoring agent, or the monitoring agent will detect its local
>> component failure. When we use UDP, we always have backups and we
>> limit this use so that it does not affect database integrity within
>> the cluster.
>
> I understand that xcwatchers are able to find almost all failures.
>
>> We're assuring every message has been received by waiting for responses,
>> except in very few cases.
>
> I'm interested in GTM, because it is the SPOF of XC.
> I'm especially interested in these "very few cases".
>
> My questions were very simple. In the case below:
>>> For example:
>>> (1) gtm-standby1 receives a message from gtm-act,
>>> (2) gtm-act crashes!
>>> (3) gtm-standby2 never receives it.
>>> This is a typical case, and there are many similar cases.
>
> First: I think there is a possibility that this happens, though it is very
> rare. Is my thought correct?
>
> Second: if my thought is correct, can gtm-standby2 receive the missing
> message after xcwatcher detects the failure?
> Or, if my thought is not correct, how are all messages delivered perfectly?
>
> The most fundamental question is: how is the consistency of the data kept
> among gtm-act and two or more gtm-standbys?
> (I think that the consistency of data among GTMs is a necessary condition
> for an HA GTM.)

I have not yet studied the latest docs for GTM Standby in detail, but I think something could be done if message ids are assigned, and recent messages are buffered, such that when a GTM-Standby is promoted, it can compare the last message id with the others and they can sync up any missing messages. Perhaps this process can be done in such a way that cluster-wide it is known in which order the GTM-standbys will be contacted (ignoring the fact that broadcasting with acknowledgements could also be coded one day).

Also, for more background, code is currently being written such that even if there is no GTM standby, a restarted GTM can rebuild its state from information from the other nodes, but failover could be quicker with a dedicated GTM standby.

Also, GTM can currently save its state when shut down gracefully. Such state info could theoretically be sent over to the promoted standby from the other ones if there is a problem. Similarly, this info could be sent over when spinning up a new GTM Standby dynamically.

Anyway, whatever the exact technique that is chosen, I do not think this is an insurmountable issue.

Regards,

Mason

> Of course, if only one GTM-standby runs, the problem is easy.
>
> Regards,
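To make the idea above concrete, here is a minimal, hypothetical C sketch of "message ids are assigned, and recent messages are buffered": a monotonically increasing id plus a small ring buffer, so that a promoted standby can report its last applied id and receive only the missing tail. This is not Postgres-XC code; all names and sizes are illustrative, and a real implementation would of course carry real GTM state rather than strings.

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 1024                 /* how many recent messages are retained */
    #define MSG_LEN   128

    typedef struct Msg { uint64_t id; char payload[MSG_LEN]; } Msg;

    typedef struct MsgRing {
        Msg      slots[RING_SIZE];
        uint64_t next_id;                  /* id the next message will receive */
    } MsgRing;

    /* Record a message under the next id and return that id. */
    static uint64_t
    ring_append(MsgRing *ring, const char *payload)
    {
        Msg *slot = &ring->slots[ring->next_id % RING_SIZE];
        slot->id = ring->next_id;
        snprintf(slot->payload, MSG_LEN, "%s", payload);
        return ring->next_id++;
    }

    /* Replay every buffered message newer than last_applied (the id a promoted
     * standby reports).  Returns the number of messages resent, or -1 if the gap
     * is larger than the buffer and a full state rebuild would be needed. */
    static int
    ring_replay_after(const MsgRing *ring, uint64_t last_applied,
                      void (*send)(const Msg *))
    {
        uint64_t oldest = ring->next_id > RING_SIZE ? ring->next_id - RING_SIZE : 0;
        int      count = 0;

        if (last_applied + 1 < oldest)
            return -1;                     /* missing messages already overwritten */

        for (uint64_t id = last_applied + 1; id < ring->next_id; id++, count++)
            send(&ring->slots[id % RING_SIZE]);
        return count;
    }

    static void
    print_msg(const Msg *m)
    {
        printf("resend #%llu: %s\n", (unsigned long long) m->id, m->payload);
    }

    int
    main(void)
    {
        MsgRing ring = { .next_id = 0 };

        ring_append(&ring, "begin txn 100");
        ring_append(&ring, "commit txn 100");
        ring_append(&ring, "begin txn 101");

        /* Standby2 saw only message 0 before gtm-act died: messages 1 and 2 are resent. */
        ring_replay_after(&ring, 0, print_msg);
        return 0;
    }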
From: Suzuki H. <hir...@in...> - 2011-02-24 07:14:51
Thank you for your kind response.

> We're assuring every message has been received by waiting for responses,
> except in very few cases. One of them is reporting a failure to
> xcwatcher. Here, because the failure will be reported from another
> source sooner or later, we don't care whether each report reaches
> xcwatcher. In a very critical case, xcwatcher will find no connection
> to the monitoring agent, or the monitoring agent will detect its local
> component failure. When we use UDP, we always have backups and we
> limit this use so that it does not affect database integrity within
> the cluster.

I understand that xcwatchers are able to find almost all failures.

> We're assuring every message has been received by waiting for responses,
> except in very few cases.

I'm interested in GTM, because it is the SPOF of XC. I'm especially interested in these "very few cases".

My questions were very simple. In the case below:

>> For example:
>> (1) gtm-standby1 receives a message from gtm-act,
>> (2) gtm-act crashes!
>> (3) gtm-standby2 never receives it.
>> This is a typical case, and there are many similar cases.

First: I think there is a possibility that this happens, though it is very rare. Is my thought correct?

Second: if my thought is correct, can gtm-standby2 receive the missing message after xcwatcher detects the failure? Or, if my thought is not correct, how are all messages delivered perfectly?

The most fundamental question is: how is the consistency of the data kept among gtm-act and two or more gtm-standbys? (I think that the consistency of data among GTMs is a necessary condition for an HA GTM.)

Of course, if only one GTM-standby runs, the problem is easy.

Regards,
From: Koichi S. <koi...@gm...> - 2011-02-23 08:05:41
Hi,

----------
Koichi Suzuki

2011/2/23 Suzuki Hironobu <hir...@in...>:
> Thank you, and this is the final question, maybe.
>
>>>> Now it is under development (too early to publish) and is similar
>>>> to streaming replication. GTM-ACT sends its update (each
>
> At least version 9.0's SR is asynchronous.
> By the way,

Yes. There is a plan to make it synchronous, and we are waiting for it. If it doesn't come early, we may make a local extension.

>> Data transmission is synchronous. GTM-Standby has threads which
>> correspond to each GTM-ACT thread. Because GTM-ACT threads
>> correspond to GTM-Proxy worker threads, basically GTM-ACT just copies
>> messages from GTM-Proxy to GTM-Standby, and GTM-Standby can recreate
>> the transaction status.
>
> I heard that Postgres-XC does not use a reliable communication protocol.
> Then, under very critical conditions when gtm-act (or another component)
> crashes, I think there is a possibility that a gtm-standby's binary does
> not correspond. (This is a critical timing issue.)

We're assuring every message has been received by waiting for responses, except in very few cases. One of them is reporting a failure to xcwatcher. Here, because the failure will be reported from another source sooner or later, we don't care whether each report reaches xcwatcher. In a very critical case, xcwatcher will find no connection to the monitoring agent, or the monitoring agent will detect its local component failure. When we use UDP, we always have backups and we limit this use so that it does not affect database integrity within the cluster.

Communication among GTM, GTM-Proxy, GTM-Standby, Coordinators and Datanodes/Mirrors is reliable.

> For example:
> (1) gtm-standby1 receives a message from gtm-act,
> (2) gtm-act crashes!
> (3) gtm-standby2 never receives it.
> This is a typical case, and there are many similar cases.
> Therefore, I think that data consistency among GTMs (gtm-act and
> gtm-standbys) is not guaranteed.
> Is that true? Or are there any mechanisms to avoid this problem?
>
>> In the case of intermittent failure, typically in the network, we can
>> expect many things.
>>
>> Some transactions may fail while others are successful. I think
>> 2PC will maintain database integrity in the whole database. I
>> believe this is what we should enforce. One thing we should be
>> careful about, for example, is the case where different coordinators
>> observe different (intermittent) failures for different datanode
>> mirrors. We have to be careful to keep the "primary" datanode mirror
>> consistent to maintain data integrity between mirrors.
>
> I wish you success.

Thank you;
---
Koichi Suzuki
From: Suzuki H. <hir...@in...> - 2011-02-23 06:24:47
Thank you, and this is the final question, maybe.

>>> Now it is under development (too early to publish) and is similar
>>> to streaming replication. GTM-ACT sends its update (each

At least version 9.0's SR is asynchronous.

By the way,

> Data transmission is synchronous. GTM-Standby has threads which
> correspond to each GTM-ACT thread. Because GTM-ACT threads
> correspond to GTM-Proxy worker threads, basically GTM-ACT just copies
> messages from GTM-Proxy to GTM-Standby, and GTM-Standby can recreate
> the transaction status.

I heard that Postgres-XC does not use a reliable communication protocol. Then, under very critical conditions when gtm-act (or another component) crashes, I think there is a possibility that a gtm-standby's binary does not correspond. (This is a critical timing issue.)

For example:
(1) gtm-standby1 receives a message from gtm-act,
(2) gtm-act crashes!
(3) gtm-standby2 never receives it.

This is a typical case, and there are many similar cases. Therefore, I think that data consistency among GTMs (gtm-act and gtm-standbys) is not guaranteed. Is that true? Or are there any mechanisms to avoid this problem?

> In the case of intermittent failure, typically in the network, we can
> expect many things.
>
> Some transactions may fail while others are successful. I think
> 2PC will maintain database integrity in the whole database. I
> believe this is what we should enforce. One thing we should be
> careful about, for example, is the case where different coordinators
> observe different (intermittent) failures for different datanode
> mirrors. We have to be careful to keep the "primary" datanode mirror
> consistent to maintain data integrity between mirrors.

I wish you success.
From: Michael P. <mic...@gm...> - 2011-02-23 04:07:41
On Wed, Feb 23, 2011 at 12:59 PM, xiong wang <wan...@gm...> wrote:
> Dears,
> There's an error when I drop database.
>
> postgres=# create database test;
> CREATE DATABASE
> postgres=# drop database test;
> ERROR:  Clean connections not completed
>
> Regards,
> Benny

I am able to reproduce that. This error seems to occur only when you create and drop a database while connected to the database postgres.

-- test
You are now connected to database "template1".
template1=# create database dbt1;
CREATE DATABASE
template1=# drop database dbt1;
DROP DATABASE
template1=# create database dbt1;
CREATE DATABASE
template1=# drop database dbt1;
DROP DATABASE

--
Michael Paquier
http://michaelpq.users.sourceforge.net
From: xiong w. <wan...@gm...> - 2011-02-23 03:59:35
Dears,

There's an error when I drop database.

postgres=# create database test;
CREATE DATABASE
postgres=# drop database test;
ERROR:  Clean connections not completed

Regards,
Benny
From: Koichi S. <koi...@gm...> - 2011-02-22 08:04:10
Hi,

Please find my responses inline.

----------
Koichi Suzuki

2011/2/22 Suzuki Hironobu <hir...@in...>:
> Thanks for your quick response.
>
>>> I'd like to know more details.
>>> (1) How is the data replicated among gtm and gtm-standby(s)?
>>
>> Now it is under development (too early to publish) and is similar
>> to streaming replication. GTM-ACT sends its updates (each
>> transaction status change, typically) to the SBY. GTM-SBY can connect to
>> GTM-ACT anytime. GTM-ACT accepts the GTM-SBY connection and begins
>> to ship each transaction status update.
>
> It should show good performance.
>
> Is data transmission among GTMs synchronous or asynchronous?
> And if it's asynchronous, is there a mechanism that synchronizes the
> gtm-standbys when the master GTM crashes?

Data transmission is synchronous. GTM-Standby has threads which correspond to each GTM-ACT thread. Because GTM-ACT threads correspond to GTM-Proxy worker threads, basically GTM-ACT just copies messages from GTM-Proxy to GTM-Standby, and GTM-Standby can recreate the transaction status. In fact, GTM-Standby's binary will be the same as GTM-ACT's.

So far, we don't back up GTM-SBY status to stable storage. When a GTM-SBY crashes, we can get another GTM-SBY connected to GTM-ACT. Cascaded GTM-SBY and multiple GTM-SBYs could be options for the future.

>>> (2) What failures do you assume? Only crash failures, or more?
>>> And what kind of failure detector does xcwatcher have?
>>> Theoretically, is it eventually strong? Eventually perfect?
>>
>> Hardware crash and software crash. Xcwatcher is a bit traditional:
>> network communication and process monitoring. The difference is that any
>> Postgres-XC component
>> (GTM/GTM-SBY/GTM-PXY/Coordinator/Datanode/Mirror) can report the
>> failure of other components it communicates with to xcwatcher through
>> XCM. Xcwatcher distributes this update (not only failures, but also
>> start/stop, promotion to ACT, etc.) to all the servers. On the other
>> hand, XCM evaluates the failure and advises xcwatcher what to do, as
>> written in the document.
>
> I understand.
> I wanted to know how you think about failures,
> because the words "timeout" and/or "omission" are not found in the document.

Component (not server hardware) failure is detected by issuing a "monitoring" command and checking the response. Postgres-XC components are allowed to report other components' failures to xcwatcher through the XCM module. Hardware failure detection is still primitive: it is based on response timeouts. As I wrote, we may want to combine this with other hardware monitoring provided by general-purpose HA middleware.

>>> (3) As a possibility, I think that the Postgres-XC components could
>>> divide into several partitions due to a network failure, etc. Is that
>>> correct?
>>
>> When xcwatcher fails to monitor servers or components, it treats them
>> as failed and tries to stop them just in case. When sufficient
>> components are not recognized by xcwatcher, it will stop the whole
>> cluster to enforce data integrity among datanodes/mirrors.
>>
>> All these actions can be monitored through the xcwatcher log by external
>> tools.
>>
>> When xcwatcher itself fails, Postgres-XC can continue to run for a
>> while. Operators can restart xcwatcher, even on a different server.
>> In this case, xcwatcher collects the current cluster status through XCM
>> (from all the servers involved) to rebuild the global cluster status.
>>
>> I think they can be combined with the hardware monitoring capability
>> provided by many HA middleware products.
>
> I was misunderstanding it a little.
> I thought that we could construct a dependable system with XCM alone.
>
> I think the setup of the HA middleware seems to become difficult in order
> to handle difficult situations (for example, intermittent network failures).
> However, it might be easy when assuming (perfect) crash failures only.

In the case of intermittent failure, typically in the network, we can expect many things.

Some transactions may fail while others are successful. I think 2PC will maintain database integrity in the whole database. I believe this is what we should enforce. One thing we should be careful about, for example, is the case where different coordinators observe different (intermittent) failures for different datanode mirrors. We have to be careful to keep the "primary" datanode mirror consistent to maintain data integrity between mirrors.

> I am looking forward to the trial of XCM.

It's very simple. Enjoy.
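A minimal sketch of the synchronous forwarding described above, under the assumption that the active GTM answers the proxy only after the standby has acknowledged the same message (one possible ordering; this is not the actual XC code, and the transport and per-worker threads are stubbed out):

    #include <stdbool.h>
    #include <stdio.h>

    typedef bool (*forward_fn)(const char *msg);   /* returns true once the standby acks */

    static bool standby_ok(const char *msg)   { printf("standby applied: %s\n", msg); return true; }
    static bool standby_down(const char *msg) { (void) msg; return false; }

    /* Handle one message coming from a GTM-Proxy worker: forward it to the
     * standby first, and only on acknowledgement apply it locally and answer
     * the proxy. */
    static bool
    handle_proxy_message(const char *msg, forward_fn forward_to_standby)
    {
        if (!forward_to_standby(msg))
            return false;                          /* standby did not ack: do not answer the proxy */

        printf("active applied:  %s\n", msg);      /* local transaction-status update */
        printf("ack sent to proxy for: %s\n", msg);
        return true;
    }

    int
    main(void)
    {
        handle_proxy_message("assign GXID 4711", standby_ok);

        if (!handle_proxy_message("assign GXID 4712", standby_down))
            printf("standby unreachable: report the failure (e.g. to xcwatcher) instead of acking\n");
        return 0;
    }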
From: Suzuki H. <hir...@in...> - 2011-02-22 05:35:19
Thanks for your quick response.

>> I'd like to know more details.
>> (1) How is the data replicated among gtm and gtm-standby(s)?
>
> Now it is under development (too early to publish) and is similar
> to streaming replication. GTM-ACT sends its updates (each
> transaction status change, typically) to the SBY. GTM-SBY can connect to
> GTM-ACT anytime. GTM-ACT accepts the GTM-SBY connection and begins
> to ship each transaction status update.

It should show good performance.

Is data transmission among GTMs synchronous or asynchronous? And if it's asynchronous, is there a mechanism that synchronizes the gtm-standbys when the master GTM crashes?

>> (2) What failures do you assume? Only crash failures, or more?
>> And what kind of failure detector does xcwatcher have?
>> Theoretically, is it eventually strong? Eventually perfect?
>
> Hardware crash and software crash. Xcwatcher is a bit traditional:
> network communication and process monitoring. The difference is that any
> Postgres-XC component
> (GTM/GTM-SBY/GTM-PXY/Coordinator/Datanode/Mirror) can report the
> failure of other components it communicates with to xcwatcher through
> XCM. Xcwatcher distributes this update (not only failures, but also
> start/stop, promotion to ACT, etc.) to all the servers. On the other
> hand, XCM evaluates the failure and advises xcwatcher what to do, as
> written in the document.

I understand. I wanted to know how you think about failures, because the words "timeout" and/or "omission" are not found in the document.

>> (3) As a possibility, I think that the Postgres-XC components could
>> divide into several partitions due to a network failure, etc. Is that
>> correct?
>
> When xcwatcher fails to monitor servers or components, it treats them
> as failed and tries to stop them just in case. When sufficient
> components are not recognized by xcwatcher, it will stop the whole
> cluster to enforce data integrity among datanodes/mirrors.
>
> All these actions can be monitored through the xcwatcher log by external
> tools.
>
> When xcwatcher itself fails, Postgres-XC can continue to run for a
> while. Operators can restart xcwatcher, even on a different server.
> In this case, xcwatcher collects the current cluster status through XCM
> (from all the servers involved) to rebuild the global cluster status.
>
> I think they can be combined with the hardware monitoring capability
> provided by many HA middleware products.

I was misunderstanding it a little. I thought that we could construct a dependable system with XCM alone.

I think the setup of the HA middleware seems to become difficult in order to handle difficult situations (for example, intermittent network failures). However, it might be easy when assuming (perfect) crash failures only.

I am looking forward to the trial of XCM.
From: Michael P. <mic...@gm...> - 2011-02-22 04:07:23
Hi,

Here is a little bit of feedback about the rule crash. I fixed an issue I found with rules myself this morning.

Based on that, I ran a couple of tests with your patch.

1) "do instead nothing" case: works well

dbt1=# create table aa (a int, b int);
CREATE TABLE
dbt1=# create table bb (a int, b int) distribute by replication;
CREATE TABLE
dbt1=# create rule aa_ins as on insert to aa do instead nothing;
CREATE RULE
dbt1=# insert into aa values (1,2),(2,3);
INSERT 0 0
dbt1=# select * from bb;
 a | b
---+---
(0 rows)
dbt1=# select * from aa;
 a | b
---+---
(0 rows)

This case works well.

2) With a "do also" insert rule:

dbt1=# create table aa (a int, b int);
CREATE TABLE
dbt1=# create table bb (a int, b int) distribute by replication;
CREATE TABLE
dbt1=# create rule bb_ins as on insert to aa do also insert into bb values (new.a,new.b);
CREATE RULE
dbt1=# insert into aa values (1,2),(2,3);
dbt1=# execute direct on node 1 'select * from aa';
 a | b
---+---
 1 | 2
 1 | 2
 2 | 3
 1 | 2
 2 | 3
(5 rows)
dbt1=# execute direct on node 2 'select * from aa';
 a | b
---+---
 2 | 3
 1 | 2
 2 | 3
 1 | 2
 2 | 3
(5 rows)

It looks like the query is not run against the right table. In RewriteInsertStmt, only one piece of locator information is used when rewriting the query: only the locator information of the table the rule is applied to is taken into account.

For example, in my case queries are rewritten only for table aa, not for table bb. It may be possible to also take into account the table bb defined in the rule when building the lists of values.

If anyone has ideas about how this could be done smoothly, they are welcome.

I think you should modify RewriteInsertStmt to also take into account the rules that have been fired on this query. I suppose this information is visible in the parse tree, since it works well for a single INSERT value.

I attach a modified version of the patch you sent. It does exactly the same thing as your first version.

Regards,
--
Michael Paquier
http://michaelpq.users.sourceforge.net
From: Koichi S. <ko...@in...> - 2011-02-22 00:24:28
Thanks for the quick response.

(2011/02/22 06:50), Suzuki Hironobu wrote:
> Hi,
>
>> The XCM module has been added to Postgres-XC (ha_support branch so far).
>> I added the following file to the SourceForge development web site.
>>
>> XCM_Module_Document_20110221.pdf
>
> I finished reading this document now.
> I think that XCM is a great idea.
>
>> Misc is created to store temporary materials intended to be part of
>> further releases.
>
> I'd like to know more details.
> (1) How is the data replicated among gtm and gtm-standby(s)?

Now it is under development (too early to publish) and is similar to streaming replication. GTM-ACT sends its updates (each transaction status change, typically) to the SBY. GTM-SBY can connect to GTM-ACT anytime. GTM-ACT accepts the GTM-SBY connection and begins to ship each transaction status update.

> (2) What failures do you assume? Only crash failures, or more?
> And what kind of failure detector does xcwatcher have?
> Theoretically, is it eventually strong? Eventually perfect?

Hardware crash and software crash. Xcwatcher is a bit traditional: network communication and process monitoring. The difference is that any Postgres-XC component (GTM/GTM-SBY/GTM-PXY/Coordinator/Datanode/Mirror) can report the failure of other components it communicates with to xcwatcher through XCM. Xcwatcher distributes this update (not only failures, but also start/stop, promotion to ACT, etc.) to all the servers. On the other hand, XCM evaluates the failure and advises xcwatcher what to do, as written in the document.

> (3) As a possibility, I think that the Postgres-XC components could
> divide into several partitions due to a network failure, etc. Is that
> correct?

When xcwatcher fails to monitor servers or components, it treats them as failed and tries to stop them just in case. When sufficient components are not recognized by xcwatcher, it will stop the whole cluster to enforce data integrity among datanodes/mirrors.

All these actions can be monitored through the xcwatcher log by external tools.

When xcwatcher itself fails, Postgres-XC can continue to run for a while. Operators can restart xcwatcher, even on a different server. In this case, xcwatcher collects the current cluster status through XCM (from all the servers involved) to rebuild the global cluster status.

I think they can be combined with the hardware monitoring capability provided by many HA middleware products.

Thank you very much for your interest and involvement.
---
Koichi

> ----
> I cannot go to the conference on Friday.
> Please explain it if there is time.
>
> Regards,
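A minimal, hypothetical sketch of the two failure-detection paths described above — probe timeouts in the watcher loop, plus peer-reported failures arriving through XCM — with made-up names and thresholds; the real xcwatcher/XCM logic is more involved:

    #include <stdio.h>
    #include <time.h>

    #define MONITOR_TIMEOUT_SEC 5

    typedef enum { COMP_OK, COMP_FAILED } CompState;

    typedef struct Component {
        const char *name;
        time_t      last_response;     /* last time a "monitoring" probe was answered */
        CompState   state;
    } Component;

    /* Mark a component failed and (conceptually) broadcast the new status. */
    static void
    mark_failed(Component *c)
    {
        if (c->state != COMP_FAILED) {
            c->state = COMP_FAILED;
            printf("xcwatcher: %s marked FAILED, distributing status update\n", c->name);
        }
    }

    /* One pass of the watcher loop: anything silent for too long is treated as failed. */
    static void
    watcher_tick(Component *comps, int n, time_t now)
    {
        for (int i = 0; i < n; i++)
            if (comps[i].state == COMP_OK &&
                now - comps[i].last_response > MONITOR_TIMEOUT_SEC)
                mark_failed(&comps[i]);
    }

    int
    main(void)
    {
        time_t now = time(NULL);
        Component comps[] = {
            { "gtm-act",   now,      COMP_OK },
            { "datanode1", now - 30, COMP_OK },    /* silent for 30 seconds */
        };

        watcher_tick(comps, 2, now);               /* datanode1 exceeds the timeout */
        mark_failed(&comps[0]);                    /* a peer reported gtm-act failed via XCM */
        return 0;
    }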
From: Suzuki H. <hir...@in...> - 2011-02-21 21:49:59
Hi,

> The XCM module has been added to Postgres-XC (ha_support branch so far).
> I added the following file to the SourceForge development web site.
>
> XCM_Module_Document_20110221.pdf

I finished reading this document now. I think that XCM is a great idea.

> Misc is created to store temporary materials intended to be part of
> further releases.

I'd like to know more details.

(1) How is the data replicated among gtm and gtm-standby(s)?

(2) What failures do you assume? Only crash failures, or more? And what kind of failure detector does xcwatcher have? Theoretically, is it eventually strong? Eventually perfect?

(3) As a possibility, I think that the Postgres-XC components could divide into several partitions due to a network failure, etc. Is that correct?

----
I cannot go to the conference on Friday. Please explain it if there is time.

Regards,
From: Devrim G. <de...@gu...> - 2011-02-21 08:32:04
On Mon, 2011-02-21 at 17:07 +0900, Koichi Suzuki wrote:
> Does postgresql.org help to create mailing lists for such a very specific
> project?

Yeah. As I said, we moved the psycopg2 lists there 3-4 weeks ago. If no one objects, I can ask the website team for assistance.

Regards,
--
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org
Twitter: http://twitter.com/devrimgunduz
From: Koichi S. <ko...@in...> - 2011-02-21 08:06:01
Hi,

Nice to hear from you. Does postgresql.org help to create mailing lists for such a very specific project?

(2011/02/21 17:02), Devrim GÜNDÜZ wrote:
> On Mon, 2011-02-21 at 09:52 +0900, Michael Paquier wrote:
>> I found another mailing list system.
>> It looks like they do not intrusively insert advert messages at the end of
>> each email.
>> http://lists.freebsd.org/mailman/create
>>
>> So, why not move to this service?
>
> Why don't we move to postgresql.org? We recently moved the psycopg2 mailing
> lists there. It would be better than using an external service.
>
> Regards,
From: Devrim G. <de...@gu...> - 2011-02-21 08:02:43
On Mon, 2011-02-21 at 11:11 +0900, Koichi Suzuki wrote:
> Should we consider having postgres-xc.org?

You know I have it :)

--
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org
Twitter: http://twitter.com/devrimgunduz
From: Devrim G. <de...@gu...> - 2011-02-21 08:01:29
On Mon, 2011-02-21 at 09:52 +0900, Michael Paquier wrote:
> I found another mailing list system.
> It looks like they do not intrusively insert advert messages at the end of
> each email.
> http://lists.freebsd.org/mailman/create
>
> So, why not move to this service?

Why don't we move to postgresql.org? We recently moved the psycopg2 mailing lists there. It would be better than using an external service.

Regards,
--
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org
Twitter: http://twitter.com/devrimgunduz
From: Koichi S. <koi...@gm...> - 2011-02-21 06:26:40
Hi,

The XCM module has been added to Postgres-XC (ha_support branch so far). I added the following file to the SourceForge development web site:

XCM_Module_Document_20110221.pdf

You can download it from the page
https://sourceforge.net/projects/postgres-xc/files/misc/

Misc is created to store temporary materials intended to be part of further releases.

Good luck.

----------
Koichi Suzuki
From: Koichi S. <koi...@gm...> - 2011-02-21 05:33:16
Hi Mason,

----------
Koichi Suzuki

2011/2/18 Mason <ma...@us...>:
> ---------- Forwarded message ----------
> From: Michael Paquier <mic...@us...>
> Date: Thu, Feb 17, 2011 at 11:06 PM
> Subject: [Postgres-xc-committers] Postgres-XC branch, ha_support, updated. v0.9.3-55-gd73ae51
> To: pos...@li...
>
> Project "Postgres-XC".
>
> The branch, ha_support has been updated
>        via  d73ae5182149b08e0728edb96eee339e0c0498b7 (commit)
>       from  f42b489b49f366c78d816708d47b380f9db640d9 (commit)
>
> - Log -----------------------------------------------------------------
> commit d73ae5182149b08e0728edb96eee339e0c0498b7
> Author: Michael P <mic...@us...>
> Date:   Fri Feb 18 15:49:39 2011 +0900
>
>     Mirroring and XCM (XC Cluster Manager) implementation
>
> Hi Michael,
>
> I will try and take a closer look when I have time.
>
> Just wondering, have you guys done any measurements to show the
> performance impact on DBT-1?

Not yet. Because we have to double-write in the case of an updating transaction, and because 2PC is involved, this will have some impact on performance. Mirroring is for applications which need very high availability, where no transaction loss is allowed even if a datanode fails.

People may ask why we do this first and why we don't use streaming replication. We were waiting for synchronous-mode streaming replication, which means that the MASTER guarantees all the WAL records are shipped to the SLAVE and no committed transaction will be lost. I think this is important to maintain database integrity in the whole cluster. Unfortunately, I'm afraid it might not be in PG 9.1.

Anyway, I'd like to include streaming replication sometime this year so that we can maintain overall performance with reasonable availability.

Regards;
---
Koichi

> Thanks,
>
> Mason
From: Koichi S. <koi...@gm...> - 2011-02-21 02:11:52
Are there other candidate hosts we could move to? Should we consider having postgres-xc.org?

----------
Koichi Suzuki

2011/2/21 Michael Paquier <mic...@gm...>:
> Hi all,
>
> Perhaps you noticed over the last couple of weeks that some nice (irony?!)
> advertisement messages are inserted at the end of each message we send on
> the mailing list.
>
> Having advertisements for, for instance, Oracle RAC (closed source,
> private owner) in the mailing list of an open source project about
> database clustering is perhaps not the best situation for users of
> Postgres-XC.
>
> I found another mailing list system.
> It looks like they do not intrusively insert advert messages at the end of
> each email.
> http://lists.freebsd.org/mailman/create
>
> So, why not move to this service?
> --
> Michael Paquier
> http://michaelpq.users.sourceforge.net
From: Michael P. <mic...@gm...> - 2011-02-21 00:52:33
Hi all,

Perhaps you noticed over the last couple of weeks that some nice (irony?!) advertisement messages are inserted at the end of each message we send on the mailing list.

Having advertisements for, for instance, Oracle RAC (closed source, private owner) in the mailing list of an open source project about database clustering is perhaps not the best situation for users of Postgres-XC.

I found another mailing list system. It looks like they do not intrusively insert advert messages at the end of each email.
http://lists.freebsd.org/mailman/create

So, why not move to this service?

--
Michael Paquier
http://michaelpq.users.sourceforge.net
From: Devrim G. <de...@gu...> - 2011-02-20 12:23:42
Hi,

On Sun, 2011-02-20 at 21:16 +0900, Michael Paquier wrote:
> Nice to hear from you.
> Yes, we are planning to do the 0.9.4 release by the end of March.
> I know that the website of the project has not been updated lately...
>
> But this release will be done with all the necessary documentation.

Perfect. Thanks for the update.

Regards,
--
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org
Twitter: http://twitter.com/devrimgunduz
From: Devrim G. <de...@gu...> - 2011-02-20 10:50:39
Hi,

There have been lots of commits since 0.9.3. Do you think that it is a good time for 0.9.4?

Regards,
--
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org
Twitter: http://twitter.com/devrimgunduz
From: Mason <ma...@us...> - 2011-02-18 13:52:28
---------- Forwarded message ----------
From: Michael Paquier <mic...@us...>
Date: Thu, Feb 17, 2011 at 11:06 PM
Subject: [Postgres-xc-committers] Postgres-XC branch, ha_support, updated. v0.9.3-55-gd73ae51
To: pos...@li...

Project "Postgres-XC".

The branch, ha_support has been updated
       via  d73ae5182149b08e0728edb96eee339e0c0498b7 (commit)
      from  f42b489b49f366c78d816708d47b380f9db640d9 (commit)

- Log -----------------------------------------------------------------
commit d73ae5182149b08e0728edb96eee339e0c0498b7
Author: Michael P <mic...@us...>
Date:   Fri Feb 18 15:49:39 2011 +0900

    Mirroring and XCM (XC Cluster Manager) implementation


Hi Michael,

I will try and take a closer look when I have time.

Just wondering, have you guys done any measurements to show the performance impact on DBT-1?

Thanks,

Mason