From: Karthik S. <kse...@ad...> - 2013-03-22 18:53:26
+Alex and Sasi On 3/21/13 7:58 AM, "Bei Xu" <be...@ad...> wrote: >Hi, Koichi: >Base on your reply, > >Since slave is a copy of master, the slave has the same GTM_proxy listed >in postgresql.conf as the master, it will connect to server3's proxy AFTER >SLAVE IS STARTED, >And we will only change the slave's proxy to server 4 AFTER promotion, >correct? > >Thus, looks like SLAVE needs to connect to A PROXY at ALL TIME: before >promotion is server3's proxy, after promotion is server 4's proxy. > >Please take a look at following 2 senarios: >Senario1: If slave was configured with server4's proxy AFTER SLAVE IS >STARTED, upon server 3 failure, we will do : >1) promote on slave >Since slave is already connect server 4's proxy, we don't have to do >anything here. > >senario2: If slave was configured with server3's proxy AFTER SLAVE IS >STARTED, upon server 3 failure, we will do: >1) restart slave to change proxy from server3's proxy value to server4's >proxy value >2) promote on slave > >Obviously, senario1 has less steps and simpler, senario2 is suggested by >you. Is there any reason you suggested senario2? > >My concern is, If a slave is connect to any active proxy (the proxy is >started and pointing to the GTM), will the transaction be applied TWICE? >One from proxy, one from the master? > > > > > > >On 3/21/13 12:40 AM, "Koichi Suzuki" <koi...@gm...> wrote: > >>Only after promotion. Before promotion, they will not be connected >>to gtm_proxy. >> >>Regards; >>---------- >>Koichi Suzuki >> >> >>2013/3/21 Bei Xu <be...@ad...>: >>> Hi Koichi: >>> Thanks for the reply. I still have doubts for item 1. If we setup >>> proxy on server 4, do we reconfigure server 4's coordinator/datanodes >>>to >>> point to server 4's proxy at ALL TIME(after replication is setup, I can >>> change gtm_host to point to server4's proxy before I bring up slaves) >>>or >>> only AFTER promotion? >>> >>> >>> On 3/20/13 11:08 PM, "Koichi Suzuki" <koi...@gm...> wrote: >>> >>>>1. It's better to have gtm proxy at server 4 when you failover to this >>>>server. We need gtm proxy now to failover GTM while >>>>coordinators/datanodes are running. When you simply make a copy of >>>>coordinator/datanode with pg_basebackup and promote them, they will >>>>try to connect to gtm_proxy at server3. You need to reconfigure them >>>>to connect to gtm_proxy at server4. >>>> >>>>2. Only one risk is the recovery point could be different from >>>>component to component, I mean, some transaction may be committed at >>>>some node but aborted at another because there could be some >>>>difference in available WAL records. It may possible to improve the >>>>core to handle this to some extent but please understand there will be >>>>some corner case, especially if DDL is involved in such a case. This >>>>chance could be small and you may be able to correct this manually or >>>>this can be allowed in some applications. >>>> >>>>Regards; >>>>---------- >>>>Koichi Suzuki >>>> >>>> >>>>2013/3/21 Bei Xu <be...@ad...>: >>>>> Hi, I want to set up HA for pgxc, please see below for my current >>>>>setup. >>>>> >>>>> server1: 1 GTM >>>>> server2: 1 GTM_Standby >>>>> server3 (master): 1 proxy >>>>> 1 coordinator >>>>> 2 datanode >>>>> >>>>> Server4: (stream replication slave) : 1 standalone proxy ?? >>>>> 1 replicated coordinator (slave of >>>>> server3's coordinator) >>>>> 2 replicated datanode (slave of >>>>> server3's datanodes) >>>>> >>>>> >>>>> server3's coordinator and datanodes are the master of the server4's >>>>> coordinator/datanodes by stream replication. 
>>>>> >>>>> Question. >>>>> 1. Should there be a proxy on server 4? If not, which proxy should >>>>>the >>>>> server4's coordinator and datanodes pointing to? (I have to specify >>>>>the >>>>> gtm_host in postgresql.conf)/ >>>>> 2. Do I have to use synchronous replication vs Asynchrous >>>>>replication? >>>>>I am >>>>> currently using Asynchrnous replication because I think if I use >>>>> synchronous, slave failour will affect master. >>>>> >>>>> >>>>>---------------------------------------------------------------------- >>>>>- >>>>>-- >>>>>----- >>>>> Everyone hates slow websites. So do we. >>>>> Make your web apps faster with AppDynamics >>>>> Download AppDynamics Lite for free today: >>>>> http://p.sf.net/sfu/appdyn_d2d_mar >>>>> _______________________________________________ >>>>> Postgres-xc-developers mailing list >>>>> Pos...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>> >>>> >>> >>> >> > |
From: Ashutosh B. <ash...@en...> - 2013-03-22 05:02:30
I had forgotten to send it on hackers list. ---------- Forwarded message ---------- From: Ashutosh Bapat <ash...@en...> Date: Fri, Feb 15, 2013 at 1:45 PM Subject: Subqueries and permission checks To: Postgres-XC core <Pos...@li...> Hi All, It seems subqueries FQS or faster planning has many unknown problems. This one is second in the series, Permissions on object involved in queries are checked at the time of executing the query (not planning or parsing). This is done by traversing the final range table collected in PlannerGlobal during the planning phase. This final range table contains the RTEs from the subqueries and sublinks in the query, collected during pull_up_subqueries() and pull_up_sublinks(). While FQSing a query, we need to do the same. We need to collect the RTEs from the subqueries and sublinks. We need to do it after we have deemed a query to be FQSable. We can do it while walking the tree for shippability, but read on. In very near future, we should be able to use the infrastructure for FQS for shipping sub-queries without planning at the coordinator, the whole query is not shippable. If we get there, (which I am hoping to do in this quarter), we can't rely on the shippability walker to gather the RTEs as said above, since this collection will be lost once we come out of FQS planner. Instead we need to do it, after we have decided to FQS a certain subquery or sublink. To do so, the only way I see is to add yet another walker just to collect the RTEs. Does anybody see any other way? -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Bei Xu <be...@ad...> - 2013-03-21 15:13:51
Hi, Koichi: Base on your reply, Since slave is a copy of master, the slave has the same GTM_proxy listed in postgresql.conf as the master, it will connect to server3's proxy AFTER SLAVE IS STARTED, And we will only change the slave's proxy to server 4 AFTER promotion, correct? Thus, looks like SLAVE needs to connect to A PROXY at ALL TIME: before promotion is server3's proxy, after promotion is server 4's proxy. Please take a look at following 2 senarios: Senario1: If slave was configured with server4's proxy AFTER SLAVE IS STARTED, upon server 3 failure, we will do : 1) promote on slave Since slave is already connect server 4's proxy, we don't have to do anything here. senario2: If slave was configured with server3's proxy AFTER SLAVE IS STARTED, upon server 3 failure, we will do: 1) restart slave to change proxy from server3's proxy value to server4's proxy value 2) promote on slave Obviously, senario1 has less steps and simpler, senario2 is suggested by you. Is there any reason you suggested senario2? My concern is, If a slave is connect to any active proxy (the proxy is started and pointing to the GTM), will the transaction be applied TWICE? One from proxy, one from the master? On 3/21/13 12:40 AM, "Koichi Suzuki" <koi...@gm...> wrote: >Only after promotion. Before promotion, they will not be connected >to gtm_proxy. > >Regards; >---------- >Koichi Suzuki > > >2013/3/21 Bei Xu <be...@ad...>: >> Hi Koichi: >> Thanks for the reply. I still have doubts for item 1. If we setup >> proxy on server 4, do we reconfigure server 4's coordinator/datanodes to >> point to server 4's proxy at ALL TIME(after replication is setup, I can >> change gtm_host to point to server4's proxy before I bring up slaves) or >> only AFTER promotion? >> >> >> On 3/20/13 11:08 PM, "Koichi Suzuki" <koi...@gm...> wrote: >> >>>1. It's better to have gtm proxy at server 4 when you failover to this >>>server. We need gtm proxy now to failover GTM while >>>coordinators/datanodes are running. When you simply make a copy of >>>coordinator/datanode with pg_basebackup and promote them, they will >>>try to connect to gtm_proxy at server3. You need to reconfigure them >>>to connect to gtm_proxy at server4. >>> >>>2. Only one risk is the recovery point could be different from >>>component to component, I mean, some transaction may be committed at >>>some node but aborted at another because there could be some >>>difference in available WAL records. It may possible to improve the >>>core to handle this to some extent but please understand there will be >>>some corner case, especially if DDL is involved in such a case. This >>>chance could be small and you may be able to correct this manually or >>>this can be allowed in some applications. >>> >>>Regards; >>>---------- >>>Koichi Suzuki >>> >>> >>>2013/3/21 Bei Xu <be...@ad...>: >>>> Hi, I want to set up HA for pgxc, please see below for my current >>>>setup. >>>> >>>> server1: 1 GTM >>>> server2: 1 GTM_Standby >>>> server3 (master): 1 proxy >>>> 1 coordinator >>>> 2 datanode >>>> >>>> Server4: (stream replication slave) : 1 standalone proxy ?? >>>> 1 replicated coordinator (slave of >>>> server3's coordinator) >>>> 2 replicated datanode (slave of >>>> server3's datanodes) >>>> >>>> >>>> server3's coordinator and datanodes are the master of the server4's >>>> coordinator/datanodes by stream replication. >>>> >>>> Question. >>>> 1. Should there be a proxy on server 4? If not, which proxy should >>>>the >>>> server4's coordinator and datanodes pointing to? 
(I have to specify >>>>the >>>> gtm_host in postgresql.conf)/ >>>> 2. Do I have to use synchronous replication vs Asynchrous replication? >>>>I am >>>> currently using Asynchrnous replication because I think if I use >>>> synchronous, slave failour will affect master. >>>> >>>> >>>>----------------------------------------------------------------------- >>>>-- >>>>----- >>>> Everyone hates slow websites. So do we. >>>> Make your web apps faster with AppDynamics >>>> Download AppDynamics Lite for free today: >>>> http://p.sf.net/sfu/appdyn_d2d_mar >>>> _______________________________________________ >>>> Postgres-xc-developers mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>> >>> >> >> > |
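A minimal shell sketch of scenario 1 above, that is, pointing the standby at the gtm_proxy on its own server before it is started; the data directory, host name, and proxy port are hypothetical:

STANDBY_DATA=/data/coord_slave           # hypothetical standby data directory on server4
cat >> "$STANDBY_DATA/postgresql.conf" <<'EOF'
gtm_host = 'server4'   # the standby server's own gtm_proxy (hypothetical host)
gtm_port = 6666        # hypothetical proxy port
EOF
# ...then start the standby as usual; a later promotion needs no GTM change.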
From: Abbas B. <abb...@en...> - 2013-03-21 11:58:23
Hi, In the last meeting we discussed that we should try and catch DDLs at a central place to block them for taking backup and for that we should try and use object access hooks. The comments at top of the file src/include/catalog/objectaccess.h read * Object access hooks are intended to be called just before or just after * performing certain actions on a SQL object. This is intended as * infrastructure for security or logging pluggins. * * OAT_POST_CREATE should be invoked just after the object is created. * Typically, this is done after inserting the primary catalog records and * associated dependencies. * * OAT_DROP should be invoked just before deletion of objects; typically * deleteOneObject(). Its arguments are packed within ObjectAccessDrop. Please note that since object access hooks are called AFTER the creation of objects we cannot use them to block object creation. The way object access hooks work is that they have added a function call InvokeObjectAccessHook at the end of every object creation. This means that they have added a function call at the end of CollationCreate, OperatorCreate, TypeShellMake and so on. If we choose to add another object access hook type say OAT_PRE_CREATE, we will have to call the function InvokeObjectAccessHook at the start of every object create function, which again won't be a central place. Another point discussed was that in future a new DDL might get introduced via merge, so we should avoid doing a switch case on statement tag. I propose a solution to this problem and that is to have any unknown DDL in the deny list by default. This way we will always be on the safe side, and if the we want to allow that particular DDL, it can always be added to allow list. It was discussed that event triggers might be using a central place to catch DDLs, so we should explore its implementation. I looked at the patches committed in the repository and found that they are calling a function in each of the cases in standard_ProcessUtility to have the triggers to fire added in a list. See the following code fragment to have an idea case T_CreateConversionStmt: + if (isCompleteQuery) + EventTriggerDDLCommandStart(parsetree); CreateConversionCommand((CreateConversionStmt *) parsetree); break; case T_CreateCastStmt: + if (isCompleteQuery) + EventTriggerDDLCommandStart(parsetree); CreateCast((CreateCastStmt *) parsetree); break; case T_CreateOpClassStmt: + if (isCompleteQuery) + EventTriggerDDLCommandStart(parsetree); DefineOpClass((CreateOpClassStmt *) parsetree); break; Obviously we cannot adopt this approach because it will increase the footprint of our code unnecessarily. I therefore have to move forward with the switch case on statement tag approach, unless we come up with some other idea. Best Regards -- Abbas Architect EnterpriseDB Corporation The Enterprise PostgreSQL Company Phone: 92-334-5100153 Website: www.enterprisedb.com EnterpriseDB Blog: http://blogs.enterprisedb.com/ Follow us on Twitter: http://www.twitter.com/enterprisedb This e-mail message (and any attachment) is intended for the use of the individual or entity to whom it is addressed. This message contains information from EnterpriseDB Corporation that may be privileged, confidential, or exempt from disclosure under applicable law. If you are not the intended recipient or authorized to receive this for the intended recipient, any use, dissemination, distribution, retention, archiving, or copying of this communication is strictly prohibited. 
If you have received this e-mail in error, please notify the sender immediately by reply e-mail and delete this message. |
From: Koichi S. <koi...@gm...> - 2013-03-21 07:40:42
Only after promotion. Before promotion, they will not be connected to gtm_proxy. Regards; ---------- Koichi Suzuki 2013/3/21 Bei Xu <be...@ad...>: > Hi Koichi: > Thanks for the reply. I still have doubts for item 1. If we setup > proxy on server 4, do we reconfigure server 4's coordinator/datanodes to > point to server 4's proxy at ALL TIME(after replication is setup, I can > change gtm_host to point to server4's proxy before I bring up slaves) or > only AFTER promotion? > > > On 3/20/13 11:08 PM, "Koichi Suzuki" <koi...@gm...> wrote: > >>1. It's better to have gtm proxy at server 4 when you failover to this >>server. We need gtm proxy now to failover GTM while >>coordinators/datanodes are running. When you simply make a copy of >>coordinator/datanode with pg_basebackup and promote them, they will >>try to connect to gtm_proxy at server3. You need to reconfigure them >>to connect to gtm_proxy at server4. >> >>2. Only one risk is the recovery point could be different from >>component to component, I mean, some transaction may be committed at >>some node but aborted at another because there could be some >>difference in available WAL records. It may possible to improve the >>core to handle this to some extent but please understand there will be >>some corner case, especially if DDL is involved in such a case. This >>chance could be small and you may be able to correct this manually or >>this can be allowed in some applications. >> >>Regards; >>---------- >>Koichi Suzuki >> >> >>2013/3/21 Bei Xu <be...@ad...>: >>> Hi, I want to set up HA for pgxc, please see below for my current setup. >>> >>> server1: 1 GTM >>> server2: 1 GTM_Standby >>> server3 (master): 1 proxy >>> 1 coordinator >>> 2 datanode >>> >>> Server4: (stream replication slave) : 1 standalone proxy ?? >>> 1 replicated coordinator (slave of >>> server3's coordinator) >>> 2 replicated datanode (slave of >>> server3's datanodes) >>> >>> >>> server3's coordinator and datanodes are the master of the server4's >>> coordinator/datanodes by stream replication. >>> >>> Question. >>> 1. Should there be a proxy on server 4? If not, which proxy should >>>the >>> server4's coordinator and datanodes pointing to? (I have to specify the >>> gtm_host in postgresql.conf)/ >>> 2. Do I have to use synchronous replication vs Asynchrous replication? >>>I am >>> currently using Asynchrnous replication because I think if I use >>> synchronous, slave failour will affect master. >>> >>> >>>------------------------------------------------------------------------- >>>----- >>> Everyone hates slow websites. So do we. >>> Make your web apps faster with AppDynamics >>> Download AppDynamics Lite for free today: >>> http://p.sf.net/sfu/appdyn_d2d_mar >>> _______________________________________________ >>> Postgres-xc-developers mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> >> > > |
From: Ashutosh B. <ash...@en...> - 2013-03-21 06:43:26
On Thu, Mar 21, 2013 at 10:35 AM, Bei Xu <be...@ad...> wrote: > Ashutosh: > Thanks for the suggestion. We only have limited 6 servers allocated. > 3 servers are masters, 3 servers are slaves (stream replication). > If we set up 1 datanode per server, then we only have 3 active datanodes > in total. That's why we set up 2 datanodes per server in order to have 6 > active datanodes. > Do you think 3 active datanodes in 3 servers perform better than 6 active > datanodes in 3 servers? > 3 active datanodes on 3 separate server is expected to do better than 6 active datanodes on 3 servers. But, in rare cases (if they balance CPU and IO amongst two datanodes on same server) 6 datanodes on 3 servers might be as good as other configuration. Please see if the later is the case with you, but that would be rare, I guess. > > From: Ashutosh Bapat <ash...@en...> > Date: Wednesday, March 20, 2013 11:25 PM > To: Xu Bei <be...@ad...> > Cc: "pos...@li..." < > pos...@li...>, Karthik Sethupathy < > kse...@ad...>, Venky Kandaswamy <ve...@ad...> > Subject: Re: [Postgres-xc-developers] proxy setup on standby server > > Hi Bei, > Suzuki-san has replied to your questions. I have different suggestion. > > You may want to use separate servers for the two datanodes, that way > improves performance because CPU and IO loads are divided. > > On Wed, Mar 20, 2013 at 9:55 PM, Bei Xu <be...@ad...> wrote: > >> Hi, I want to set up HA for pgxc, please see below for my current setup. >> >> server1: 1 GTM >> server2: 1 GTM_Standby >> server3 (master): 1 proxy >> 1 coordinator >> 2 datanode >> >> Server4: (stream replication slave) : 1 standalone proxy ?? >> 1 replicated coordinator (slave of >> server3's coordinator) >> 2 replicated datanode (slave of >> server3's datanodes) >> >> >> server3's coordinator and datanodes are the master of the server4's >> coordinator/datanodes by stream replication. >> >> Question. >> 1. Should there be a proxy on server 4? If not, which proxy should the >> server4's coordinator and datanodes pointing to? (I have to specify the >> gtm_host in postgresql.conf)/ >> 2. Do I have to use synchronous replication vs Asynchrous replication? I >> am currently using Asynchrnous replication because I think if I use >> synchronous, slave failour will affect master. >> >> >> ------------------------------------------------------------------------------ >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_mar >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Nikhil S. <ni...@st...> - 2013-03-21 06:38:45
Hi Bei, > server1: 1 GTM > server2: 1 GTM_Standby > server3 (master): 1 proxy > 1 coordinator > 2 datanode > > Server4: (stream replication slave) : 1 standalone proxy ?? > 1 replicated coordinator (slave of > server3's coordinator) > 2 replicated datanode (slave of > server3's datanodes) > > > server3's coordinator and datanodes are the master of the server4's > coordinator/datanodes by stream replication. > IMO, a better config would be to have datanode1 running on server3 and datanode 2 running on server4. Also their replicas should then respectively go to server4 and server3 respectively. > 2. Do I have to use synchronous replication vs Asynchrous replication? I am > currently using Asynchrnous replication because I think if I use > synchronous, slave failour will affect master. > Or consider having two synchronous replicas configured. Also the replicas need not be hot standby replicas. Regards, Nikhils -- StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Service |
From: Bei Xu <be...@ad...> - 2013-03-21 06:35:38
Ashutosh: Thanks for the suggestion. We only have limited 6 servers allocated. 3 servers are masters, 3 servers are slaves (stream replication). If we set up 1 datanode per server, then we only have 3 active datanodes in total. That's why we set up 2 datanodes per server in order to have 6 active datanodes. Do you think 3 active datanodes in 3 servers perform better than 6 active datanodes in 3 servers? From: Ashutosh Bapat <ash...@en...<mailto:ash...@en...>> Date: Wednesday, March 20, 2013 11:25 PM To: Xu Bei <be...@ad...<mailto:be...@ad...>> Cc: "pos...@li...<mailto:pos...@li...>" <pos...@li...<mailto:pos...@li...>>, Karthik Sethupathy <kse...@ad...<mailto:kse...@ad...>>, Venky Kandaswamy <ve...@ad...<mailto:ve...@ad...>> Subject: Re: [Postgres-xc-developers] proxy setup on standby server Hi Bei, Suzuki-san has replied to your questions. I have different suggestion. You may want to use separate servers for the two datanodes, that way improves performance because CPU and IO loads are divided. On Wed, Mar 20, 2013 at 9:55 PM, Bei Xu <be...@ad...<mailto:be...@ad...>> wrote: Hi, I want to set up HA for pgxc, please see below for my current setup. server1: 1 GTM server2: 1 GTM_Standby server3 (master): 1 proxy 1 coordinator 2 datanode Server4: (stream replication slave) : 1 standalone proxy ?? 1 replicated coordinator (slave of server3's coordinator) 2 replicated datanode (slave of server3's datanodes) server3's coordinator and datanodes are the master of the server4's coordinator/datanodes by stream replication. Question. 1. Should there be a proxy on server 4? If not, which proxy should the server4's coordinator and datanodes pointing to? (I have to specify the gtm_host in postgresql.conf)/ 2. Do I have to use synchronous replication vs Asynchrous replication? I am currently using Asynchrnous replication because I think if I use synchronous, slave failour will affect master. ------------------------------------------------------------------------------ Everyone hates slow websites. So do we. Make your web apps faster with AppDynamics Download AppDynamics Lite for free today: http://p.sf.net/sfu/appdyn_d2d_mar _______________________________________________ Postgres-xc-developers mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Bei Xu <be...@ad...> - 2013-03-21 06:31:09
Hi Koichi: Thanks for the reply. I still have doubts for item 1. If we setup proxy on server 4, do we reconfigure server 4's coordinator/datanodes to point to server 4's proxy at ALL TIME(after replication is setup, I can change gtm_host to point to server4's proxy before I bring up slaves) or only AFTER promotion? On 3/20/13 11:08 PM, "Koichi Suzuki" <koi...@gm...> wrote: >1. It's better to have gtm proxy at server 4 when you failover to this >server. We need gtm proxy now to failover GTM while >coordinators/datanodes are running. When you simply make a copy of >coordinator/datanode with pg_basebackup and promote them, they will >try to connect to gtm_proxy at server3. You need to reconfigure them >to connect to gtm_proxy at server4. > >2. Only one risk is the recovery point could be different from >component to component, I mean, some transaction may be committed at >some node but aborted at another because there could be some >difference in available WAL records. It may possible to improve the >core to handle this to some extent but please understand there will be >some corner case, especially if DDL is involved in such a case. This >chance could be small and you may be able to correct this manually or >this can be allowed in some applications. > >Regards; >---------- >Koichi Suzuki > > >2013/3/21 Bei Xu <be...@ad...>: >> Hi, I want to set up HA for pgxc, please see below for my current setup. >> >> server1: 1 GTM >> server2: 1 GTM_Standby >> server3 (master): 1 proxy >> 1 coordinator >> 2 datanode >> >> Server4: (stream replication slave) : 1 standalone proxy ?? >> 1 replicated coordinator (slave of >> server3's coordinator) >> 2 replicated datanode (slave of >> server3's datanodes) >> >> >> server3's coordinator and datanodes are the master of the server4's >> coordinator/datanodes by stream replication. >> >> Question. >> 1. Should there be a proxy on server 4? If not, which proxy should >>the >> server4's coordinator and datanodes pointing to? (I have to specify the >> gtm_host in postgresql.conf)/ >> 2. Do I have to use synchronous replication vs Asynchrous replication? >>I am >> currently using Asynchrnous replication because I think if I use >> synchronous, slave failour will affect master. >> >> >>------------------------------------------------------------------------- >>----- >> Everyone hates slow websites. So do we. >> Make your web apps faster with AppDynamics >> Download AppDynamics Lite for free today: >> http://p.sf.net/sfu/appdyn_d2d_mar >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > |
From: Ashutosh B. <ash...@en...> - 2013-03-21 06:25:38
Hi Bei, Suzuki-san has replied to your questions. I have different suggestion. You may want to use separate servers for the two datanodes, that way improves performance because CPU and IO loads are divided. On Wed, Mar 20, 2013 at 9:55 PM, Bei Xu <be...@ad...> wrote: > Hi, I want to set up HA for pgxc, please see below for my current setup. > > server1: 1 GTM > server2: 1 GTM_Standby > server3 (master): 1 proxy > 1 coordinator > 2 datanode > > Server4: (stream replication slave) : 1 standalone proxy ?? > 1 replicated coordinator (slave of > server3's coordinator) > 2 replicated datanode (slave of > server3's datanodes) > > > server3's coordinator and datanodes are the master of the server4's > coordinator/datanodes by stream replication. > > Question. > 1. Should there be a proxy on server 4? If not, which proxy should the > server4's coordinator and datanodes pointing to? (I have to specify the > gtm_host in postgresql.conf)/ > 2. Do I have to use synchronous replication vs Asynchrous replication? I > am currently using Asynchrnous replication because I think if I use > synchronous, slave failour will affect master. > > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Koichi S. <koi...@gm...> - 2013-03-21 06:18:47
GTM and GTM standby name can be the same. Others should be unique. GTM will reject connection. You cannot issue CREATE NODE for the node which shares the name with others. Regards; ---------- Koichi Suzuki 2013/3/21 Bei Xu <be...@ad...>: > Hi, All: > Does "nodename" parameter has to be to different on all the components in > pgxc cluster? > For instance, gtm and gtm_standby > Datanode and datanode replica. > Proxy names > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
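For illustration, a hedged sketch of registering nodes under unique names from a coordinator; the node names, hosts, and ports below are made up:

psql -p 5432 -d postgres <<'SQL'
CREATE NODE coord2 WITH (TYPE = 'coordinator', HOST = 'server4', PORT = 5432);
CREATE NODE dn1    WITH (TYPE = 'datanode',    HOST = 'server3', PORT = 15432);
CREATE NODE dn2    WITH (TYPE = 'datanode',    HOST = 'server3', PORT = 15433);
-- Reusing an existing name (e.g. a second CREATE NODE dn1) is rejected, as noted above.
SQL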
From: Koichi S. <koi...@gm...> - 2013-03-21 06:08:54
1. It's better to have gtm proxy at server 4 when you failover to this server. We need gtm proxy now to failover GTM while coordinators/datanodes are running. When you simply make a copy of coordinator/datanode with pg_basebackup and promote them, they will try to connect to gtm_proxy at server3. You need to reconfigure them to connect to gtm_proxy at server4. 2. Only one risk is the recovery point could be different from component to component, I mean, some transaction may be committed at some node but aborted at another because there could be some difference in available WAL records. It may possible to improve the core to handle this to some extent but please understand there will be some corner case, especially if DDL is involved in such a case. This chance could be small and you may be able to correct this manually or this can be allowed in some applications. Regards; ---------- Koichi Suzuki 2013/3/21 Bei Xu <be...@ad...>: > Hi, I want to set up HA for pgxc, please see below for my current setup. > > server1: 1 GTM > server2: 1 GTM_Standby > server3 (master): 1 proxy > 1 coordinator > 2 datanode > > Server4: (stream replication slave) : 1 standalone proxy ?? > 1 replicated coordinator (slave of > server3's coordinator) > 2 replicated datanode (slave of > server3's datanodes) > > > server3's coordinator and datanodes are the master of the server4's > coordinator/datanodes by stream replication. > > Question. > 1. Should there be a proxy on server 4? If not, which proxy should the > server4's coordinator and datanodes pointing to? (I have to specify the > gtm_host in postgresql.conf)/ > 2. Do I have to use synchronous replication vs Asynchrous replication? I am > currently using Asynchrnous replication because I think if I use > synchronous, slave failour will affect master. > > ------------------------------------------------------------------------------ > Everyone hates slow websites. So do we. > Make your web apps faster with AppDynamics > Download AppDynamics Lite for free today: > http://p.sf.net/sfu/appdyn_d2d_mar > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
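A rough shell sketch of the failover path described here, assuming the standby datanode lives in /data/dn1 on server4 and the local gtm_proxy listens on port 6666 (all placeholders), and that gtm_host/gtm_port lines already exist in the configuration file:

# The standby was built earlier from server3's master, for example with:
#   pg_basebackup -h server3 -p 15432 -D /data/dn1
PGDATA=/data/dn1
PROXY_HOST=server4      # gtm_proxy local to the surviving server
PROXY_PORT=6666
# Repoint the node at the local gtm_proxy, restart so the setting takes
# effect, then promote it.
sed -i "s/^#\?gtm_host.*/gtm_host = '$PROXY_HOST'/" "$PGDATA/postgresql.conf"
sed -i "s/^#\?gtm_port.*/gtm_port = $PROXY_PORT/"   "$PGDATA/postgresql.conf"
pg_ctl restart -D "$PGDATA"
pg_ctl promote -D "$PGDATA"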
From: Bei Xu <be...@ad...> - 2013-03-20 18:26:38
Hi, All: Does "nodename" parameter has to be to different on all the components in pgxc cluster? For instance, gtm and gtm_standby Datanode and datanode replica. Proxy names |
From: Bei Xu <be...@ad...> - 2013-03-20 18:11:06
Hi, I want to set up HA for pgxc, please see below for my current setup. server1: 1 GTM server2: 1 GTM_Standby server3 (master): 1 proxy 1 coordinator 2 datanode Server4: (stream replication slave) : 1 standalone proxy ?? 1 replicated coordinator (slave of server3's coordinator) 2 replicated datanode (slave of server3's datanodes) server3's coordinator and datanodes are the master of the server4's coordinator/datanodes by stream replication. Question. 1. Should there be a proxy on server 4? If not, which proxy should the server4's coordinator and datanodes pointing to? (I have to specify the gtm_host in postgresql.conf)/ 2. Do I have to use synchronous replication vs Asynchrous replication? I am currently using Asynchrnous replication because I think if I use synchronous, slave failour will affect master. |
From: Amit K. <ami...@en...> - 2013-03-19 06:19:07
Yes this looks good to go. Thanks for adding the single-node scenario in the tests. On 18 March 2013 16:21, Ashutosh Bapat <ash...@en...>wrote: > Hi Amit, > PFA the patch with changes. Let me know if it's good to commit. > > > On Mon, Mar 18, 2013 at 2:11 PM, Ashutosh Bapat < > ash...@en...> wrote: > >> Ok, I think it's better to leave distributed as distributed and handle >> each separately. >> >> >> On Mon, Mar 18, 2013 at 2:02 PM, Amit Khandekar < >> ami...@en...> wrote: >> >>> >>> >>> On 8 March 2013 14:00, Ashutosh Bapat <ash...@en...>wrote: >>> >>>> Hi Amit, >>>> Please find my replies inlined, >>>> >>>> >>>> >>>>> I think the logic of shippability of outer joins is flawless. Didn't >>>>> find any holes. Patch comments below : >>>>> >>>>> ------- >>>>> >>>>> In case of distributed equi-join case, why is >>>>> IsExecNodesColumnDistributed() used instead of >>>>> IsExecNodesDistributedByValue() ? We want to always rule out the round >>>>> robin case, no ? I can see that pgxc_find_dist_equijoin_qual() will >>>>> always fail for round robin tables because they won't have any distrib >>>>> columns, but still , just curious ... >>>>> >>>>> >>>> It keeps open the possibility that we will be able to ship equi-join if >>>> we can somehow infer that the rows from both the sides of join, >>>> participating in the result of join are collocated. >>>> >>>> >>>>> ------- >>>>> >>>>> * PGXC_TODO: What do we do when baselocatortype is >>>>> * LOCATOR_TYPE_DISTRIBUTED? It could be anything HASH >>>>> distributed or >>>>> * MODULO distributed. In that case, having equi-join >>>>> doesn't work >>>>> * really, because same value from different relation >>>>> will go to >>>>> * different node. >>>>> >>>>> The above comment says that it does not work if one of the tables is >>>>> distributed by hash and other table is distributed by modulo. But the >>>>> code is actually checking the baselocatortype also, so I guess it >>>>> works correctly after all ? I did not get what is the TODO here. Or >>>>> does it mean this ? : >>>>> For (t1_hash join t2_hash on ...) tj1 join (t1_mod join t2_mod on ...) >>>>> tj2 on tj1.col1 = tj2.col4 >>>>> the merged nodes for tj1 will have LOCATOR_TYPE_DISTRIBUTED, and the >>>>> merged nodes for tj2 will also be LOCATOR_TYPE_DISTRIBUTED, and so tj1 >>>>> join tj2 would be wrongly marked shippable even though they should not >>>>> be shippable because of the mix of hash and modulo ? >>>>> >>>>> >>>> That's correct. This should be taken care by my second patch up for >>>> review. I think with that patch, we won't need LOCATOR_TYPE_DISTRIBUTED. >>>> While reviewing that patch, can you please also review if this is true. >>>> >>>> >>>>> ------- >>>>> >>>>> Is pgxc_is_expr_shippable(equi_join_expr) necessary ? Won't this qual >>>>> be examined in is_query_shippable() walker ? >>>>> >>>> >>>> This code will get executed in standard_planner() as well, so it's >>>> possible that some of the join quals will be shippable and some are not. >>>> While this is fine for an inner join, we want to make sure the a qual which >>>> implies collocation of rows is shippable. This check is more from future >>>> extension perspective than anything else. >>>> >>>> >>> >>> Ok. Understood all the comments above. 
>>> >>> >>>> >>>>> -------- >>>>> >>>>> If both tables reside on a single datanode, every join case should be >>>>> shippable, which doesn't seem to be happening : >>>>> postgres=# create table tab2 (id2 int, v varchar) distribute by >>>>> replication to node (datanode_1); >>>>> postgres=# create table tab1 (id1 int, v varchar) to node (datanode_1); >>>>> postgres=# explain select * from (tab1 full outer join tab2 on id1 = >>>>> id2 ) ; >>>>> QUERY PLAN >>>>> >>>>> ------------------------------------------------------------------------------------------------- >>>>> Hash Full Join (cost=0.12..0.26 rows=10 width=72) >>>>> Hash Cond: (tab1.id1 = tab2.id2) >>>>> -> Data Node Scan on tab1 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 >>>>> rows=1000 width=36) >>>>> Node/s: datanode_1 >>>>> -> Hash (cost=0.00..0.00 rows=1000 width=36) >>>>> -> Data Node Scan on tab2 "_REMOTE_TABLE_QUERY_" >>>>> (cost=0.00..0.00 rows=1000 width=36) >>>>> Node/s: datanode_1 >>>>> >>>>> Probably you need to take out the following statement out of the >>>>> distributed case and apply it as a general rule: >>>>> /* If there is only single node, try merging the nodes >>>>> */ >>>>> if (list_length(inner_en->nodeList) == 1 && >>>>> list_length(outer_en->nodeList) == 1) >>>>> merge_nodes = true; >>>>> >>>>> >>>> I am thinking about this and actually thought that we should mark a >>>> single node ExecNodes as REPLICATED, so that it doesn't need any special >>>> handling. What do you think? >>>> >>> >>> I am concerned about loss of information that the underlying table is >>> actually distributed. Also, there is a function >>> IsReturningDMLOnReplicatedTable() which is using this information, although >>> not sure how much it's making use of that information. I leave that to you >>> for deciding which option to choose. I personally feel it's always good to >>> be explicit while checking for this condition. >>> >>> >>>> >>>> >>>>> >>>>> >>> -- >>>>> >>> Best Wishes, >>>>> >>> Ashutosh Bapat >>>>> >>> EntepriseDB Corporation >>>>> >>> The Enterprise Postgres Company >>>>> >> >>>>> >> >>>>> >> >>>>> >> >>>>> >> -- >>>>> >> Best Wishes, >>>>> >> Ashutosh Bapat >>>>> >> EntepriseDB Corporation >>>>> >> The Enterprise Postgres Company >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > -- >>>>> > Best Wishes, >>>>> > Ashutosh Bapat >>>>> > EntepriseDB Corporation >>>>> > The Enterprise Postgres Company >>>>> > >>>>> > >>>>> ------------------------------------------------------------------------------ >>>>> > Free Next-Gen Firewall Hardware Offer >>>>> > Buy your Sophos next-gen firewall before the end March 2013 >>>>> > and get the hardware for free! Learn more. >>>>> > http://p.sf.net/sfu/sophos-d2d-feb >>>>> > _______________________________________________ >>>>> > Postgres-xc-developers mailing list >>>>> > Pos...@li... >>>>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>>>> > >>>>> >>>> >>>> >>>> >>>> -- >>>> Best Wishes, >>>> Ashutosh Bapat >>>> EntepriseDB Corporation >>>> The Enterprise Postgres Company >>>> >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
From: Ashutosh B. <ash...@en...> - 2013-03-18 10:12:02
Ok, I think it's better to leave distributed as distributed and handle each separately. On Mon, Mar 18, 2013 at 2:02 PM, Amit Khandekar < ami...@en...> wrote: > > > On 8 March 2013 14:00, Ashutosh Bapat <ash...@en...>wrote: > >> Hi Amit, >> Please find my replies inlined, >> >> >> >>> I think the logic of shippability of outer joins is flawless. Didn't >>> find any holes. Patch comments below : >>> >>> ------- >>> >>> In case of distributed equi-join case, why is >>> IsExecNodesColumnDistributed() used instead of >>> IsExecNodesDistributedByValue() ? We want to always rule out the round >>> robin case, no ? I can see that pgxc_find_dist_equijoin_qual() will >>> always fail for round robin tables because they won't have any distrib >>> columns, but still , just curious ... >>> >>> >> It keeps open the possibility that we will be able to ship equi-join if >> we can somehow infer that the rows from both the sides of join, >> participating in the result of join are collocated. >> >> >>> ------- >>> >>> * PGXC_TODO: What do we do when baselocatortype is >>> * LOCATOR_TYPE_DISTRIBUTED? It could be anything HASH >>> distributed or >>> * MODULO distributed. In that case, having equi-join >>> doesn't work >>> * really, because same value from different relation >>> will go to >>> * different node. >>> >>> The above comment says that it does not work if one of the tables is >>> distributed by hash and other table is distributed by modulo. But the >>> code is actually checking the baselocatortype also, so I guess it >>> works correctly after all ? I did not get what is the TODO here. Or >>> does it mean this ? : >>> For (t1_hash join t2_hash on ...) tj1 join (t1_mod join t2_mod on ...) >>> tj2 on tj1.col1 = tj2.col4 >>> the merged nodes for tj1 will have LOCATOR_TYPE_DISTRIBUTED, and the >>> merged nodes for tj2 will also be LOCATOR_TYPE_DISTRIBUTED, and so tj1 >>> join tj2 would be wrongly marked shippable even though they should not >>> be shippable because of the mix of hash and modulo ? >>> >>> >> That's correct. This should be taken care by my second patch up for >> review. I think with that patch, we won't need LOCATOR_TYPE_DISTRIBUTED. >> While reviewing that patch, can you please also review if this is true. >> >> >>> ------- >>> >>> Is pgxc_is_expr_shippable(equi_join_expr) necessary ? Won't this qual >>> be examined in is_query_shippable() walker ? >>> >> >> This code will get executed in standard_planner() as well, so it's >> possible that some of the join quals will be shippable and some are not. >> While this is fine for an inner join, we want to make sure the a qual which >> implies collocation of rows is shippable. This check is more from future >> extension perspective than anything else. >> >> > > Ok. Understood all the comments above. 
> > >> >>> -------- >>> >>> If both tables reside on a single datanode, every join case should be >>> shippable, which doesn't seem to be happening : >>> postgres=# create table tab2 (id2 int, v varchar) distribute by >>> replication to node (datanode_1); >>> postgres=# create table tab1 (id1 int, v varchar) to node (datanode_1); >>> postgres=# explain select * from (tab1 full outer join tab2 on id1 = id2 >>> ) ; >>> QUERY PLAN >>> >>> ------------------------------------------------------------------------------------------------- >>> Hash Full Join (cost=0.12..0.26 rows=10 width=72) >>> Hash Cond: (tab1.id1 = tab2.id2) >>> -> Data Node Scan on tab1 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 >>> rows=1000 width=36) >>> Node/s: datanode_1 >>> -> Hash (cost=0.00..0.00 rows=1000 width=36) >>> -> Data Node Scan on tab2 "_REMOTE_TABLE_QUERY_" >>> (cost=0.00..0.00 rows=1000 width=36) >>> Node/s: datanode_1 >>> >>> Probably you need to take out the following statement out of the >>> distributed case and apply it as a general rule: >>> /* If there is only single node, try merging the nodes */ >>> if (list_length(inner_en->nodeList) == 1 && >>> list_length(outer_en->nodeList) == 1) >>> merge_nodes = true; >>> >>> >> I am thinking about this and actually thought that we should mark a >> single node ExecNodes as REPLICATED, so that it doesn't need any special >> handling. What do you think? >> > > I am concerned about loss of information that the underlying table is > actually distributed. Also, there is a function > IsReturningDMLOnReplicatedTable() which is using this information, although > not sure how much it's making use of that information. I leave that to you > for deciding which option to choose. I personally feel it's always good to > be explicit while checking for this condition. > > >> >> >>> >>> >>> -- >>> >>> Best Wishes, >>> >>> Ashutosh Bapat >>> >>> EntepriseDB Corporation >>> >>> The Enterprise Postgres Company >>> >> >>> >> >>> >> >>> >> >>> >> -- >>> >> Best Wishes, >>> >> Ashutosh Bapat >>> >> EntepriseDB Corporation >>> >> The Enterprise Postgres Company >>> > >>> > >>> > >>> > >>> > -- >>> > Best Wishes, >>> > Ashutosh Bapat >>> > EntepriseDB Corporation >>> > The Enterprise Postgres Company >>> > >>> > >>> ------------------------------------------------------------------------------ >>> > Free Next-Gen Firewall Hardware Offer >>> > Buy your Sophos next-gen firewall before the end March 2013 >>> > and get the hardware for free! Learn more. >>> > http://p.sf.net/sfu/sophos-d2d-feb >>> > _______________________________________________ >>> > Postgres-xc-developers mailing list >>> > Pos...@li... >>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >>> > >>> >> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EntepriseDB Corporation >> The Enterprise Postgres Company >> > > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Enterprise Postgres Company |
From: Amit K. <ami...@en...> - 2013-03-18 10:03:27
On 8 March 2013 14:00, Ashutosh Bapat <ash...@en...>wrote: > Hi Amit, > Please find my replies inlined, > > > >> I think the logic of shippability of outer joins is flawless. Didn't >> find any holes. Patch comments below : >> >> ------- >> >> In case of distributed equi-join case, why is >> IsExecNodesColumnDistributed() used instead of >> IsExecNodesDistributedByValue() ? We want to always rule out the round >> robin case, no ? I can see that pgxc_find_dist_equijoin_qual() will >> always fail for round robin tables because they won't have any distrib >> columns, but still , just curious ... >> >> > It keeps open the possibility that we will be able to ship equi-join if we > can somehow infer that the rows from both the sides of join, participating > in the result of join are collocated. > > >> ------- >> >> * PGXC_TODO: What do we do when baselocatortype is >> * LOCATOR_TYPE_DISTRIBUTED? It could be anything HASH >> distributed or >> * MODULO distributed. In that case, having equi-join >> doesn't work >> * really, because same value from different relation >> will go to >> * different node. >> >> The above comment says that it does not work if one of the tables is >> distributed by hash and other table is distributed by modulo. But the >> code is actually checking the baselocatortype also, so I guess it >> works correctly after all ? I did not get what is the TODO here. Or >> does it mean this ? : >> For (t1_hash join t2_hash on ...) tj1 join (t1_mod join t2_mod on ...) >> tj2 on tj1.col1 = tj2.col4 >> the merged nodes for tj1 will have LOCATOR_TYPE_DISTRIBUTED, and the >> merged nodes for tj2 will also be LOCATOR_TYPE_DISTRIBUTED, and so tj1 >> join tj2 would be wrongly marked shippable even though they should not >> be shippable because of the mix of hash and modulo ? >> >> > That's correct. This should be taken care by my second patch up for > review. I think with that patch, we won't need LOCATOR_TYPE_DISTRIBUTED. > While reviewing that patch, can you please also review if this is true. > > >> ------- >> >> Is pgxc_is_expr_shippable(equi_join_expr) necessary ? Won't this qual >> be examined in is_query_shippable() walker ? >> > > This code will get executed in standard_planner() as well, so it's > possible that some of the join quals will be shippable and some are not. > While this is fine for an inner join, we want to make sure the a qual which > implies collocation of rows is shippable. This check is more from future > extension perspective than anything else. > > Ok. Understood all the comments above. 
> >> -------- >> >> If both tables reside on a single datanode, every join case should be >> shippable, which doesn't seem to be happening : >> postgres=# create table tab2 (id2 int, v varchar) distribute by >> replication to node (datanode_1); >> postgres=# create table tab1 (id1 int, v varchar) to node (datanode_1); >> postgres=# explain select * from (tab1 full outer join tab2 on id1 = id2 >> ) ; >> QUERY PLAN >> >> ------------------------------------------------------------------------------------------------- >> Hash Full Join (cost=0.12..0.26 rows=10 width=72) >> Hash Cond: (tab1.id1 = tab2.id2) >> -> Data Node Scan on tab1 "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 >> rows=1000 width=36) >> Node/s: datanode_1 >> -> Hash (cost=0.00..0.00 rows=1000 width=36) >> -> Data Node Scan on tab2 "_REMOTE_TABLE_QUERY_" >> (cost=0.00..0.00 rows=1000 width=36) >> Node/s: datanode_1 >> >> Probably you need to take out the following statement out of the >> distributed case and apply it as a general rule: >> /* If there is only single node, try merging the nodes */ >> if (list_length(inner_en->nodeList) == 1 && >> list_length(outer_en->nodeList) == 1) >> merge_nodes = true; >> >> > I am thinking about this and actually thought that we should mark a single > node ExecNodes as REPLICATED, so that it doesn't need any special handling. > What do you think? > I am concerned about loss of information that the underlying table is actually distributed. Also, there is a function IsReturningDMLOnReplicatedTable() which is using this information, although not sure how much it's making use of that information. I leave that to you for deciding which option to choose. I personally feel it's always good to be explicit while checking for this condition. > > >> >> >>> -- >> >>> Best Wishes, >> >>> Ashutosh Bapat >> >>> EntepriseDB Corporation >> >>> The Enterprise Postgres Company >> >> >> >> >> >> >> >> >> >> -- >> >> Best Wishes, >> >> Ashutosh Bapat >> >> EntepriseDB Corporation >> >> The Enterprise Postgres Company >> > >> > >> > >> > >> > -- >> > Best Wishes, >> > Ashutosh Bapat >> > EntepriseDB Corporation >> > The Enterprise Postgres Company >> > >> > >> ------------------------------------------------------------------------------ >> > Free Next-Gen Firewall Hardware Offer >> > Buy your Sophos next-gen firewall before the end March 2013 >> > and get the hardware for free! Learn more. >> > http://p.sf.net/sfu/sophos-d2d-feb >> > _______________________________________________ >> > Postgres-xc-developers mailing list >> > Pos...@li... >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > >> > > > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Enterprise Postgres Company > |
From: Michael P. <mic...@gm...> - 2013-03-12 11:10:13
On Tue, Mar 12, 2013 at 7:56 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi All, > Support for materialised view in XC is going to be a problem when we will > pull this feature in XC. The problem is where should be storage of the > materialized view. There are two ways, we can store a materialised view - > a. make it coordinator only and store the materialized view result at the > coordinator. OR b. store it like any other table, replicated or > distributed. > > I am assuming that materialized views will be created for frequently > occurring queries, such that a single materialized view is capable of > serving whole query. In that case, having coordinator local storage would > improve the performance, since the query doesn't need any fetches from > datanode. We will need to create the infrastructure to have coordinator > local storage for user data. > > The second option doesn't look that attractive unless a materialized view > is being mis-used so that higher percentage of queries need joins with > other views or tables. I assume that materialized views should be replicated at each Coordinator with storage on Coordinator. Only a thought though... This would really improve performance on some query joins. Then, what about refresh? After that, doing a refresh on all the Coordinators within the same transaction could be problematic as each Coordinator would need to connect to each remote node, finishing with a dangerous state where multiple connections would be open on remote nodes for the same session. A refresh that runs only locally for each Coordinator is enough I think. -- Michael |
From: Ashutosh B. <ash...@en...> - 2013-03-12 10:56:58
Hi All,

Support for materialized views is going to be a problem when we pull this feature into XC. The question is where the materialized view should be stored. There are two options: a. make it coordinator-only and store the materialized view's contents at the coordinator, OR b. store it like any other table, replicated or distributed.

I am assuming that materialized views will be created for frequently occurring queries, such that a single materialized view can serve a whole query. In that case, coordinator-local storage would improve performance, since the query would not need any fetches from the datanodes. We would, however, need to build the infrastructure for coordinator-local storage of user data. The second option does not look that attractive unless a materialized view is being mis-used, so that a higher percentage of queries need to join it with other views or tables.

---------- Forwarded message ----------
From: Kevin Grittner <kg...@ym...>
Date: Sun, Mar 3, 2013 at 11:14 PM
Subject: [HACKERS] materialized views and FDWs
To: "pgs...@po..." <pgs...@po...>

In final testing and documentation today, it occurred to me to test a materialized view with a foreign data wrapper. I picked the file_fdw for convenience, but I think this should work as well with any other FDW. The idea is to create an MV which mirrors an FDW so that it can be indexed and quickly accessed. Timings below are all fully cached to minimize caching effects.

test=# create extension file_fdw;
CREATE EXTENSION
test=# create server local_file foreign data wrapper file_fdw;
CREATE SERVER
test=# create foreign table words (word text not null)
         server local_file options (filename '/etc/dictionaries-common/words');
CREATE FOREIGN TABLE
test=# create materialized view wrd as select * from words;
SELECT 99171
test=# create unique index wrd_word on wrd (word);
CREATE INDEX
test=# create extension pg_trgm;
CREATE EXTENSION
test=# create index wrd_trgm on wrd using gist (word gist_trgm_ops);
CREATE INDEX
test=# vacuum analyze wrd;
VACUUM
test=# select word from wrd order by word <-> 'caterpiler' limit 10;
     word
---------------
 cater
 caterpillar
 Caterpillar
 caterpillars
 caterpillar's
 Caterpillar's
 caterer
 caterer's
 caters
 catered
(10 rows)

test=# explain analyze select word from words order by word <-> 'caterpiler' limit 10;
                                                          QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=2195.70..2195.72 rows=10 width=32) (actual time=218.904..218.906 rows=10 loops=1)
   ->  Sort  (cost=2195.70..2237.61 rows=16765 width=32) (actual time=218.902..218.904 rows=10 loops=1)
         Sort Key: ((word <-> 'caterpiler'::text))
         Sort Method: top-N heapsort  Memory: 25kB
         ->  Foreign Scan on words  (cost=0.00..1833.41 rows=16765 width=32) (actual time=0.046..200.965 rows=99171 loops=1)
               Foreign File: /etc/dictionaries-common/words
               Foreign File Size: 938848
 Total runtime: 218.966 ms
(8 rows)

test=# set enable_indexscan = off;
test=# explain analyze select word from wrd order by word <-> 'caterpiler' limit 10;
                                                       QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
 Limit  (cost=3883.69..3883.71 rows=10 width=9) (actual time=203.819..203.821 rows=10 loops=1)
   ->  Sort  (cost=3883.69..4131.61 rows=99171 width=9) (actual time=203.818..203.818 rows=10 loops=1)
         Sort Key: ((word <-> 'caterpiler'::text))
         Sort Method: top-N heapsort  Memory: 25kB
         ->  Seq Scan on wrd  (cost=0.00..1740.64 rows=99171 width=9) (actual time=0.029..186.749 rows=99171 loops=1)
 Total runtime: 203.851 ms
(6 rows)

test=# reset enable_indexscan;
test=# explain analyze select word from wrd order by word <-> 'caterpiler' limit 10;
                                                            QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.28..1.02 rows=10 width=9) (actual time=24.916..25.079 rows=10 loops=1)
   ->  Index Scan using wrd_trgm on wrd  (cost=0.28..7383.70 rows=99171 width=9) (actual time=24.914..25.076 rows=10 loops=1)
         Order By: (word <-> 'caterpiler'::text)
 Total runtime: 25.884 ms
(4 rows)

Does this deserve specific treatment in the docs? Where?

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgs...@po...)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company |
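To get a concrete feel for option (b), storing the materialized view like any other table, the pattern can be approximated today with a plain replicated table that is refreshed by hand. This is only a sketch: the table names (wrd_mv, words) are illustrative, it relies on XC's existing DISTRIBUTE BY REPLICATION clause, and it is not a proposal for the eventual materialized-view syntax.

    -- emulate "option (b)" with an ordinary replicated table (names are made up)
    CREATE TABLE wrd_mv (word text NOT NULL) DISTRIBUTE BY REPLICATION;
    CREATE UNIQUE INDEX wrd_mv_word ON wrd_mv (word);

    -- "refresh": rebuild the contents in one transaction from the defining query
    BEGIN;
    TRUNCATE wrd_mv;
    INSERT INTO wrd_mv SELECT word FROM words;
    COMMIT;

Because every datanode holds a full copy, a query that the view can answer on its own never needs a cross-node join, which is the same property the coordinator-local option tries to obtain without the replication cost.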
From: Abbas B. <abb...@en...> - 2013-03-10 14:59:39
|
Hi,

Attached please find a patch that adds support in pg_dump to dump nodes and node groups. This is required while adding a new node to the cluster.

--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: 92-334-5100153
Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb |
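Concretely, node and node group definitions live in the pgxc_node and pgxc_group catalogs, so what such a dump has to reproduce is DDL along the following lines. The node names, hosts and ports below are invented, and the exact pg_dump option and output format are whatever the attached patch implements; this is just the kind of statements the catalogs translate into.

    CREATE NODE coord_2 WITH (TYPE = 'coordinator', HOST = 'server4', PORT = 5432);
    CREATE NODE datanode_3 WITH (TYPE = 'datanode', HOST = 'server4', PORT = 15432);
    CREATE NODE GROUP batch_group WITH (datanode_1, datanode_2, datanode_3);

With that in the dump, a freshly added coordinator can be primed with the cluster topology before user schemas and data are restored onto it.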
From: Michael P. <mic...@gm...> - 2013-03-08 10:32:27
|
On Fri, Mar 8, 2013 at 5:09 PM, Nikhil Sontakke <ni...@st...> wrote:
> I use a simple 'psql -c "\x"' query to monitor coordinator/datanodes.
> The psql call ensures that the connection protocol is followed and
> accepted by that node. It then does an innocuous activity on the psql
> side before exiting. Works well for me.

+1.
--
Michael |
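As a sketch, the probe Nikhil describes boils down to the following; the host, port and database are placeholders, and it assumes the monitoring role can connect without a password prompt.

    $ psql -h datanode_host -p 15432 -d postgres -c '\x'
    Expanded display is on.
    $ echo $?
    0

A zero exit status means the node completed the connection handshake and executed the metacommand; a non-zero status (with a connection error on stderr) is what an HA script would treat as the node being down.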
From: Amit K. <ami...@en...> - 2013-03-08 10:04:00
|
On 6 March 2013 15:20, Abbas Butt <abb...@en...> wrote: > > > On Fri, Mar 1, 2013 at 5:48 PM, Amit Khandekar < > ami...@en...> wrote: > >> On 19 February 2013 12:37, Abbas Butt <abb...@en...> >> wrote: >> > >> > Hi, >> > Attached please find a patch that locks the cluster so that dump can be >> > taken to be restored on the new node to be added. >> > >> > To lock the cluster the patch adds a new GUC parameter called >> > xc_lock_for_backup, however its status is maintained by the pooler. The >> > reason is that the default behavior of XC is to release connections as >> soon >> > as a command is done and it uses PersistentConnections GUC to control >> the >> > behavior. We in this case however need a status that is independent of >> the >> > setting of PersistentConnections. >> > >> > Assume we have two coordinator cluster, the patch provides this >> behavior: >> > >> > Case 1: set and show >> > ==================== >> > psql test -p 5432 >> > set xc_lock_for_backup=yes; >> > show xc_lock_for_backup; >> > xc_lock_for_backup >> > -------------------- >> > yes >> > (1 row) >> > >> > Case 2: set from one client show from other >> > ================================== >> > psql test -p 5432 >> > set xc_lock_for_backup=yes; >> > (From another tab) >> > psql test -p 5432 >> > show xc_lock_for_backup; >> > xc_lock_for_backup >> > -------------------- >> > yes >> > (1 row) >> > >> > Case 3: set from one, quit it, run again and show >> > ====================================== >> > psql test -p 5432 >> > set xc_lock_for_backup=yes; >> > \q >> > psql test -p 5432 >> > show xc_lock_for_backup; >> > xc_lock_for_backup >> > -------------------- >> > yes >> > (1 row) >> > >> > Case 4: set on one coordinator, show from other >> > ===================================== >> > psql test -p 5432 >> > set xc_lock_for_backup=yes; >> > (From another tab) >> > psql test -p 5433 >> > show xc_lock_for_backup; >> > xc_lock_for_backup >> > -------------------- >> > yes >> > (1 row) >> > >> > pg_dump and pg_dumpall seem to work fine after locking the cluster for >> > backup but I would test these utilities in detail next. >> > >> > Also I have yet to look in detail that standard_ProcessUtility is the >> only >> > place that updates the portion of catalog that is dumped. There may be >> some >> > other places too that need to be blocked for catalog updates. >> > >> > The patch adds no extra warnings and regression shows no extra failure. >> > >> > Comments are welcome. >> >> Abbas wrote on another thread: >> >> > Amit wrote on another thread: >> >> I haven't given a thought on the earlier patch you sent for cluster >> lock >> >> implementation; may be we can discuss this on that thread, but just a >> quick >> >> question: >> >> >> >> Does the cluster-lock command wait for the ongoing DDL commands to >> finish >> >> ? If not, we have problems. The subsequent pg_dump would not contain >> objects >> >> created by these particular DDLs. >> > >> > >> > Suppose you have a two coordinator cluster. Assume one client connected >> to >> > each. Suppose one client issues a lock cluster command and the other >> issues >> > a DDL. Is this what you mean by an ongoing DDL? If true then answer to >> your >> > question is Yes. >> > >> > Suppose you have a prepared transaction that has a DDL in it, again if >> this >> > can be considered an on going DDL, then again answer to your question is >> > Yes. >> > >> > Suppose you have a two coordinator cluster. Assume one client connected >> to >> > each. 
One client starts a transaction and issues a DDL, the second >> client >> > issues a lock cluster command, the first commits the transaction. If >> this is >> > an ongoing DDL, then the answer to your question is No. >> >> Yes this last scenario is what I meant: A DDL has been executed on nodes, >> but >> not committed, when the cluster lock command is run and then pg_dump >> immediately >> starts its transaction before the DDL is committed. Here pg_dump does >> not see the new objects that would be created. >> >> I myself am not sure how would we prevent this from happening. There >> are two callback hooks that might be worth considering though: >> 1. Transaction End callback (CallXactCallbacks) >> 2. Object creation/drop hook (InvokeObjectAccessHook) >> >> Suppose we create an object creation/drop hook function that would : >> 1. store the current transaction id in a global objects_created list >> if the cluster is not locked, >> 2. or else if the cluster is locked, this hook would ereport() saying >> "cannot create catalog objects in this mode". >> >> And then during transaction commit , a new transaction callback hook will: >> 1. Check the above objects_created list to see if the current >> transaction has any objects created/dropped. >> 2. If found and if the cluster-lock is on, it will again ereport() >> saying "cannot create catalog objects in this mode" >> >> Thinking more on the object creation hook, we can even consider this >> as a substitute for checking the cluster-lock status in >> standardProcessUtility(). But I am not sure whether this hook does get >> called on each of the catalog objects. At least the code comments say >> it does. >> > > Thanks for the ideas, here is how I handled the problem of ongoing DDLs. > > 1. Online node addition feature requires that each transaction > should be monitored for any activity that would be prohibited > if the cluster is locked before the transaction commit. > This obviously adds some overhead in each transaction. > If the database administrator is sure that the deployed > cluster would never require online addition of nodes > OR the database administrator decides that node addition > will be done by bringing the cluster down then a > command line parameter "disable-online-node-addition" > can be used to disable transaction monitoring for online node addition > By default on line addition of nodes will be available. > Is this overhead because you do pooler communication during commit ? If so, yes, that is a overhead. In other reply, you said, we have to keep the lock across the sessions; if we leave that session, the lock goes away, so we would have the restriction that everything else should be run in the same session. So if we acquire a session lock in pg_dump itself, would that solve the problem ? 2. Suppose we have a two coordinator cluster CO1 and CO2 > Assume one client connected to each coordinator. > Further assume one client starts a transaction > and issues a DDL. This is an unfinished transaction. > Now assume the second client issues > SET xc_lock_for_backup=yes > The commit on the unfinished transaction should now > fail. To handle this situation we monitor each > transaction for any activity that would be prohibited > if the cluster is locked before transaction commit. > At the time of commit we check that if the transaction > had issued a prohibited statement and now the cluster > has been locked, we abort the commit. 
> This is done only if online addition of nodes has not > been disabled explicitly and the server is not running > in bootstrap mode. > > Does the object access hook seem to be a feasible option for keeping track of unfinished DDLs ? If this is feasible, we don't have to prohibit according to wihch DDL is being run. -- > 3. I did not use CallXactCallbacks because the comment in > CommitTransaction reads > * This is all post-commit cleanup. Note that if an error is raised > here, > * it's too late to abort the transaction. This should be just > * noncritical resource releasing. > Yes, you are right. The transaction has already been committed when this callback gets invoked. > I have attached the revised patch with detailed comments. > > >> >> >> > But its a matter of >> > deciding which camp are we going to put COMMIT in, the allow camp, or >> the >> > deny camp. I decided to put it in allow camp, because I have not yet >> written >> > any code to detect whether a transaction being committed has a DDL in >> it or >> > not, and stopping all transactions from committing looks too >> restrictive to >> > me. >> >> >> > >> > Do you have some other meaning of an ongoing DDL? >> >> >> >> > >> > -- >> > Abbas >> > Architect >> > EnterpriseDB Corporation >> > The Enterprise PostgreSQL Company >> > >> > Phone: 92-334-5100153 >> > >> > Website: www.enterprisedb.com >> > EnterpriseDB Blog: http://blogs.enterprisedb.com/ >> > Follow us on Twitter: http://www.twitter.com/enterprisedb >> > >> > This e-mail message (and any attachment) is intended for the use of >> > the individual or entity to whom it is addressed. This message >> > contains information from EnterpriseDB Corporation that may be >> > privileged, confidential, or exempt from disclosure under applicable >> > law. If you are not the intended recipient or authorized to receive >> > this for the intended recipient, any use, dissemination, distribution, >> > retention, archiving, or copying of this communication is strictly >> > prohibited. If you have received this e-mail in error, please notify >> > the sender immediately by reply e-mail and delete this message. >> > >> > >> ------------------------------------------------------------------------------ >> > Everyone hates slow websites. So do we. >> > Make your web apps faster with AppDynamics >> > Download AppDynamics Lite for free today: >> > http://p.sf.net/sfu/appdyn_d2d_feb >> > _______________________________________________ >> > Postgres-xc-developers mailing list >> > Pos...@li... >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> > >> > > > > -- > -- > Abbas > Architect > EnterpriseDB Corporation > The Enterprise PostgreSQL Company > > Phone: 92-334-5100153 > > Website: www.enterprisedb.com > EnterpriseDB Blog: http://blogs.enterprisedb.com/ > Follow us on Twitter: http://www.twitter.com/enterprisedb > > This e-mail message (and any attachment) is intended for the use of > the individual or entity to whom it is addressed. This message > contains information from EnterpriseDB Corporation that may be > privileged, confidential, or exempt from disclosure under applicable > law. If you are not the intended recipient or authorized to receive > this for the intended recipient, any use, dissemination, distribution, > retention, archiving, or copying of this communication is strictly > prohibited. 
If you have received this e-mail in error, please notify > the sender immediately by reply e-mail and delete this message. > |
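If the commit-time check behaves as Abbas describes in point 2 above, the two-coordinator scenario should play out roughly as below. This is a sketch only: the error message text is invented for illustration, and the actual wording is whatever the patch emits.

    -- session 1, coordinator CO1
    psql test -p 5432
    begin;
    create table t1 (a int);      -- DDL in a not-yet-committed transaction

    -- session 2, coordinator CO2
    psql test -p 5433
    set xc_lock_for_backup=yes;

    -- back in session 1: the commit should now be refused
    commit;
    ERROR:  cannot commit a transaction that ran DDL after the cluster was locked for backup
            -- message text invented for illustration

A transaction that only ran DML should still commit normally under the lock, since pg_dump only needs the catalog portion of the cluster to stay frozen, which is why the patch tracks prohibited statements per transaction instead of blocking all commits.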
From: Ashutosh B. <ash...@en...> - 2013-03-08 10:00:56
|
Hi Amit, Please find my replies inlined, > I think the logic of shippability of outer joins is flawless. Didn't > find any holes. Patch comments below : > > ------- > > In case of distributed equi-join case, why is > IsExecNodesColumnDistributed() used instead of > IsExecNodesDistributedByValue() ? We want to always rule out the round > robin case, no ? I can see that pgxc_find_dist_equijoin_qual() will > always fail for round robin tables because they won't have any distrib > columns, but still , just curious ... > > It keeps open the possibility that we will be able to ship equi-join if we can somehow infer that the rows from both the sides of join, participating in the result of join are collocated. > ------- > > * PGXC_TODO: What do we do when baselocatortype is > * LOCATOR_TYPE_DISTRIBUTED? It could be anything HASH > distributed or > * MODULO distributed. In that case, having equi-join > doesn't work > * really, because same value from different relation will > go to > * different node. > > The above comment says that it does not work if one of the tables is > distributed by hash and other table is distributed by modulo. But the > code is actually checking the baselocatortype also, so I guess it > works correctly after all ? I did not get what is the TODO here. Or > does it mean this ? : > For (t1_hash join t2_hash on ...) tj1 join (t1_mod join t2_mod on ...) > tj2 on tj1.col1 = tj2.col4 > the merged nodes for tj1 will have LOCATOR_TYPE_DISTRIBUTED, and the > merged nodes for tj2 will also be LOCATOR_TYPE_DISTRIBUTED, and so tj1 > join tj2 would be wrongly marked shippable even though they should not > be shippable because of the mix of hash and modulo ? > > That's correct. This should be taken care by my second patch up for review. I think with that patch, we won't need LOCATOR_TYPE_DISTRIBUTED. While reviewing that patch, can you please also review if this is true. > ------- > > Is pgxc_is_expr_shippable(equi_join_expr) necessary ? Won't this qual > be examined in is_query_shippable() walker ? > This code will get executed in standard_planner() as well, so it's possible that some of the join quals will be shippable and some are not. While this is fine for an inner join, we want to make sure the a qual which implies collocation of rows is shippable. This check is more from future extension perspective than anything else. 
> --------
>
> If both tables reside on a single datanode, every join case should be
> shippable, which doesn't seem to be happening:
>
> postgres=# create table tab2 (id2 int, v varchar) distribute by replication to node (datanode_1);
> postgres=# create table tab1 (id1 int, v varchar) to node (datanode_1);
> postgres=# explain select * from (tab1 full outer join tab2 on id1 = id2);
>                                             QUERY PLAN
> -------------------------------------------------------------------------------------------------
>  Hash Full Join  (cost=0.12..0.26 rows=10 width=72)
>    Hash Cond: (tab1.id1 = tab2.id2)
>    ->  Data Node Scan on tab1 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=36)
>          Node/s: datanode_1
>    ->  Hash  (cost=0.00..0.00 rows=1000 width=36)
>          ->  Data Node Scan on tab2 "_REMOTE_TABLE_QUERY_"  (cost=0.00..0.00 rows=1000 width=36)
>                Node/s: datanode_1
>
> Probably you need to take the following statement out of the
> distributed case and apply it as a general rule:
>
>     /* If there is only single node, try merging the nodes */
>     if (list_length(inner_en->nodeList) == 1 &&
>         list_length(outer_en->nodeList) == 1)
>         merge_nodes = true;
>

I am thinking about this and actually thought that we should mark a single-node ExecNodes as REPLICATED, so that it doesn't need any special handling. What do you think?

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company |
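To make the collocation argument concrete, here is the kind of pair of cases the shippability logic has to distinguish. The table names are invented, and the expected behaviour stated in the comments is what the rules above imply, not captured planner output.

    -- both sides hash-distributed on the join column: matching rows land on the
    -- same datanode, so the equi-join itself can be shipped
    create table t1 (id1 int, v varchar) distribute by hash (id1);
    create table t2 (id2 int, v varchar) distribute by hash (id2);
    explain select * from t1 join t2 on id1 = id2;    -- expect a single pushed-down remote query

    -- joining on non-distribution columns gives no collocation guarantee,
    -- so the join has to be evaluated above the datanode scans
    explain select * from t1 join t2 on t1.v = t2.v;  -- expect separate scans joined locally

The hash-versus-modulo mix that the PGXC_TODO comment worries about is exactly the case where the first pattern must not be treated as shippable, since the same value can hash and modulo to different nodes.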
From: Abbas B. <abb...@en...> - 2013-03-08 09:55:39
|
Thanks, I got it to work.

On Fri, Mar 8, 2013 at 1:40 PM, Koichi Suzuki <koi...@gm...> wrote:
> I found that the documentation does not reflect the change. I visited
> the code and found they're implemented.
>
> Could you take a look at gram.y?
>
> We need to revise the document to include all these changes.
>
> Regards;
> ----------
> Koichi Suzuki
>
> 2013/3/8 Abbas Butt <abb...@en...>:
> > Hi,
> > ALTER TABLE REDISTRIBUTE does not support the TO NODE clause:
> > How would we redistribute data after e.g. adding a node?
> > OR
> > How would we redistribute the data before removing a node?
> >
> > I think this functionality will have to be added to the system to
> > complete the whole picture.

--
Abbas
Architect
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Phone: 92-334-5100153
Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb |
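Assuming the grammar Koichi points at in gram.y mirrors the CREATE TABLE distribution clauses, redistribution around a node addition or removal would look roughly like this. The table and node names are invented, and the exact clause spellings should be confirmed against gram.y before relying on them.

    -- after CREATE NODE datanode_3 ..., spread an existing table onto the new node
    ALTER TABLE orders ADD NODE (datanode_3);

    -- or restate the distribution and the full target node list in one statement
    ALTER TABLE orders DISTRIBUTE BY HASH (order_id) TO NODE (datanode_1, datanode_2, datanode_3);

    -- before dropping a node, pull the table's data off it first
    ALTER TABLE orders DELETE NODE (datanode_3);

That the clauses exist in the parser but not in the ALTER TABLE documentation is the gap this thread is really about.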
From: Nikhil S. <ni...@st...> - 2013-03-08 09:23:15
|
> Does it work correctly if gtm/gtm_proxy is not running?

Yeah, it does. I faced the same issue: if gtm is down, the call would error out and the HA infrastructure would wrongly assume that this node is down and fail it over. With this simple psql call all of that is avoided.

Regards,
Nikhils

> I found PQping is lighter and easier to use, which is a dedicated API to check
> if the server is running. It is independent from users/databases and
> does not require any password. It just checks that the target is working.
>
> I think this is more flexible to be used in various setups.
>
> Regards;
> ----------
> Koichi Suzuki
>
> 2013/3/8 Nikhil Sontakke <ni...@st...>:
>> I use a simple 'psql -c "\x"' query to monitor coordinator/datanodes.
>> The psql call ensures that the connection protocol is followed and
>> accepted by that node. It then does an innocuous activity on the psql
>> side before exiting. Works well for me.
>>
>> Regards,
>> Nikhils
>>
>> On Fri, Mar 8, 2013 at 12:48 PM, Koichi Suzuki <koi...@gm...> wrote:
>>> Okay, here's a patch which uses PQping. This is new to 9.1 and is
>>> extremely simple and matches my needs.
>>>
>>> Regards;
>>> ----------
>>> Koichi Suzuki
>>>
>>> 2013/3/8 Michael Paquier <mic...@gm...>:
>>>> On Fri, Mar 8, 2013 at 12:13 PM, Koichi Suzuki <koi...@gm...> wrote:
>>>>> Because 9.3 merge will not be done in 1.1, I don't think it's feasible
>>>>> at present. Second means will be to use PQ* functions. Anyway,
>>>>> this will be provided by pgxc_monitor. May be a good idea to use
>>>>> custom background, but this could be too much because the requirement
>>>>> is very small.
>>>>
>>>> In this case use something like PQping or similar, but simply do not involve
>>>> core. There would be underlying performance impact for sure.
>>>> --
>>>> Michael

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service |