From: 鈴木 幸市 <ko...@in...> - 2014-06-18 00:42:55
|
Could you try “reconnect gtm_proxy”? --- Koichi Suzuki 2014/06/17 22:25、Bruno Cezar Oliveira <bru...@ho...<mailto:bru...@ho...>> のメール: Hi Koichi Suzuki, i'm using the pgxc_ctl but how the gtm-proxy can commute to gtm standby automatic, without a manual intervention? Whem the gtm-proxy have a timeout more then 5 seconds, for exemple, the gtm-proxy commute automatically to the gtm standby. The gtm-proxy can do this? Or i need to create a script to do this? Thanks for the answer! 2014-06-16 21:47 GMT-03:00 鈴木 幸市 <ko...@in...<mailto:ko...@in...>>: pgxc_ctl provides automated process for this. You have to configure XC cluster with pgxc_ctl though. In this case, although, you need to promote GTM standby and reconnect gtm_proxy with separate pgxc_ctl command. pgxc_ctl source material is under contrib directory. Reference and tutorial are available at http://postgres-xc.sourceforge.net/docs/1_2_1/pgxc-ctl.html and https://sourceforge.net/projects/postgres-xc/files/Pgxc_ctl_primer/ respectively. Regards; --- Koichi Suzuki 2014/06/17 3:52、Bruno Cezar Oliveira <bru...@ho...<mailto:bru...@ho...>> のメール: Hi! I'm using this tutorial to simulate a HA environment with Postgres-xc. http://postgresxc.wikia.com/wiki/GTM_Standby_Configuration I did the gtm down and gtm-standby going up. The problem is that is a manual process, i had to run a command in all gtm-proxy and gtm standby. How i can make this process automatic? Thank you! -- Att, Bruno Cezar de Oliveira. bru...@ho...<mailto:bru...@ho...> ------------------------------------------------------- Uberlândia - MG. ------------------------------------------------------------------------------ HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions Find What Matters Most in Your Big Data with HPCC Systems Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. Leverages Graph Analysis for Fast Processing & Easy Data Exploration http://p.sf.net/sfu/hpccsystems_______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- Att, Bruno Cezar de Oliveira. bru...@ho...<mailto:bru...@ho...> ------------------------------------------------------- Uberlândia - MG. |
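For anyone wanting the scripted route Bruno asks about, the two pgxc_ctl steps Koichi describes (promote the GTM standby, then reconnect every gtm_proxy) can be driven by a small watchdog. The sketch below is only illustrative: it assumes the "failover gtm" and "reconnect gtm_proxy" commands documented in the pgxc-ctl reference linked in this thread, and the GTM host, port, and probe interval are placeholders for your own cluster.

    #!/bin/sh
    # Hypothetical GTM watchdog: promote the standby and re-point the proxies
    # when the GTM master stops answering. A sketch, not a tested implementation.
    GTM_HOST=gtm-master.example   # placeholder
    GTM_PORT=6666                 # placeholder
    while sleep 5; do
        # a plain TCP probe is enough to notice that the GTM port stopped answering
        if ! nc -z -w 5 "$GTM_HOST" "$GTM_PORT"; then
            echo "GTM master unreachable; failing over" >&2
            pgxc_ctl failover gtm              # promote the GTM standby
            pgxc_ctl reconnect gtm_proxy all   # make every gtm_proxy follow it
            break
        fi
    done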
From: Aaron J. <aja...@re...> - 2014-06-17 18:46:54
|
So, it would appear that after applying this patch, pg_dump works the first time. However, I prematurely terminated pg_dump (SIGINT on the terminal) and subsequently issued a count on one of my tables. The query hung and eventually returned with “WARNING: Xid is invalid.” – the call to psql never actually finished. I ended up restarting the gtm_proxy, data node and coordinator on host 1. I think at this point, I’ll do a dump and rebuild the cluster since it seems unstable. Aaron From: 鈴木 幸市 [mailto:ko...@in...] Sent: Monday, June 16, 2014 8:05 PM To: Aaron Jackson Cc: Postgres-XC mailing list Subject: Re: [Postgres-xc-general] Problem with taking database dump with pgxc-1.2.1 Yeah, it’s an XC bug. Because of the sequence handling difference from PG, the transaction should not be read-only. Please try the attached patch. It will go to 1.2 and master tree if it works. Hope it helps. Regards; --- Koichi Suzuki 2014/06/17 4:25、Aaron Jackson <aja...@re...<mailto:aja...@re...>> のメール: So, I tried the same thing and applied this patch. I even verified that it was setting the transaction mode to "serializable, read only, deferrable" - yet the problem persists. I believe my execution is similar to the OP. pg_dump -d analysisdb --serializable-deferrable pg_dump: [archiver (db)] query failed: ERROR: cannot execute nextval() in a read-only transaction pg_dump: [archiver (db)] query was: SELECT pg_catalog.nextval('customtype_customtypeid_seq'); Did I miss something else? Aaron ________________________________ From: Juned Khan [jkh...@gm...<mailto:jkh...@gm...>] Sent: Friday, April 11, 2014 2:05 AM To: Pavan Deolasee Cc: Postgres-XC mailing list Subject: Re: [Postgres-xc-general] Problem with taking database dump with pgxc-1.2.1 Thanks to Pavan and Koichi for your inputs. So I think as of now I do not need to worry about this On Fri, Apr 11, 2014 at 12:31 PM, Pavan Deolasee <pav...@gm...<mailto:pav...@gm...>> wrote: On Fri, Apr 11, 2014 at 12:07 PM, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote: As of now this fix worked fine for me. Is there any side effect of this patch? IMV there won't be any significant side effect of the patch. pg_dump used to run in RW mode till we fixed it in 9.3. If it has worked for so many years, it may work for some more time too :-) Marking pg_dump READ ONLY was exactly to catch bugs like these. But other than that, I think it's OK. Thanks, Pavan -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com<http://www.inextrix.com/> ------------------------------------------------------------------------------ HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions Find What Matters Most in Your Big Data with HPCC Systems Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. Leverages Graph Analysis for Fast Processing & Easy Data Exploration http://p.sf.net/sfu/hpccsystems_______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
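The error Aaron keeps hitting comes from the READ ONLY attribute pg_dump places on its snapshot transaction, while XC's sequence handling issues nextval() inside that same transaction. The read-only restriction itself can be reproduced straight from psql, which is a quick way to confirm whether a patched build really drops the READ ONLY attribute; the database and sequence names below are simply the ones from Aaron's log.

    # Reproduce the read-only restriction outside pg_dump:
    psql -d analysisdb -c "BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
    SELECT pg_catalog.nextval('customtype_customtypeid_seq');"
    # Expected: ERROR: cannot execute nextval() in a read-only transaction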
From: Bruno C. O. <bru...@ho...> - 2014-06-17 13:25:26
|
Hi Koichi Suzuki, i'm using the pgxc_ctl but how the gtm-proxy can commute to gtm standby automatic, without a manual intervention? Whem the gtm-proxy have a timeout more then 5 seconds, for exemple, the gtm-proxy commute automatically to the gtm standby. The gtm-proxy can do this? Or i need to create a script to do this? Thanks for the answer! 2014-06-16 21:47 GMT-03:00 鈴木 幸市 <ko...@in...>: > pgxc_ctl provides automated process for this. You have to configure XC > cluster with pgxc_ctl though. In this case, although, you need to promote > GTM standby and reconnect gtm_proxy with separate pgxc_ctl command. > > pgxc_ctl source material is under contrib directory. Reference and > tutorial are available at > http://postgres-xc.sourceforge.net/docs/1_2_1/pgxc-ctl.html and > https://sourceforge.net/projects/postgres-xc/files/Pgxc_ctl_primer/ > respectively. > > Regards; > --- > Koichi Suzuki > > 2014/06/17 3:52、Bruno Cezar Oliveira <bru...@ho...> のメール: > > Hi! > > I'm using this tutorial to simulate a HA environment with Postgres-xc. > http://postgresxc.wikia.com/wiki/GTM_Standby_Configuration > > I did the gtm down and gtm-standby going up. > The problem is that is a manual process, i had to run a command in all > gtm-proxy and gtm standby. > How i can make this process automatic? > > Thank you! > > -- > Att, > Bruno Cezar de Oliveira. > bru...@ho... > ------------------------------------------------------- > Uberlândia - MG. > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > > http://p.sf.net/sfu/hpccsystems_______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > -- Att, Bruno Cezar de Oliveira. bru...@ho... ------------------------------------------------------- Uberlândia - MG. |
From: Masataka S. <pg...@gm...> - 2014-06-17 08:42:26
|
On 17 June 2014 17:07, 鈴木 幸市 <ko...@in...> wrote: > This came in 1.2. pg_dump runs as a single transaction so it looks okay. No, the isolation level is changed even before 1.1. The location of statement block is moved from main to setup_connection and appended READ ONLY attribute, but REPEATABLE READ or SERIALIZABLE level has been used. I thought it intend to protect from other concurrent transaction, and removing it brings something wrong. Are we ever seeing such error after Pavan's patch? He get rid of "READ ONLY" from isolation level if the remote version is equal or higher than PG9.1 (and XC 1.0 is based on PG9.1.) > --- > Koichi Suzuki > > 2014/06/17 16:31、Masataka Saito <pg...@gm...> のメール: > >> On 17 June 2014 10:05, 鈴木 幸市 <ko...@in...> wrote: >>> Year, it’s XC bug. Because of the sequence handling difference from PG, >>> transaction should not be read-only. Please try the attached patch. It >>> will go to 1.2 and master tree if it works. >> >> Your patch looks skipping to set transaction isolation level. Is it OK >> to continue with READ COMMITTED transaction? >> >>> >>> Hope it helps. >>> >>> Regards; >>> --- >>> Koichi Suzuki >>> >>> 2014/06/17 4:25〠Aaron Jackson <aja...@re...> 㠮メール: >>> >>> So, I tried the same thing and applied this patch. I even verified that it >>> was setting the transaction mode to "serializable, read only, deferrable" - >>> yet the problem persists. I believe my execution is similar to the OP. >>> >>> pg_dump -d analysisdb --serializable-deferrable >>> >>> pg_dump: [archiver (db)] query failed: ERROR: cannot execute nextval() in a >>> read-only transaction >>> pg_dump: [archiver (db)] query was: SELECT >>> pg_catalog.nextval('customtype_customtypeid_seq'); >>> >>> Did I miss something else? >>> >>> Aaron >>> >>> ________________________________ >>> From: Juned Khan [jkh...@gm...] >>> Sent: Friday, April 11, 2014 2:05 AM >>> To: Pavan Deolasee >>> Cc: Postgres-XC mailing list >>> Subject: Re: [Postgres-xc-general] Problem with taking database dump with >>> pgxc-1.2.1 >>> >>> Thanks to Pavan and Koichi for your inputs. >>> >>> So i think as of now i do not need to worry about >>> this >>> >>> >>> >>> >>> On Fri, Apr 11, 2014 at 12:31 PM, Pavan Deolasee <pav...@gm...> >>> wrote: >>> >>> On Fri, Apr 11, 2014 at 12:07 PM, Juned Khan <jkh...@gm...> wrote: >>> >>> As of now this fix worked fine for me. is there any side effect of this >>> patch ? >>> >>> >>> IMV there won't be any significant side effect of the patch. pg_dump used to >>> run in RW mode till we fixed it in 9.3. If it has worked for so many years, >>> it may work for some more time too :-) Marking pg_dump READ ONLY was exactly >>> to catch bugs like these. But other than that, I think its OK. >>> >>> Thanks, >>> Pavan >>> >>> -- >>> Pavan Deolasee >>> http://www.linkedin.com/in/pavandeolasee >>> >>> >>> >>> >>> -- >>> Thanks, >>> Juned Khan >>> iNextrix Technologies Pvt Ltd. >>> www.inextrix.com >>> ------------------------------------------------------------------------------ >>> HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions >>> Find What Matters Most in Your Big Data with HPCC Systems >>> Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. >>> Leverages Graph Analysis for Fast Processing & Easy Data Exploration >>> http://p.sf.net/sfu/hpccsystems_______________________________________________ >>> >>> Postgres-xc-general mailing list >>> Pos...@li... 
>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions >>> Find What Matters Most in Your Big Data with HPCC Systems >>> Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. >>> Leverages Graph Analysis for Fast Processing & Easy Data Exploration >>> http://p.sf.net/sfu/hpccsystems >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >> > |
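Since the question here is which transaction characteristics the dump session really ends up with, that is easy to inspect from a plain psql session. A small sketch follows; the database name is reused from the thread, and REPEATABLE READ merely stands in for whatever level setup_connection chooses.

    psql -d analysisdb <<'SQL'
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SHOW transaction_isolation;
    SHOW transaction_read_only;
    ROLLBACK;
    SQL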
From: 鈴木 幸市 <ko...@in...> - 2014-06-17 08:07:36
|
This came in 1.2. pg_dump runs as a single transaction so it looks okay. --- Koichi Suzuki 2014/06/17 16:31、Masataka Saito <pg...@gm...> のメール: > On 17 June 2014 10:05, 鈴木 幸市 <ko...@in...> wrote: >> Year, it’s XC bug. Because of the sequence handling difference from PG, >> transaction should not be read-only. Please try the attached patch. It >> will go to 1.2 and master tree if it works. > > Your patch looks skipping to set transaction isolation level. Is it OK > to continue with READ COMMITTED transaction? > >> >> Hope it helps. >> >> Regards; >> --- >> Koichi Suzuki >> >> 2014/06/17 4:25ã€Aaron Jackson <aja...@re...> ã®ãƒ¡ãƒ¼ãƒ«ï¼š >> >> So, I tried the same thing and applied this patch. I even verified that it >> was setting the transaction mode to "serializable, read only, deferrable" - >> yet the problem persists. I believe my execution is similar to the OP. >> >> pg_dump -d analysisdb --serializable-deferrable >> >> pg_dump: [archiver (db)] query failed: ERROR: cannot execute nextval() in a >> read-only transaction >> pg_dump: [archiver (db)] query was: SELECT >> pg_catalog.nextval('customtype_customtypeid_seq'); >> >> Did I miss something else? >> >> Aaron >> >> ________________________________ >> From: Juned Khan [jkh...@gm...] >> Sent: Friday, April 11, 2014 2:05 AM >> To: Pavan Deolasee >> Cc: Postgres-XC mailing list >> Subject: Re: [Postgres-xc-general] Problem with taking database dump with >> pgxc-1.2.1 >> >> Thanks to Pavan and Koichi for your inputs. >> >> So i think as of now i do not need to worry about >> this >> >> >> >> >> On Fri, Apr 11, 2014 at 12:31 PM, Pavan Deolasee <pav...@gm...> >> wrote: >> >> On Fri, Apr 11, 2014 at 12:07 PM, Juned Khan <jkh...@gm...> wrote: >> >> As of now this fix worked fine for me. is there any side effect of this >> patch ? >> >> >> IMV there won't be any significant side effect of the patch. pg_dump used to >> run in RW mode till we fixed it in 9.3. If it has worked for so many years, >> it may work for some more time too :-) Marking pg_dump READ ONLY was exactly >> to catch bugs like these. But other than that, I think its OK. >> >> Thanks, >> Pavan >> >> -- >> Pavan Deolasee >> http://www.linkedin.com/in/pavandeolasee >> >> >> >> >> -- >> Thanks, >> Juned Khan >> iNextrix Technologies Pvt Ltd. >> www.inextrix.com >> ------------------------------------------------------------------------------ >> HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions >> Find What Matters Most in Your Big Data with HPCC Systems >> Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. >> Leverages Graph Analysis for Fast Processing & Easy Data Exploration >> http://p.sf.net/sfu/hpccsystems_______________________________________________ >> >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> >> >> ------------------------------------------------------------------------------ >> HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions >> Find What Matters Most in Your Big Data with HPCC Systems >> Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. >> Leverages Graph Analysis for Fast Processing & Easy Data Exploration >> http://p.sf.net/sfu/hpccsystems >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > |
From: Masataka S. <pg...@gm...> - 2014-06-17 07:31:56
|
On 17 June 2014 10:05, 鈴木 幸市 <ko...@in...> wrote: > Year, it’s XC bug. Because of the sequence handling difference from PG, > transaction should not be read-only. Please try the attached patch. It > will go to 1.2 and master tree if it works. Your patch looks skipping to set transaction isolation level. Is it OK to continue with READ COMMITTED transaction? > > Hope it helps. > > Regards; > --- > Koichi Suzuki > > 2014/06/17 4:25ã€Aaron Jackson <aja...@re...> ã®ãƒ¡ãƒ¼ãƒ«ï¼š > > So, I tried the same thing and applied this patch. I even verified that it > was setting the transaction mode to "serializable, read only, deferrable" - > yet the problem persists. I believe my execution is similar to the OP. > > pg_dump -d analysisdb --serializable-deferrable > > pg_dump: [archiver (db)] query failed: ERROR: cannot execute nextval() in a > read-only transaction > pg_dump: [archiver (db)] query was: SELECT > pg_catalog.nextval('customtype_customtypeid_seq'); > > Did I miss something else? > > Aaron > > ________________________________ > From: Juned Khan [jkh...@gm...] > Sent: Friday, April 11, 2014 2:05 AM > To: Pavan Deolasee > Cc: Postgres-XC mailing list > Subject: Re: [Postgres-xc-general] Problem with taking database dump with > pgxc-1.2.1 > > Thanks to Pavan and Koichi for your inputs. > > So i think as of now i do not need to worry about > this > > > > > On Fri, Apr 11, 2014 at 12:31 PM, Pavan Deolasee <pav...@gm...> > wrote: > > On Fri, Apr 11, 2014 at 12:07 PM, Juned Khan <jkh...@gm...> wrote: > > As of now this fix worked fine for me. is there any side effect of this > patch ? > > > IMV there won't be any significant side effect of the patch. pg_dump used to > run in RW mode till we fixed it in 9.3. If it has worked for so many years, > it may work for some more time too :-) Marking pg_dump READ ONLY was exactly > to catch bugs like these. But other than that, I think its OK. > > Thanks, > Pavan > > -- > Pavan Deolasee > http://www.linkedin.com/in/pavandeolasee > > > > > -- > Thanks, > Juned Khan > iNextrix Technologies Pvt Ltd. > www.inextrix.com > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems_______________________________________________ > > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
From: Koichi S. <koi...@gm...> - 2014-06-17 05:17:04
|
Hello; This is to announce the renewal of the Postgres-XC roadmap and bug list pages, at https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=Roadmap and https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=Bugs The roadmap and its priorities are now open for discussion, input, and proposals. It would be very helpful to have input on use cases for each feature, more bug reports, and your experience with XC. Best Regards; --- Koichi Suzuki |
From: 鈴木 幸市 <ko...@in...> - 2014-06-17 00:47:34
|
pgxc_ctl provides automated process for this. You have to configure XC cluster with pgxc_ctl though. In this case, although, you need to promote GTM standby and reconnect gtm_proxy with separate pgxc_ctl command. pgxc_ctl source material is under contrib directory. Reference and tutorial are available at http://postgres-xc.sourceforge.net/docs/1_2_1/pgxc-ctl.html and https://sourceforge.net/projects/postgres-xc/files/Pgxc_ctl_primer/ respectively. Regards; --- Koichi Suzuki 2014/06/17 3:52、Bruno Cezar Oliveira <bru...@ho...<mailto:bru...@ho...>> のメール: Hi! I'm using this tutorial to simulate a HA environment with Postgres-xc. http://postgresxc.wikia.com/wiki/GTM_Standby_Configuration I did the gtm down and gtm-standby going up. The problem is that is a manual process, i had to run a command in all gtm-proxy and gtm standby. How i can make this process automatic? Thank you! -- Att, Bruno Cezar de Oliveira. bru...@ho...<mailto:bru...@ho...> ------------------------------------------------------- Uberlândia - MG. ------------------------------------------------------------------------------ HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions Find What Matters Most in Your Big Data with HPCC Systems Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. Leverages Graph Analysis for Fast Processing & Easy Data Exploration http://p.sf.net/sfu/hpccsystems_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Aaron J. <aja...@re...> - 2014-06-16 19:25:33
|
So, I tried the same thing and applied this patch. I even verified that it was setting the transaction mode to "serializable, read only, deferrable" - yet the problem persists. I believe my execution is similar to the OP. pg_dump -d analysisdb --serializable-deferrable pg_dump: [archiver (db)] query failed: ERROR: cannot execute nextval() in a read-only transaction pg_dump: [archiver (db)] query was: SELECT pg_catalog.nextval('customtype_customtypeid_seq'); Did I miss something else? Aaron ________________________________ From: Juned Khan [jkh...@gm...] Sent: Friday, April 11, 2014 2:05 AM To: Pavan Deolasee Cc: Postgres-XC mailing list Subject: Re: [Postgres-xc-general] Problem with taking database dump with pgxc-1.2.1 Thanks to Pavan and Koichi for your inputs. So i think as of now i do not need to worry about this On Fri, Apr 11, 2014 at 12:31 PM, Pavan Deolasee <pav...@gm...<mailto:pav...@gm...>> wrote: On Fri, Apr 11, 2014 at 12:07 PM, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote: As of now this fix worked fine for me. is there any side effect of this patch ? IMV there won't be any significant side effect of the patch. pg_dump used to run in RW mode till we fixed it in 9.3. If it has worked for so many years, it may work for some more time too :-) Marking pg_dump READ ONLY was exactly to catch bugs like these. But other than that, I think its OK. Thanks, Pavan -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com<http://www.inextrix.com/> |
From: Bruno C. O. <bru...@ho...> - 2014-06-16 18:52:24
|
Hi! I'm using this tutorial to simulate an HA environment with Postgres-XC: http://postgresxc.wikia.com/wiki/GTM_Standby_Configuration I took the GTM down and brought the GTM standby up. The problem is that this is a manual process: I had to run a command on every gtm-proxy and on the GTM standby. How can I make this process automatic? Thank you! -- Att, Bruno Cezar de Oliveira. bru...@ho... ------------------------------------------------------- Uberlândia - MG. |
From: 鈴木 幸市 <ko...@in...> - 2014-06-16 00:57:10
|
We’re doing regression test as well as DBT-1 benchmark for each commit. There’s another separate effort of the test but it is not for sharing in pubic. We’d be very pleased if you agree to share your experience. Thank you very much; --- Koichi Suzuki 2014/06/13 20:49、Wilkerson, Daniel <dwi...@fu...> のメール: > Ok cool. Thank you for your input Koichi. I appreciate it. > > I'm working on an XC production prototype and am currently spinning it up > using HAProxy to see how well it plays with XC. > > Do you all have a location in Github where you are storing test results? > Has a specific testing methodology been adopted by the project? I'd be > happy to share what I can about my XC journey if it would be of value to > the project. > > dw > > On 6/12/14 8:53 PM, "鈴木 幸市" <ko...@in...> wrote: > >> A couple of users are seriously testing XC for production. We don’t >> have good balancers yet. There are couple of ideas how to implement >> balancers, for example, random connection by modifying JDBC driver and >> DNS-based balancer considering each servers workload. >> >> Thank you very much; >> --- >> Koichi Suzuki >> >> 2014/06/12 23:45、Wilkerson, Daniel <dwi...@fu...> のメール: >> >>> Hi everyone. What are recommended production-grade load balancers for >>> an XC cluster (e.g. HA Proxy, Zen Load Balancer, etc.)? >>> >>> More importantly, are there load balancers to absolutely avoid for a >>> production XC deployment? >>> >>> Thanks! >>> dw >>> >>> >>> ------------------------------------------------------------------------- >>> ----- >>> HPCC Systems Open Source Big Data Platform from LexisNexis Risk >>> Solutions >>> Find What Matters Most in Your Big Data with HPCC Systems >>> Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. >>> Leverages Graph Analysis for Fast Processing & Easy Data Exploration >>> http://p.sf.net/sfu/hpccsystems >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >> > > |
From: Wilkerson, D. <dwi...@fu...> - 2014-06-13 11:50:00
|
Ok cool. Thank you for your input Koichi. I appreciate it. I'm working on an XC production prototype and am currently spinning it up using HAProxy to see how well it plays with XC. Do you all have a location in Github where you are storing test results? Has a specific testing methodology been adopted by the project? I'd be happy to share what I can about my XC journey if it would be of value to the project. dw On 6/12/14 8:53 PM, "鈴木 幸市" <ko...@in...> wrote: >A couple of users are seriously testing XC for production. We don’t >have good balancers yet. There are couple of ideas how to implement >balancers, for example, random connection by modifying JDBC driver and >DNS-based balancer considering each servers workload. > >Thank you very much; >--- >Koichi Suzuki > >2014/06/12 23:45、Wilkerson, Daniel <dwi...@fu...> のメール: > >> Hi everyone. What are recommended production-grade load balancers for >>an XC cluster (e.g. HA Proxy, Zen Load Balancer, etc.)? >> >> More importantly, are there load balancers to absolutely avoid for a >>production XC deployment? >> >> Thanks! >> dw >> >> >>------------------------------------------------------------------------- >>----- >> HPCC Systems Open Source Big Data Platform from LexisNexis Risk >>Solutions >> Find What Matters Most in Your Big Data with HPCC Systems >> Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. >> Leverages Graph Analysis for Fast Processing & Easy Data Exploration >> http://p.sf.net/sfu/hpccsystems >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > |
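For the HAProxy experiment Daniel mentions, a plain TCP front end in front of the coordinators is the usual shape. The fragment below is only a sketch: the listener port, backend names, and addresses are placeholders, and nothing in it is XC-specific beyond pointing the servers at the coordinators.

    # Hypothetical haproxy.cfg fragment balancing client sessions across coordinators
    cat >> /etc/haproxy/haproxy.cfg <<'EOF'
    listen pgxc_coordinators
        bind *:6432
        mode tcp
        option tcplog
        balance leastconn
        server cn1 10.0.0.1:5432 check
        server cn2 10.0.0.2:5432 check
        server cn3 10.0.0.3:5432 check
    EOF
    service haproxy reload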
From: 鈴木 幸市 <ko...@in...> - 2014-06-13 00:53:25
|
A couple of users are seriously testing XC for production. We don’t have good balancers yet. There are couple of ideas how to implement balancers, for example, random connection by modifying JDBC driver and DNS-based balancer considering each servers workload. Thank you very much; --- Koichi Suzuki 2014/06/12 23:45、Wilkerson, Daniel <dwi...@fu...> のメール: > Hi everyone. What are recommended production-grade load balancers for an XC cluster (e.g. HA Proxy, Zen Load Balancer, etc.)? > > More importantly, are there load balancers to absolutely avoid for a production XC deployment? > > Thanks! > dw > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
From: Wilkerson, D. <dwi...@fu...> - 2014-06-12 14:45:24
|
Hi everyone. What are recommended production-grade load balancers for an XC cluster (e.g. HA Proxy, Zen Load Balancer, etc.)? More importantly, are there load balancers to absolutely avoid for a production XC deployment? Thanks! dw |
From: 鈴木 幸市 <ko...@in...> - 2014-06-12 04:39:12
|
If my memory is correct, you can issue DML only once in plpgsql functions. Regards; --- Koichi Suzuki 2014/06/12 0:15、Jan Keirse <jan...@tv...> のメール: > Hello all, > > I'm currently evaluating if we could use postgres-xc to make some > databases scale beyond one machine. > However I would like to combine sharding across hosts (distribute by > hash as provided by postgres-xc) with standard postgresql trigger > based partitioning. > The release notes for postgres xc claim that DML cannot be used in > plpgsql functions (here: > http://postgres-xc.sourceforge.net/docs/1_2_beta/release-xc-1-2.html ) > I'm trying to understand exactly what this statement means. My > expectation was that an insert statement inside a plpgsql function > would somehow raise an error or fail. > > However, when I have an insert trigger on a table like this: > > CREATE TRIGGER partition_trg > BEFORE INSERT > ON history > FOR EACH ROW > EXECUTE PROCEDURE trg_partition('day'); > > and the trg_partition function looks like this: > > -------------------------------------------------------------------------------------------------------------------------------------- > CREATE OR REPLACE FUNCTION trg_partition() > RETURNS trigger AS > $BODY$ > > DECLARE > prefix text := 'partitions.'; > timeformat text; > selector text; > _interval INTERVAL; > tablename text; > startdate text; > enddate text; > create_table_part text; > create_index_part text; > BEGIN > > selector = TG_ARGV[0]; > > IF selector = 'day' THEN > timeformat := 'YYYY_MM_DD'; > ELSIF selector = 'month' THEN > timeformat := 'YYYY_MM'; > END IF; > > _interval := '1 ' || selector; > tablename := TG_TABLE_NAME || '_p' || TO_CHAR(TO_TIMESTAMP(NEW.clock) > at time zone 'gmt', timeformat); > > EXECUTE 'INSERT INTO ' || prefix || quote_ident(tablename) || ' SELECT > ($1).*' USING NEW; > RETURN NULL; > > EXCEPTION > WHEN undefined_table THEN > > startdate := EXTRACT(epoch FROM date_trunc(selector, > TO_TIMESTAMP(NEW.clock) at time zone 'gmt')); > enddate := EXTRACT(epoch FROM date_trunc(selector, > TO_TIMESTAMP(NEW.clock) at time zone 'gmt' + _interval )); > > create_table_part:= 'CREATE TABLE '|| prefix || quote_ident(tablename) > || ' (CHECK ((clock >= ' || quote_literal(startdate) || ' AND clock < > ' || quote_literal(enddate) || '))) INHERITS ( public.'|| > TG_TABLE_NAME || ') DISTRIBUTE BY HASH(itemid)'; > create_index_part:= 'ALTER TABLE ' || prefix || quote_ident(tablename) > || ' ADD CONSTRAINT ' || quote_ident(tablename) || '_pkey PRIMARY KEY > (itemid,clock)'; > > EXECUTE create_table_part; > EXECUTE create_index_part; > > --insert it again > EXECUTE 'INSERT INTO ' || prefix || quote_ident(tablename) || ' SELECT > ($1).*' USING NEW; > RETURN NULL; > > END; > > $BODY$ > LANGUAGE plpgsql VOLATILE > COST 100; > ALTER FUNCTION trg_partition() > OWNER TO postgres; > > -------------------------------------------------------------------------------------------------------------------------------------- > > And I do an insert into history like this: > > insert into history (itemid, clock, value) values ( 1,1,123); > > > The partition history_p1970_01_01 does get created and the record is > inserted fine and is visible accross all nodes coördinator nodes and > in the expected data node. 
> If I do a second insert into the table, resulting in an insert in the > same partition, for example like this: > > insert into history (itemid, clock, value) values ( 100,1,123); > > The results are a bit surprising: at first it looks like nothing > happened, the new record doesn't show up. I guess that's why it's not > supported. However when I disconnect the session that did the insert, > all of a sudden the data does become visible in the > history_p1970_01_01 partition. It's as if the data doesn't get > committed until the session is disconnected. > > Should I throw the towel in the ring and give up on this, or is there > an easy workaround for this so that DML can in fact be used with some > reservations. > > The only detail I could find on the subject was this post: > http://sourceforge.net/p/postgres-xc/mailman/message/31433999/ > that appears to indicate that it could indeed probably work when > taking some things into account. > > On another note, does this limitation apply to all pl/... languages or > only to pl/pgsql? If the cause of these problems is strictly related > to pl/pgsql and would not apply to say plv8 I could rewrite my > triggers. > > Kind Regards, > > Jan Keirse > > -- > > > **** DISCLAIMER **** > > http://www.tvh.com/newen2/emaildisclaimer/default.html > > "This message is delivered to all addressees subject to the conditions > set forth in the attached disclaimer, which is an integral part of this > message." > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
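Given the one-DML-statement limitation Koichi describes, one way to keep trigger-based partitioning workable is to create the partitions ahead of time from outside the function, so the trigger's EXCEPTION branch (which needs a second INSERT) is never taken. A rough cron-style sketch follows, reusing the table layout from Jan's function; the database name and schedule are placeholders, and the epoch arithmetic assumes GNU date.

    # Pre-create tomorrow's daily partition of public.history (sketch only)
    day=$(date -u -d tomorrow +%Y_%m_%d)
    lo=$(date -u -d "$(date -u -d tomorrow +%F) 00:00:00 UTC" +%s)
    hi=$((lo + 86400))
    psql -d mydb <<SQL
    CREATE TABLE IF NOT EXISTS partitions.history_p${day}
        (CHECK (clock >= ${lo} AND clock < ${hi}))
        INHERITS (public.history) DISTRIBUTE BY HASH(itemid);
    -- the PRIMARY KEY from the trigger's create_index_part can be added here the same way
    SQL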
From: Jan K. <jan...@tv...> - 2014-06-11 16:16:26
|
Hello all, I'm currently evaluating if we could use postgres-xc to make some databases scale beyond one machine. However I would like to combine sharding across hosts (distribute by hash as provided by postgres-xc) with standard postgresql trigger based partitioning. The release notes for postgres xc claim that DML cannot be used in plpgsql functions (here: http://postgres-xc.sourceforge.net/docs/1_2_beta/release-xc-1-2.html ) I'm trying to understand exactly what this statement means. My expectation was that an insert statement inside a plpgsql function would somehow raise an error or fail. However, when I have an insert trigger on a table like this: CREATE TRIGGER partition_trg BEFORE INSERT ON history FOR EACH ROW EXECUTE PROCEDURE trg_partition('day'); and the trg_partition function looks like this: -------------------------------------------------------------------------------------------------------------------------------------- CREATE OR REPLACE FUNCTION trg_partition() RETURNS trigger AS $BODY$ DECLARE prefix text := 'partitions.'; timeformat text; selector text; _interval INTERVAL; tablename text; startdate text; enddate text; create_table_part text; create_index_part text; BEGIN selector = TG_ARGV[0]; IF selector = 'day' THEN timeformat := 'YYYY_MM_DD'; ELSIF selector = 'month' THEN timeformat := 'YYYY_MM'; END IF; _interval := '1 ' || selector; tablename := TG_TABLE_NAME || '_p' || TO_CHAR(TO_TIMESTAMP(NEW.clock) at time zone 'gmt', timeformat); EXECUTE 'INSERT INTO ' || prefix || quote_ident(tablename) || ' SELECT ($1).*' USING NEW; RETURN NULL; EXCEPTION WHEN undefined_table THEN startdate := EXTRACT(epoch FROM date_trunc(selector, TO_TIMESTAMP(NEW.clock) at time zone 'gmt')); enddate := EXTRACT(epoch FROM date_trunc(selector, TO_TIMESTAMP(NEW.clock) at time zone 'gmt' + _interval )); create_table_part:= 'CREATE TABLE '|| prefix || quote_ident(tablename) || ' (CHECK ((clock >= ' || quote_literal(startdate) || ' AND clock < ' || quote_literal(enddate) || '))) INHERITS ( public.'|| TG_TABLE_NAME || ') DISTRIBUTE BY HASH(itemid)'; create_index_part:= 'ALTER TABLE ' || prefix || quote_ident(tablename) || ' ADD CONSTRAINT ' || quote_ident(tablename) || '_pkey PRIMARY KEY (itemid,clock)'; EXECUTE create_table_part; EXECUTE create_index_part; --insert it again EXECUTE 'INSERT INTO ' || prefix || quote_ident(tablename) || ' SELECT ($1).*' USING NEW; RETURN NULL; END; $BODY$ LANGUAGE plpgsql VOLATILE COST 100; ALTER FUNCTION trg_partition() OWNER TO postgres; -------------------------------------------------------------------------------------------------------------------------------------- And I do an insert into history like this: insert into history (itemid, clock, value) values ( 1,1,123); The partition history_p1970_01_01 does get created and the record is inserted fine and is visible accross all nodes coördinator nodes and in the expected data node. If I do a second insert into the table, resulting in an insert in the same partition, for example like this: insert into history (itemid, clock, value) values ( 100,1,123); The results are a bit surprising: at first it looks like nothing happened, the new record doesn't show up. I guess that's why it's not supported. However when I disconnect the session that did the insert, all of a sudden the data does become visible in the history_p1970_01_01 partition. It's as if the data doesn't get committed until the session is disconnected. 
Should I throw the towel in the ring and give up on this, or is there an easy workaround for this so that DML can in fact be used with some reservations. The only detail I could find on the subject was this post: http://sourceforge.net/p/postgres-xc/mailman/message/31433999/ that appears to indicate that it could indeed probably work when taking some things into account. On another note, does this limitation apply to all pl/... languages or only to pl/pgsql? If the cause of these problems is strictly related to pl/pgsql and would not apply to say plv8 I could rewrite my triggers. Kind Regards, Jan Keirse -- **** DISCLAIMER **** http://www.tvh.com/newen2/emaildisclaimer/default.html "This message is delivered to all addressees subject to the conditions set forth in the attached disclaimer, which is an integral part of this message." |
From: 鈴木 幸市 <ko...@in...> - 2014-06-11 00:55:50
|
Good to hear that. Please feel free to post your issue with pgxc_ctl as well. Pgxc_ctl tutorial will be found at http://sourceforge.net/projects/postgres-xc/files/Pgxc_ctl_primer/ Hope it is much more helpful than reference document. Best; --- Koichi Suzuki 2014/06/11 2:37、Wilkerson, Daniel <dwi...@fu...> のメール: > Thank you for the replies Masataka and Koichi. I appreciate them very much. > > You were both right. It was definitely some config relates > issues/mismatches. I suppose my mind was pretty tired after configuring > each config file on everyone one of the servers and trying to troubleshoot > all day yesterday. Cross connecting with psql and using netstat definitely > helped suss out the problematic config items across the nodes and servers. > > To fix my issue I made some corrections to pg_hba.conf for my > coordinators. My data nodes were configured correctly, but the > coordinators were not. I also noticed on CN1 and CN3, I missed changing > the listen address from localhost to * (0.0.0.0). That issue was apparent > with netstat -an > > I'm going to have a look at the pgxc_ctl tool to simplify cluster starts > and stops as well as the config of future clusters for production use. > This is only a prototype environment. > > Thanks again for the help my friends. I appreciate it. > > dw > > On 6/10/14 6:06 AM, "Masataka Saito" <pg...@gm...> wrote: > >> Hello Wilkerson, >> >> I'm afraid that you're using Postgres-XC 1.2 which has such ugly bug >> fixed on 1.2.1. >> Or Postgres-XC nodes must trust other nodes: you have to add >> pg_hba.conf entry for subnet xx.xx.40.0/24 and xx.xx.33.0/24 with >> trust authentication method, so you can't specify other authentication >> method which requires extra dialogue with client, e.g. password or md5 >> are not allowed. >> >> If both suggestions don't hit, please test that you can connect to >> other nodes from cn1 by psql command. I think it's a good way to find >> the component in which the cause resides. >> >> Regards. >> >> On 10 June 2014 07:13, Wilkerson, Daniel <dwi...@fu...> wrote: >>> Hi. I am trying to stand up a multi node cluster across multiple >>> servers. The current configuration I have is one coordinator and two >>> datanodes per server on 3 servers. Each has a GTM proxy configured. I >>> have two more servers configured to be the GTM master and slave/standby. >>> >>> When executing create database in my cluster after all the config work >>> (including the create node calls) I am getting the following error. >>> ERROR: Failed to get pooled connections >>> CONTEXT: SQL statement "EXECUTE DIRECT ON (cn2) 'SELECT >>> pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" >>> >>> I noticed that the coordinator on each node is added as host = local >>> host after using the initdb command to create the coordinator nodes. >>> When executing the create node calls to add the other coordinators and >>> datanodes to each other I am using the IP addresses of the servers >>> instead; I have also tried the DNS names also with no luck. 
>>> >>> The output of pgxc_node from one of the cluster coordinator/datanodes >>> looks like: >>> >>> oid | node_name | node_type | node_port | node_host | >>> nodeis_primary | nodeis_preferred | node_id >>> >>> -------+-----------+-----------+-----------+--------------+-------------- >>> --+------------------+------------- >>> 11198 | cn1 | C | 5432 | localhost | f >>> | f | -1178713634 >>> 16393 | cn2 | C | 5432 | xx.xx.40.11 | f >>> | f | -1923125220 >>> 16394 | cn3 | C | 5432 | xx.xx.40.123 | f >>> | f | 1101067607 >>> 16395 | dn1 | D | 15432 | xx.xx.33.198 | t >>> | t | -560021589 >>> 16396 | dn2 | D | 15433 | xx.xx.33.198 | f >>> | f | 352366662 >>> 16397 | dn3 | D | 15432 | xx.xx.40.11 | f >>> | f | -700122826 >>> 16398 | dn4 | D | 15433 | xx.xx.40.11 | f >>> | f | 823103418 >>> 16399 | dn5 | D | 15432 | xx.xx.40.123 | f >>> | f | -1268658584 >>> 16400 | dn6 | D | 15433 | xx.xx.40.123 | f >>> | f | -1765597067 >>> >>> The entry with localhost was added after using the initdb command. I >>> used it this way: initdb D CN1 - - node name CN1 >>> >>> The last troubleshooting step I tried as issuing and alter node to >>> change the host of CN1 to it's IP instead of local host, but I get the >>> ERROR: Failed to get pooled connections error there too. I have tried >>> to reload the pool several times with select pgxc_pool_reload(); That >>> didn't work either. I have added the entries in the pg_hba.conf for the >>> two subnets and copied the hba file to all the servers and their nodes. >>> >>> >>> Any help would be greatly appreciated. I'm certain it's something silly >>> I did wrong in the cluster config, but am having trouble finding it. >>> It's something very, very tiny I imagine. If there is any other config >>> information needed let me know. Hope I've capture the most important >>> bits. >>> >>> Thanks! >>> dew >>> >>> >>> ------------------------------------------------------------------------- >>> ----- >>> HPCC Systems Open Source Big Data Platform from LexisNexis Risk >>> Solutions >>> Find What Matters Most in Your Big Data with HPCC Systems >>> Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. >>> Leverages Graph Analysis for Fast Processing & Easy Data Exploration >>> http://p.sf.net/sfu/hpccsystems >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. > Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
From: Wilkerson, D. <dwi...@fu...> - 2014-06-10 17:38:08
|
Thank you for the replies Masataka and Koichi. I appreciate them very much. You were both right. It was definitely some config relates issues/mismatches. I suppose my mind was pretty tired after configuring each config file on everyone one of the servers and trying to troubleshoot all day yesterday. Cross connecting with psql and using netstat definitely helped suss out the problematic config items across the nodes and servers. To fix my issue I made some corrections to pg_hba.conf for my coordinators. My data nodes were configured correctly, but the coordinators were not. I also noticed on CN1 and CN3, I missed changing the listen address from localhost to * (0.0.0.0). That issue was apparent with netstat -an I'm going to have a look at the pgxc_ctl tool to simplify cluster starts and stops as well as the config of future clusters for production use. This is only a prototype environment. Thanks again for the help my friends. I appreciate it. dw On 6/10/14 6:06 AM, "Masataka Saito" <pg...@gm...> wrote: >Hello Wilkerson, > >I'm afraid that you're using Postgres-XC 1.2 which has such ugly bug >fixed on 1.2.1. >Or Postgres-XC nodes must trust other nodes: you have to add >pg_hba.conf entry for subnet xx.xx.40.0/24 and xx.xx.33.0/24 with >trust authentication method, so you can't specify other authentication >method which requires extra dialogue with client, e.g. password or md5 >are not allowed. > >If both suggestions don't hit, please test that you can connect to >other nodes from cn1 by psql command. I think it's a good way to find >the component in which the cause resides. > >Regards. > >On 10 June 2014 07:13, Wilkerson, Daniel <dwi...@fu...> wrote: >> Hi. I am trying to stand up a multi node cluster across multiple >>servers. The current configuration I have is one coordinator and two >>datanodes per server on 3 servers. Each has a GTM proxy configured. I >>have two more servers configured to be the GTM master and slave/standby. >> >> When executing create database in my cluster after all the config work >>(including the create node calls) I am getting the following error. >>ERROR: Failed to get pooled connections >> CONTEXT: SQL statement "EXECUTE DIRECT ON (cn2) 'SELECT >>pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" >> >> I noticed that the coordinator on each node is added as host = local >>host after using the initdb command to create the coordinator nodes. >>When executing the create node calls to add the other coordinators and >>datanodes to each other I am using the IP addresses of the servers >>instead; I have also tried the DNS names also with no luck. 
>> >> The output of pgxc_node from one of the cluster coordinator/datanodes >>looks like: >> >> oid | node_name | node_type | node_port | node_host | >>nodeis_primary | nodeis_preferred | node_id >> >>-------+-----------+-----------+-----------+--------------+-------------- >>--+------------------+------------- >> 11198 | cn1 | C | 5432 | localhost | f >> | f | -1178713634 >> 16393 | cn2 | C | 5432 | xx.xx.40.11 | f >> | f | -1923125220 >> 16394 | cn3 | C | 5432 | xx.xx.40.123 | f >> | f | 1101067607 >> 16395 | dn1 | D | 15432 | xx.xx.33.198 | t >> | t | -560021589 >> 16396 | dn2 | D | 15433 | xx.xx.33.198 | f >> | f | 352366662 >> 16397 | dn3 | D | 15432 | xx.xx.40.11 | f >> | f | -700122826 >> 16398 | dn4 | D | 15433 | xx.xx.40.11 | f >> | f | 823103418 >> 16399 | dn5 | D | 15432 | xx.xx.40.123 | f >> | f | -1268658584 >> 16400 | dn6 | D | 15433 | xx.xx.40.123 | f >> | f | -1765597067 >> >> The entry with localhost was added after using the initdb command. I >>used it this way: initdb D CN1 - - node name CN1 >> >> The last troubleshooting step I tried as issuing and alter node to >>change the host of CN1 to it's IP instead of local host, but I get the >>ERROR: Failed to get pooled connections error there too. I have tried >>to reload the pool several times with select pgxc_pool_reload(); That >>didn't work either. I have added the entries in the pg_hba.conf for the >>two subnets and copied the hba file to all the servers and their nodes. >> >> >> Any help would be greatly appreciated. I'm certain it's something silly >>I did wrong in the cluster config, but am having trouble finding it. >>It's something very, very tiny I imagine. If there is any other config >>information needed let me know. Hope I've capture the most important >>bits. >> >> Thanks! >> dew >> >> >>------------------------------------------------------------------------- >>----- >> HPCC Systems Open Source Big Data Platform from LexisNexis Risk >>Solutions >> Find What Matters Most in Your Big Data with HPCC Systems >> Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. >> Leverages Graph Analysis for Fast Processing & Easy Data Exploration >> http://p.sf.net/sfu/hpccsystems >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
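Concretely, the two corrections Daniel describes boil down to an extra pg_hba.conf entry per node for the cluster subnets (using the trust method, as Masataka notes below) and a listen_addresses change, followed by a restart. A sketch, with $PGDATA standing in for each coordinator's or datanode's data directory:

    cat >> $PGDATA/pg_hba.conf <<'EOF'
    # let the other cluster nodes connect without an interactive auth exchange
    host    all    all    xx.xx.33.0/24    trust
    host    all    all    xx.xx.40.0/24    trust
    EOF
    echo "listen_addresses = '*'" >> $PGDATA/postgresql.conf
    # listen_addresses needs a full restart; a reload is not enough
    pg_ctl -D "$PGDATA" restart -m fast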
From: Masataka S. <pg...@gm...> - 2014-06-10 10:06:24
|
Hello Wilkerson, I'm afraid that you're using Postgres-XC 1.2 which has such ugly bug fixed on 1.2.1. Or Postgres-XC nodes must trust other nodes: you have to add pg_hba.conf entry for subnet xx.xx.40.0/24 and xx.xx.33.0/24 with trust authentication method, so you can't specify other authentication method which requires extra dialogue with client, e.g. password or md5 are not allowed. If both suggestions don't hit, please test that you can connect to other nodes from cn1 by psql command. I think it's a good way to find the component in which the cause resides. Regards. On 10 June 2014 07:13, Wilkerson, Daniel <dwi...@fu...> wrote: > Hi. I am trying to stand up a multi node cluster across multiple servers. The current configuration I have is one coordinator and two datanodes per server on 3 servers. Each has a GTM proxy configured. I have two more servers configured to be the GTM master and slave/standby. > > When executing create database in my cluster after all the config work (including the create node calls) I am getting the following error. ERROR: Failed to get pooled connections > CONTEXT: SQL statement "EXECUTE DIRECT ON (cn2) 'SELECT pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" > > I noticed that the coordinator on each node is added as host = local host after using the initdb command to create the coordinator nodes. When executing the create node calls to add the other coordinators and datanodes to each other I am using the IP addresses of the servers instead; I have also tried the DNS names also with no luck. > > The output of pgxc_node from one of the cluster coordinator/datanodes looks like: > > oid | node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id > -------+-----------+-----------+-----------+--------------+----------------+------------------+------------- > 11198 | cn1 | C | 5432 | localhost | f | f | -1178713634 > 16393 | cn2 | C | 5432 | xx.xx.40.11 | f | f | -1923125220 > 16394 | cn3 | C | 5432 | xx.xx.40.123 | f | f | 1101067607 > 16395 | dn1 | D | 15432 | xx.xx.33.198 | t | t | -560021589 > 16396 | dn2 | D | 15433 | xx.xx.33.198 | f | f | 352366662 > 16397 | dn3 | D | 15432 | xx.xx.40.11 | f | f | -700122826 > 16398 | dn4 | D | 15433 | xx.xx.40.11 | f | f | 823103418 > 16399 | dn5 | D | 15432 | xx.xx.40.123 | f | f | -1268658584 > 16400 | dn6 | D | 15433 | xx.xx.40.123 | f | f | -1765597067 > > The entry with localhost was added after using the initdb command. I used it this way: initdb –D CN1 - - node name CN1 > > The last troubleshooting step I tried as issuing and alter node to change the host of CN1 to it's IP instead of local host, but I get the ERROR: Failed to get pooled connections error there too. I have tried to reload the pool several times with select pgxc_pool_reload(); That didn't work either. I have added the entries in the pg_hba.conf for the two subnets and copied the hba file to all the servers and their nodes. > > > Any help would be greatly appreciated. I'm certain it's something silly I did wrong in the cluster config, but am having trouble finding it. It's something very, very tiny I imagine. If there is any other config information needed let me know. Hope I've capture the most important bits. > > Thanks! > dew > > ------------------------------------------------------------------------------ > HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions > Find What Matters Most in Your Big Data with HPCC Systems > Open Source. Fast. Scalable. Simple. Ideal for Dirty Data. 
> Leverages Graph Analysis for Fast Processing & Easy Data Exploration > http://p.sf.net/sfu/hpccsystems > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: 鈴木 幸市 <ko...@in...> - 2014-06-10 01:02:17
|
We had a similar report in the past. In that case, as you noticed, the pg_hba.conf setting had a problem, and correcting it resolved the issue. If you set log_min_messages to DEBUG1 or higher at all the nodes, you may get better hints. BTW, you can correct pgxc_node settings with ALTER NODE; pgxc_ctl does this. Also, please make sure that you ran {CREATE|ALTER} NODE at all the nodes. Unlike other DDL, {CREATE | ALTER | DROP} NODE does not propagate to other nodes.

Regards;
---
Koichi Suzuki

2014/06/10 7:13, Wilkerson, Daniel <dwi...@fu...> wrote:

> Hi. I am trying to stand up a multi-node cluster across multiple servers. The current configuration I have is one coordinator and two datanodes per server on 3 servers. Each has a GTM proxy configured. I have two more servers configured to be the GTM master and slave/standby.
>
> When executing create database in my cluster after all the config work (including the create node calls) I am getting the following error.
>
> ERROR: Failed to get pooled connections
> CONTEXT: SQL statement "EXECUTE DIRECT ON (cn2) 'SELECT pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'"
>
> I noticed that the coordinator on each node is added as host = localhost after using the initdb command to create the coordinator nodes. When executing the create node calls to add the other coordinators and datanodes to each other I am using the IP addresses of the servers instead; I have also tried the DNS names, with no luck.
>
> The output of pgxc_node from one of the cluster coordinator/datanodes looks like:
>
>    oid | node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> -------+-----------+-----------+-----------+--------------+----------------+------------------+-------------
>  11198 | cn1       | C         |      5432 | localhost    | f              | f                | -1178713634
>  16393 | cn2       | C         |      5432 | xx.xx.40.11  | f              | f                | -1923125220
>  16394 | cn3       | C         |      5432 | xx.xx.40.123 | f              | f                | 1101067607
>  16395 | dn1       | D         |     15432 | xx.xx.33.198 | t              | t                | -560021589
>  16396 | dn2       | D         |     15433 | xx.xx.33.198 | f              | f                | 352366662
>  16397 | dn3       | D         |     15432 | xx.xx.40.11  | f              | f                | -700122826
>  16398 | dn4       | D         |     15433 | xx.xx.40.11  | f              | f                | 823103418
>  16399 | dn5       | D         |     15432 | xx.xx.40.123 | f              | f                | -1268658584
>  16400 | dn6       | D         |     15433 | xx.xx.40.123 | f              | f                | -1765597067
>
> The entry with localhost was added after using the initdb command. I used it this way: initdb -D CN1 --nodename CN1
>
> The last troubleshooting step I tried was issuing an ALTER NODE to change the host of CN1 to its IP instead of localhost, but I get the "ERROR: Failed to get pooled connections" error there too. I have tried to reload the pool several times with SELECT pgxc_pool_reload(); that didn't work either. I have added the entries in the pg_hba.conf for the two subnets and copied the hba file to all the servers and their nodes.
>
> Any help would be greatly appreciated. I'm certain it's something silly I did wrong in the cluster config, but am having trouble finding it. It's something very, very tiny I imagine. If there is any other config information needed let me know. Hope I've captured the most important bits.
>
> Thanks!
> dew
|
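To make the last point above concrete: because {CREATE | ALTER | DROP} NODE stays local, the host correction and the pool reload have to be issued on every coordinator separately. The sketch below loops over the three coordinators with libpq and runs the two statements on each. The addresses and node names are taken from the pgxc_node listing in the quoted message; the credentials, database name, and the choice of C over simply typing the same statements in psql on each coordinator are illustrative assumptions, not something from the thread. Whether this alone clears the "Failed to get pooled connections" error still depends on the pg_hba.conf side Koichi mentions.

/*
 * fix_nodes.c: hypothetical helper that points each coordinator's own
 * pgxc_node entry at its routable address and reloads the pooler.
 * Build (example): cc fix_nodes.c -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* One entry per coordinator; conninfo strings and node names are examples. */
    struct { const char *conninfo; const char *alter; } fixes[] = {
        { "host=xx.xx.33.198 port=5432 dbname=postgres",
          "ALTER NODE cn1 WITH (HOST = 'xx.xx.33.198')" },
        { "host=xx.xx.40.11 port=5432 dbname=postgres",
          "ALTER NODE cn2 WITH (HOST = 'xx.xx.40.11')" },
        { "host=xx.xx.40.123 port=5432 dbname=postgres",
          "ALTER NODE cn3 WITH (HOST = 'xx.xx.40.123')" },
    };

    for (int i = 0; i < 3; i++)
    {
        PGconn *conn = PQconnectdb(fixes[i].conninfo);
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "%s: %s", fixes[i].conninfo, PQerrorMessage(conn));
            PQfinish(conn);
            continue;
        }

        /* NODE DDL does not propagate, so run it on this coordinator ... */
        PGresult *res = PQexec(conn, fixes[i].alter);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "ALTER NODE failed: %s", PQerrorMessage(conn));
        PQclear(res);

        /* ... and make its pooler re-read pgxc_node. */
        res = PQexec(conn, "SELECT pgxc_pool_reload()");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "pgxc_pool_reload failed: %s", PQerrorMessage(conn));
        PQclear(res);

        PQfinish(conn);
    }
    return 0;
}

If the cluster was configured with pgxc_ctl, it takes care of this per-coordinator bookkeeping for you, as Koichi notes.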
From: Wilkerson, D. <dwi...@fu...> - 2014-06-09 22:25:58
|
Hi. I am trying to stand up a multi-node cluster across multiple servers. The current configuration I have is one coordinator and two datanodes per server on 3 servers. Each has a GTM proxy configured. I have two more servers configured to be the GTM master and slave/standby.

When executing create database in my cluster after all the config work (including the create node calls) I am getting the following error.

ERROR: Failed to get pooled connections
CONTEXT: SQL statement "EXECUTE DIRECT ON (cn2) 'SELECT pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'"

I noticed that the coordinator on each node is added as host = localhost after using the initdb command to create the coordinator nodes. When executing the create node calls to add the other coordinators and datanodes to each other I am using the IP addresses of the servers instead; I have also tried the DNS names, with no luck.

The output of pgxc_node from one of the cluster coordinator/datanodes looks like:

   oid | node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
-------+-----------+-----------+-----------+--------------+----------------+------------------+-------------
 11198 | cn1       | C         |      5432 | localhost    | f              | f                | -1178713634
 16393 | cn2       | C         |      5432 | xx.xx.40.11  | f              | f                | -1923125220
 16394 | cn3       | C         |      5432 | xx.xx.40.123 | f              | f                | 1101067607
 16395 | dn1       | D         |     15432 | xx.xx.33.198 | t              | t                | -560021589
 16396 | dn2       | D         |     15433 | xx.xx.33.198 | f              | f                | 352366662
 16397 | dn3       | D         |     15432 | xx.xx.40.11  | f              | f                | -700122826
 16398 | dn4       | D         |     15433 | xx.xx.40.11  | f              | f                | 823103418
 16399 | dn5       | D         |     15432 | xx.xx.40.123 | f              | f                | -1268658584
 16400 | dn6       | D         |     15433 | xx.xx.40.123 | f              | f                | -1765597067

The entry with localhost was added after using the initdb command. I used it this way: initdb -D CN1 --nodename CN1

The last troubleshooting step I tried was issuing an ALTER NODE to change the host of CN1 to its IP instead of localhost, but I get the "ERROR: Failed to get pooled connections" error there too. I have tried to reload the pool several times with SELECT pgxc_pool_reload(); that didn't work either. I have added the entries in the pg_hba.conf for the two subnets and copied the hba file to all the servers and their nodes.

Any help would be greatly appreciated. I'm certain it's something silly I did wrong in the cluster config, but am having trouble finding it. It's something very, very tiny I imagine. If there is any other config information needed let me know. Hope I've captured the most important bits.

Thanks!
dew
|
From: 鈴木 幸市 <ko...@in...> - 2014-06-05 05:20:09
|
More robust node redundancy and read-only slaves will be one of the next issues. Hope to have much more input on this. I'm going to open the next roadmap discussion.
---
Koichi Suzuki

2014/06/03 3:33, Mason Sharp <ms...@tr...> wrote:

On Sun, May 25, 2014 at 2:04 PM, Koichi Suzuki <koi...@gm...> wrote:

I see. You have a good use case for read-only transactions. Because of the nature of log shipping and sharding/clustering, it is not simple to provide read-only transactions in XC.

There may be cases where it is being used as a data warehouse (thinking a bit more about Postgres-XL here), where there is a nightly load process and then reads are done during the day. If the standbys are dedicated, being able to load balance them could be useful.

Two essential reasons:

1. Delay in WAL playback in each slave may be different. It makes providing a consistent database view extremely difficult.

2. At present, the slave calculates the snapshot of the transaction from the WAL. The current code does not allow missing XIDs. There will be a memory leak and a crash by OOM if there are many missing XIDs in the WAL stream. In XC, it is disabled and the database view may be inconsistent. Please note that this does not affect recovery and promotion.

Agreed, there may be some level of effort for this.

--
Mason Sharp
TransLattice - http://www.translattice.com
Distributed and Clustered Database Solutions
|
From: 鈴木 幸市 <ko...@in...> - 2014-06-05 05:18:23
|
Hi, I reviewed the patch. +1 to go to master and 1.x.

Regards;
---
Koichi Suzuki

2014/06/03 17:31, Masataka Saito <pg...@gm...> wrote:
> Hi, Aaron
>
> I think you've done the right analysis.
>
> The order of evaluation of sub-expressions and the order in which side effects take place are frequently defined as unspecified behavior by the C Standard. As you saw, the order in which the arguments to a function are evaluated is one such case.
>
> The patch looks good, and I think we must back-patch it to all the versions after 1.0.
>
> Regards.
>
> On 3 June 2014 16:20, Aaron Jackson <aja...@re...> wrote:
>> I've been able to work my way backwards through the problem and have discovered the underlying cause. When a data coordinator is paired with a GTM proxy, it forwards its message to the GTM proxy, which adds some data to the payload and forwards it to the GTM. Here is what I saw when looking at the wire.
>>
>> The message captured between the data coordinator and the GTM proxy was as follows:
>>
>> 430000003d000000270000001064656d6f2e7075626c69632e666f6f0001000000000000000100000000000000ffffffffffffff7f010000000000000000
>>
>> The message captured between the GTM proxy and the GTM was as follows:
>>
>> 430000000a00000000002746000000080000003b
>>
>> Definitely a horrible truncation of the payload. The problem is in GTMProxy_ProxyCommand, specifically the two calls to pq_getmsgunreadlen(). The assumption is that these are called before anything else. Unfortunately, the Intel compiler calls pq_getmsgbytes() first and subsequently calls the second instance of pq_getmsgunreadlen(). The second time it is called, the value returned is zero and we end up with all kinds of byte truncation. I've attached a patch to fix the issue.
>>
>> --- postgres-xc-1.2.1-orig/src/gtm/proxy/proxy_main.c 2014-04-03 05:18:38.000000000 +0000
>> +++ postgres-xc-1.2.1/src/gtm/proxy/proxy_main.c 2014-06-03 07:14:58.451411000 +0000
>> @@ -2390,6 +2390,7 @@
>> GTMProxy_CommandInfo *cmdinfo;
>> GTMProxy_ThreadInfo *thrinfo = GetMyThreadInfo;
>> GTM_ProxyMsgHeader proxyhdr;
>> + size_t msgunreadlen = pq_getmsgunreadlen(message);
>>
>> proxyhdr.ph_conid = conninfo->con_id;
>>
>> @@ -2397,8 +2398,8 @@
>> if (gtmpqPutMsgStart('C', true, gtm_conn) ||
>> gtmpqPutnchar((char *)&proxyhdr, sizeof (GTM_ProxyMsgHeader), gtm_conn) ||
>> gtmpqPutInt(mtype, sizeof (GTM_MessageType), gtm_conn) ||
>> - gtmpqPutnchar(pq_getmsgbytes(message, pq_getmsgunreadlen(message)),
>> - pq_getmsgunreadlen(message), gtm_conn))
>> + gtmpqPutnchar(pq_getmsgbytes(message, msgunreadlen),
>> + msgunreadlen, gtm_conn))
>> elog(ERROR, "Error proxing data");
>>
>> /*
>>
>> Aaron
>> ________________________________
>> From: Aaron Jackson [aja...@re...]
>> Sent: Monday, June 02, 2014 4:11 PM
>> To: pos...@li...
>> Subject: [Postgres-xc-general] Unable to create sequences
>>
>> I tried to create a database as follows ...
>>
>> CREATE TABLE Schema.TableFoo(
>> SomeId serial NOT NULL,
>> ForeignId int NOT NULL,
>> ...
>> ) WITH (OIDS = FALSE);
>>
>> The server returned the following...
>>
>> ERROR: GTM error, could not create sequence
>>
>> Looked at the server logs for the gtm_proxy, nothing, so I went to the gtm.
>>
>> LOCATION: pq_copymsgbytes, pqformat.c:554
>> 1:140488486782720:2014-06-02 21:10:58.870 UTC -WARNING: No transaction handle for gxid: 0
>> LOCATION: GTM_GXIDToHandle, gtm_txn.c:163
>> 1:140488486782720:2014-06-02 21:10:58.870 UTC -WARNING: Invalid transaction handle: -1
>> LOCATION: GTM_HandleToTransactionInfo, gtm_txn.c:213
>> 1:140488486782720:2014-06-02 21:10:58.870 UTC -ERROR: Failed to get a snapshot
>> LOCATION: ProcessGetSnapshotCommandMulti, gtm_snap.c:420
>> 1:140488478390016:2014-06-02 21:10:58.871 UTC -ERROR: insufficient data left in message
>> LOCATION: pq_copymsgbytes, pqformat.c:554
>> 1:140488486782720:2014-06-02 21:10:58.871 UTC -ERROR: insufficient data left in message
>> LOCATION: pq_copymsgbytes, pqformat.c:554
>>
>> I'm definitely confused here. This cluster has been running fine for several days now. And now the GTM is failing. I performed a restart of the gtm and proxies (each using gtm_ctl to stop and restart the instance). Nothing has changed, the GTM continues to fail and will not create the sequence.
>>
>> Any ideas?
>>
>> Aaron
|
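For anyone less at home in this corner of C: the build only misbehaves with some compilers because the C standard leaves the order of evaluation of function arguments unspecified, so a compiler may legally evaluate the second pq_getmsgunreadlen() after pq_getmsgbytes() has already advanced the read cursor. The toy program below uses invented names, not the GTM sources; it reproduces the shape of the call and shows why hoisting the length into a local variable, as the patch does, removes the ambiguity.

#include <stdio.h>

/* Toy stand-ins for the GTM message cursor and its accessors. */
typedef struct { const char *data; size_t len; size_t cursor; } Msg;

static size_t msg_unread_len(const Msg *m) { return m->len - m->cursor; }

/* Returns a pointer to the next n bytes and advances the cursor (a side effect). */
static const char *msg_get_bytes(Msg *m, size_t n)
{
    const char *p = m->data + m->cursor;
    m->cursor += n;
    return p;
}

static void forward(const char *bytes, size_t n)
{
    (void) bytes;
    printf("forwarding %zu bytes\n", n);
}

int main(void)
{
    Msg m = { "0123456789", 10, 0 };

    /*
     * Hazardous shape (mirrors the original gtmpqPutnchar() call): whether
     * the second msg_unread_len(&m) sees 10 or 0 depends on which argument
     * the compiler happens to evaluate first; the standard does not say.
     *
     *   forward(msg_get_bytes(&m, msg_unread_len(&m)), msg_unread_len(&m));
     */

    /* The patched shape: read the length once, then use it twice. */
    size_t unread = msg_unread_len(&m);
    forward(msg_get_bytes(&m, unread), unread);
    return 0;
}

Either evaluation order is conforming, which is why the same code can forward the full payload under one compiler and truncate it to nothing under another, exactly as Aaron saw with the Intel build.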
From: Bruno C. O. <bru...@ho...> - 2014-06-04 18:57:12
|
Ok! Thank you!

2014-06-04 0:53 GMT-03:00 Michael Paquier <mic...@gm...>:
> On Wed, Jun 4, 2014 at 3:50 AM, Bruno Cezar Oliveira <bru...@ho...> wrote:
> > How can I pool the connections on the coordinators using Java? PGpool?
> With Java? Postgres JDBC, as it has support for multiple sources:
> jdbc:postgresql://host1:port1,host2:port2/test
> Support is basic and connections are tried in a round robin fashion until one connection is successful.
> --
> Michael

--
Att,
Bruno Cezar de Oliveira.
bru...@ho...
-------------------------------------------------------
Uberlândia - MG.
|
From: Michael P. <mic...@gm...> - 2014-06-04 03:53:28
|
On Wed, Jun 4, 2014 at 3:50 AM, Bruno Cezar Oliveira <bru...@ho...> wrote:
> How can I pool the connections on the coordinators using Java? PGpool?

With Java? Postgres JDBC, as it has support for multiple sources:

jdbc:postgresql://host1:port1,host2:port2/test

Support is basic and connections are tried in a round robin fashion until one connection is successful.
--
Michael
|
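For clients that are not written in Java the driver cannot help, but the strategy Michael describes (walk a list of coordinators and keep the first one that answers) is easy to hand-roll. A rough libpq sketch with hypothetical host names follows; it only illustrates the idea and is not what the JDBC driver actually executes internally.

#include <stdio.h>
#include <libpq-fe.h>

/* Try each coordinator in turn and return the first healthy connection. */
static PGconn *connect_any(const char *const *conninfos, int n)
{
    for (int i = 0; i < n; i++)
    {
        PGconn *conn = PQconnectdb(conninfos[i]);
        if (PQstatus(conn) == CONNECTION_OK)
            return conn;
        fprintf(stderr, "skipping %s: %s", conninfos[i], PQerrorMessage(conn));
        PQfinish(conn);
    }
    return NULL;
}

int main(void)
{
    /* Hypothetical coordinator addresses, mirroring host1/host2 in the JDBC URL. */
    const char *const coordinators[] = {
        "host=host1 port=5432 dbname=test",
        "host=host2 port=5432 dbname=test",
    };

    PGconn *conn = connect_any(coordinators, 2);
    if (conn == NULL)
    {
        fprintf(stderr, "no coordinator reachable\n");
        return 1;
    }

    printf("connected to coordinator at %s\n", PQhost(conn));
    PQfinish(conn);
    return 0;
}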