From: Sandeep G. <gup...@gm...> - 2013-09-30 23:10:06
|
Hi, I have been working with pgxc for a couple of months on an old machine. Today I installed pgxc (v1.1) on a new machine. All the ports (gtm_port, gtm_proxy port, pooler port) are set to values different from the defaults. The OS is Fedora on a Core i7. The problem I am facing is that gtm_proxy exits silently, and nothing gets written to the log file either. However, if I start the proxy under the gdb debugger things seem to work fine (I didn't double-check this, but I think that is what is happening). The pgxc launch scripts work fine on the other machine. I am not sure whether I have messed up some other system parameters, e.g. shmem size. Please let me know. Any ideas/suggestions are welcome. -Sandeep |
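A minimal debugging sketch for this kind of silent gtm_proxy exit, assuming a data directory of /home/postgres/pgxc/nodes/gtm_proxy and the gtm_host/gtm_port parameter names from the 1.1 gtm_proxy.conf sample; the path and values are illustrative, not taken from the report. Starting the proxy through gtm_ctl and capturing its stdout/stderr usually surfaces the error that never reaches the proxy log:

    # 1. Confirm the proxy points at the right GTM host and ports
    grep -E 'nodename|port|gtm_host|gtm_port' /home/postgres/pgxc/nodes/gtm_proxy/gtm_proxy.conf

    # 2. Start it and keep whatever it prints before dying
    gtm_ctl start -Z gtm_proxy -D /home/postgres/pgxc/nodes/gtm_proxy > /tmp/gtm_proxy.out 2>&1
    cat /tmp/gtm_proxy.out

    # 3. Check the system limits the report suspects
    sysctl kernel.shmmax kernel.shmall
    ulimit -n

If the proxy only survives under gdb, a port conflict or an unreachable GTM address in gtm_proxy.conf is worth checking before kernel limits, since gdb does not change those limits.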
From: Nikhil S. <ni...@st...> - 2013-09-26 16:39:31
|
Hi Hector, Very interesting to see that you are trying different clustering solutions. Would like to see your impressions and summary after you are done with all your configurations. Especially your thoughts on ease of use, architecture and performance of pgxc vis-a-vis the other products. Getting back to pgxc_ctl, the link that you have mentioned is good enough to get going. The most important step is to come up with a proper configuration in the pgxc_ctl.conf file. Once you have that in place, doing an "init all" should get the cluster going. You can do a "monitor all" to see the status of all the components; "stop all", "start all", etc. also do things as expected. HTH, Nikhils StormDB On Thu, Sep 26, 2013 at 8:59 PM, Hector M. Jacas <hec...@et...> wrote: > Dear Nikhils, > > Thank you very much for your quick response. > > Problem solved. > > Where could I find guides or tutorials on pgxc_ctl? > > The https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=Pgxc_ctl_tutorial tutorial page is under construction. > > http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html is just a manual. It explains the meanings of the parameters but says nothing > about procedures, the order in which tasks should be performed, etc. > > My project is to create clusters with different database backends used by > our applications in the enterprise. > > Could you tell me any site or page that contains guides, tutorials or > procedures to follow? > > So far, I already have a solution for MySQL with Percona cluster (3 > nodes). This solution is already in production. > > Oracle RAC (2 nodes) is almost ready. Next (now) is postgresql, and mongoDB > at last. > > During the months of July and August I installed the 1.1beta version (3 nodes > coordinator/gtm_proxy/datanode and one gtm). I really liked the solution > (of all those reviewed, this is the most complete solution) and its performance. > For installation and deployment I followed the documents on the StormDB site. > Everything was OK. > > Now, the official 1.1 version is out and it is my intention to explore the > deployment and management of a postgres-xc cluster with similar > characteristics through the pgxc_ctl tool. > > Again, thank you very much for your reply > > Hector M. Jacas > > --- > This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE > running at host imx3.etecsa.cu > Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> > > -- StormDB - http://www.stormdb.com The Database Cloud |
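A sketch of the workflow described above, using the pgxc_ctl shipped in contrib of 1.1; the configuration variable names follow the sample pgxc_ctl.conf as far as I recall them, and the host and node names are placeholders rather than values from this thread:

    # 1. Describe the cluster in $HOME/pgxc_ctl/pgxc_ctl.conf, e.g. (fragment):
    #      pgxcOwner=postgres
    #      gtmMasterServer=gtm
    #      gtmMasterPort=20001
    #      coordNames=(coord1 coord2 coord3)
    #      datanodeNames=(dnode1 dnode2 dnode3)
    # 2. Push binaries to every host, then initialize and start everything
    pgxc_ctl deploy all
    pgxc_ctl init all
    # 3. Day-to-day operation
    pgxc_ctl monitor all
    pgxc_ctl stop all
    pgxc_ctl start all

pgxc_ctl accepts the same commands from its interactive prompt, which is how the session quoted later in this thread drives it.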
From: Hector M. J. <hec...@et...> - 2013-09-26 15:30:13
|
--- This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE running at host imx3.etecsa.cu Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> |
From: Nikhil S. <ni...@st...> - 2013-09-25 22:46:41
|
Hi Hector, Try adding the path to your pgxc binaries in your .bashrc on all the nodes that are involved in the cluster. HTH, Nikhils On Thu, Sep 26, 2013 at 1:49 AM, Hector M. Jacas <hec...@et...>wrote: > Dear SEGNOR , > > In recent months I install and successfully try 1.1 beta version of > postgres-xc . > > Now the official version 1.1 is out. > > One of the components included in the contrib folder, pgxc_ctl, is where > my focus is right now and I'm confronting some problems. > > The scenario I have is as follows: > > Component | Component Name | Component Server > gtm (no slave) | gtm | gtm > - > gtp proxy (3) | gtmprx1 | dn01 > | gtmprx2 | dn02 > | gtmprx3 | dn03 > - > coordinators (3) | coord1 | dn01 > | coord2 | dn02 > | coord3 | dn03 > - > DataNodes (3) | dnode1 | dn01 > | dnode2 | dn02 > | dnode3 | dn03 > - > > I can perform the deployment ( deploy gtm) and when I try to initialize > the GTM components, for example : > > [ postgres @ rhelclient ~ ] $ pgxc_ctl > Installing pgxc_ctl_bash script as / home / postgres / pgxc_ctl / > pgxc_ctl_bash . > Installing pgxc_ctl_bash script as / home / postgres / pgxc_ctl / > pgxc_ctl_bash . > Reading configuration using / home / postgres / pgxc_ctl / pgxc_ctl_bash - > home / home / postgres / pgxc_ctl - configuration / home / postgres / > pgxc_ctl / pgxc_ctl.conf > Finished to read configuration . > PGXC_CTL ******** START *************** > > Current directory : / home / postgres / pgxc_ctl > PGXC deploy gtm > Postgres -XC Deploying materials. > Prepare tarball to deploy ... > Deploying to the server gtm . > Deployment done. > Init PGXC gtm > Initialize master GTM > GTM : no process killed > bash: initgtm : command not found > bash: gtm : command not found > bash: gtm_ctl : command not found > Done. > > Can not understand this message because users (postgres in this case) they > have PATH correctly pointing to /usr/local/pgsql/bin and which gtm_ctl > command (for example) properly returns the path of the command > /usr/local/pgsql/bin/gtm_ctl. > > In ~/pgxc_ctl/pgxc_log/ directory the last log file contents: > > pgxc_ctl(30849):1309251435_25 PGXC deploy gtm > pgxc_ctl(30849):1309251435_25 Deploying Postgres-XC materials. > pgxc_ctl(30849):1309251435_25 Deploying Postgres-XC materials. > pgxc_ctl(30849):1309251435_25 Prepare tarball to deploy ... > pgxc_ctl(30849):1309251435_25 Actual command: ( tar czCf > /home/postgres/pgsql /tmp/30849.tgz bin include lib share ) < /dev/null > > /tmp/STDOUT_30849_0 2>&1 > pgxc_ctl(30849):1309251435_27 Deploying to the server gtm. 
> pgxc_ctl(30849):1309251435_27 *** cmdList Dump > ********************************* > allocated = 2, used = 1 > pgxc_ctl(30849):1309251435_27 === CMD: 0 === > pgxc_ctl(30849):1309251435_27 === CMD: 0 === > pgxc_ctl(30849):1309251435_27 --- CMD-EL: 0:host="gtm", command="rm -rf > /home/postgres/pgsql/bin /home/postgres/pgsql/include > /home/postgres/pgsql/lib /home/postgres/pgsql/share; mkdir -p > /home/postgres/pgsql", localStdin="NULL", localStdout="NULL" > pgxc_ctl(30849):1309251435_27 --- CMD-EL: 1:host="NULL", command="scp > /tmp/30849.tgz postgres@gtm:/tmp", localStdin="NULL", localStdout="NULL" > pgxc_ctl(30849):1309251435_27 --- CMD-EL: 2:host="gtm", command="tar > xzCf /home/postgres/pgsql /tmp/30849.tgz; rm /tmp/30849.tgz", > localStdin="NULL", localStdout="NULL" > pgxc_ctl(30866):1309251435_27 Remote command: "rm -rf > /home/postgres/pgsql/bin /home/postgres/pgsql/include > /home/postgres/pgsql/lib /home/postgres/pgsql/share; mkdir -p > /home/postgres/pgsql", actual: "ssh postgres@gtm "( rm -rf > /home/postgres/pgsql/bin /home/postgres/pgsql/include > /home/postgres/pgsql/lib /home/postgres/pgsql/share; mkdir -p > /home/postgres/pgsql ) > /tmp/rhelclient_STDOUT_30849_2 2>&1" < /dev/null > > /dev/null 2>&1" > pgxc_ctl(30866):1309251435_28 Local command: "scp /tmp/30849.tgz > postgres@gtm:/tmp", actual: "( scp /tmp/30849.tgz postgres@gtm:/tmp ) > > /tmp/STDOUT_30849_3 2>&1 < /dev/null" > pgxc_ctl(30866):1309251435_29 Remote command: "tar xzCf > /home/postgres/pgsql /tmp/30849.tgz; rm /tmp/30849.tgz", actual: "ssh > postgres@gtm "( tar xzCf /home/postgres/pgsql /tmp/30849.tgz; rm > /tmp/30849.tgz ) > /tmp/rhelclient_STDOUT_30849_5 2>&1" < /dev/null > > /dev/null 2>&1" > pgxc_ctl(30849):1309251435_30 Actual command: ( rm -f /tmp/30849.tgz ) < > /dev/null > /tmp/STDOUT_30849_6 2>&1 > pgxc_ctl(30849):1309251435_30 Deployment done. 
> pgxc_ctl(30849):1309251511_35 PGXC init gtm > pgxc_ctl(30849):1309251511_35 Initialize GTM master > pgxc_ctl(30849):1309251511_35 *** cmdList Dump > ********************************* > allocated = 2, used = 1 > pgxc_ctl(30849):1309251511_35 === CMD: 0 === > pgxc_ctl(30849):1309251511_35 === CMD: 0 === > pgxc_ctl(30849):1309251511_35 --- CMD-EL: 0:host="gtm", > command="killall -u postgres -9 gtm; rm -rf /home/postgres/pgxc/nodes/gtm; > mkdir -p /home/postgres/pgxc/nodes/gtm;initgtm -Z gtm -D > /home/postgres/pgxc/nodes/gtm", localStdin="NULL", localStdout="NULL" > pgxc_ctl(30849):1309251511_35 --- CMD-EL: 1:host="gtm", command="cat >> > /home/postgres/pgxc/nodes/gtm/gtm.conf", localStdin="/tmp/STDIN_30849_21", > localStdout="NULL" > pgxc_ctl(30849):1309251511_35 --- CMD-EL: 1:host="gtm", command="cat >> > /home/postgres/pgxc/nodes/gtm/gtm.conf", localStdin="/tmp/STDIN_30849_21", > localStdout="NULL" > pgxc_ctl(30849):1309251511_35 #============================= > ================== > pgxc_ctl(30849):1309251511_35 # Added at initialization, 20130925_15:11:35 > pgxc_ctl(30849):1309251511_35 listen_addresses = '*' > pgxc_ctl(30849):1309251511_35 port = 20001 > pgxc_ctl(30849):1309251511_35 port = 20001 > pgxc_ctl(30849):1309251511_35 nodename = 'gtm' > pgxc_ctl(30849):1309251511_35 startup = ACT > pgxc_ctl(30849):1309251511_35 # End of addition > pgxc_ctl(30849):1309251511_35 ---------- > pgxc_ctl(30849):1309251511_35 --- CMD-EL: 2:host="gtm", command="(gtm > -x 2000 -D /home/postgres/pgxc/nodes/gtm &); sleep 1; gtm_ctl stop -Z gtm > -D /home/postgres/pgxc/nodes/gtm", localStdin="NULL", localStdout="NULL" > pgxc_ctl(31553):1309251511_35 Remote command: "killall -u postgres -9 gtm; > rm -rf /home/postgres/pgxc/nodes/gtm; mkdir -p > /home/postgres/pgxc/nodes/gtm;initgtm -Z gtm -D > /home/postgres/pgxc/nodes/gtm", actual: "ssh postgres@gtm "( killall -u > postgres -9 gtm; rm -rf /home/postgres/pgxc/nodes/gtm; mkdir -p > /home/postgres/pgxc/nodes/gtm;initgtm -Z gtm -D > /home/postgres/pgxc/nodes/gtm ) > /tmp/rhelclient_STDOUT_30849_23 2>&1" > < /dev/null > /dev/null 2>&1" > pgxc_ctl(31553):1309251511_36 Remote command: "cat >> > /home/postgres/pgxc/nodes/gtm/gtm.conf", actual: "ssh postgres@gtm "( > cat >> /home/postgres/pgxc/nodes/gtm/gtm.conf ) > > /tmp/rhelclient_STDOUT_30849_25 2>&1" < /tmp/STDIN_30849_21 > /dev/null > 2>&1" > pgxc_ctl(31553):1309251511_37 Remote command: "(gtm -x 2000 -D > /home/postgres/pgxc/nodes/gtm &); sleep 1; gtm_ctl stop -Z gtm -D > /home/postgres/pgxc/nodes/gtm", actual: "ssh postgres@gtm "( (gtm -x > 2000 -D /home/postgres/pgxc/nodes/gtm &); sleep 1; gtm_ctl stop -Z gtm -D > /home/postgres/pgxc/nodes/gtm ) > /tmp/rhelclient_STDOUT_30849_27 2>&1" > < /dev/null > /dev/null 2>&1" > pgxc_ctl(30849):1309251511_38 gtm: no process killed > pgxc_ctl(30849):1309251511_38 bash: initgtm: command not found > pgxc_ctl(30849):1309251511_38 bash: initgtm: command not found > pgxc_ctl(30849):1309251511_38 bash: gtm: command not found > pgxc_ctl(30849):1309251511_38 bash: gtm_ctl: command not found > pgxc_ctl(30849):1309251511_38 Done. > > > What could be happening ? > > Could you help me?
> > Thanks a lot, > > Hector M Jacas > > --- > This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE > running at host imx3.etecsa.cu > Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60133471&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- StormDB - http://www.stormdb.com The Database Cloud |
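The "command not found" lines above are consistent with pgxc_ctl running its remote steps through non-interactive ssh: such shells typically read ~/.bashrc but not ~/.bash_profile, and many stock ~/.bashrc files return early when not interactive, so a PATH that works at a login prompt can be invisible to pgxc_ctl. A quick way to check and fix, with /usr/local/pgsql/bin taken from the thread and everything else illustrative:

    # Reproduce what pgxc_ctl does: a non-interactive remote shell
    ssh postgres@gtm 'which initgtm gtm gtm_ctl'   # empty output reproduces the failure

    # On each node, put the export near the top of ~/.bashrc,
    # before any early "return" for non-interactive shells:
    #   export PATH=/usr/local/pgsql/bin:$PATH

    # Verify again; the three binaries should now resolve
    ssh postgres@gtm 'which initgtm gtm gtm_ctl'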
From: Hector M. J. <hec...@et...> - 2013-09-25 20:49:26
|
--- This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE running at host imx3.etecsa.cu Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> |
From: Koichi S. <koi...@gm...> - 2013-09-23 01:11:31
|
To be honest, DML in plpgsql is not fully reviewed and we suspect there could be something needs improvement/fix. In the regression test, the test script issues "update" statements from plpgsql and it works fine. Regards; --- Koichi Suzuki 2013/9/22 Lucio Chiessi [VORio] <pos...@vo...> > My best regards to Postgres-XC developers. > > I am writing this e-mail in order to resolve a doubt about the use of > DML in functions using pl/pgsql. > > In section "E.5.7. Restrictions" of the "1.1 Release Notes", I can see > that DML are not allowed in pl/pgsql. > > But looking at the manual Postgres-XC, in the part that deals with the > use of pl/pgsql, I see instructions that can be used as the EXECUTE and > PERFORM commands. > > My question is: Even with this I will not be able to make DML functions > using pl/pgsql? > If not, there is another way to be into de database, or planning to > include the use of DML in pl/pgsql in future versions? > > Thanks! > > Lucio Chiessi > Rio de Janeiro - Brasil > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack > includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/22/13. > http://pubads.g.doubleclick.net/gampad/clk?id=64545871&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
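Since the regression suite exercises exactly this case, here is a minimal sketch of a static UPDATE issued from plpgsql, the kind of statement referred to above; the table, function and distribution choice are invented for illustration, and given the E.5.7 restriction note quoted in this exchange it is worth re-testing against your own schema on 1.1:

    psql -d test <<'SQL'
    CREATE TABLE accounts (id int PRIMARY KEY, balance numeric) DISTRIBUTE BY HASH (id);
    INSERT INTO accounts VALUES (1, 100), (2, 200);

    CREATE OR REPLACE FUNCTION debit(p_id int, p_amount numeric) RETURNS void AS $$
    BEGIN
        -- a plain, static DML statement inside plpgsql
        UPDATE accounts SET balance = balance - p_amount WHERE id = p_id;
    END;
    $$ LANGUAGE plpgsql;

    SELECT debit(1, 25);
    SELECT * FROM accounts ORDER BY id;
    SQL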
From: Koichi S. <koi...@gm...> - 2013-09-23 01:06:17
|
GTM-master does not require sync for every command to GTM-standby. It requires sync only at the end of grouped commands from GTM-Proxy. There should not be significant influence to the performance. Regards; --- Koichi Suzuki 2013/9/22 Nikhil Sontakke <ni...@st...> > Hi Prasad, > > 1) I am trying to evaluate the High Availability aspects of PGXC; and >> notice that GTM, and GTM-standby are configured to be in continuous >> sync. That means, every status change in GTM is synchronously made at >> GTM-standby. In such setup, what is the performance drop becoz of >> gtm-standby. >> >> Are there any benchmark tests run with and without GTM-standby?? what >> are the numbers?? >> >> > We did not see any significant differences in the with and without > GTM-Standby numbers when we did the runs some while ago. Don't have more > specifics right now though. > > > >> 2) How is GTM failure discovered? Vanilla PGXC, doesn't integrate with >> clusters like Corosync, right?? >> >> > You can come up with your resource agents for Corosync/Pacemaker. That's > what we did at StormDB. We have agents for GTM and datanode failover. > > >> 3) During GTM-failover, I see bunch of manual steps are needed to >> promote the GTM-standby to master; and make the GTM-proxies reconnect >> to the new GTM. What happens to the in-flight and new transactions >> while this GTM-failover happening?? >> I guess all active transaction will have to hang during this period, >> isn't?? >> >> > Again if you integrate properly with Corosync/Pacemaker or have your own > HA infrastructure in place, then you won't need any manual steps. > > Transactions would fail or error out for a brief period when this is > happening. If the application has logic to retry the transactions then it > might help. > > Regards, > Nikhils > > >> thanks, >> -Prasad >> >> >> ------------------------------------------------------------------------------ >> LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! >> 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, >> SharePoint >> 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack >> includes >> Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. >> >> http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack > includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/22/13. > http://pubads.g.doubleclick.net/gampad/clk?id=64545871&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
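For readers wondering what the master/standby pairing above looks like in configuration terms, a rough sketch; the parameter names (startup, active_host, active_port) are as I recall them from the 1.1 gtm.conf documentation, so verify against gtm.conf.sample in your build, and all paths, names and ports are placeholders:

    # gtm.conf on the active GTM host
    #   nodename = 'gtm'
    #   port = 20001
    #   startup = ACT
    # gtm.conf on the standby host, pointing back at the active GTM
    #   nodename = 'gtm_standby'
    #   port = 20001
    #   startup = STANDBY
    #   active_host = 'gtm'
    #   active_port = 20001

    gtm_ctl start -Z gtm -D /home/postgres/pgxc/nodes/gtm                  # active host
    gtm_ctl start -Z gtm_standby -D /home/postgres/pgxc/nodes/gtm_standby  # standby host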
From: Nikhil S. <ni...@st...> - 2013-09-22 02:01:14
|
Hi Prasad, 1) I am trying to evaluate the High Availability aspects of PGXC; and > notice that GTM, and GTM-standby are configured to be in continuous > sync. That means, every status change in GTM is synchronously made at > GTM-standby. In such setup, what is the performance drop becoz of > gtm-standby. > > Are there any benchmark tests run with and without GTM-standby?? what > are the numbers?? > > We did not see any significant differences in the with and without GTM-Standby numbers when we did the runs some while ago. Don't have more specifics right now though. > 2) How is GTM failure discovered? Vanilla PGXC, doesn't integrate with > clusters like Corosync, right?? > > You can come up with your resource agents for Corosync/Pacemaker. That's what we did at StormDB. We have agents for GTM and datanode failover. > 3) During GTM-failover, I see bunch of manual steps are needed to > promote the GTM-standby to master; and make the GTM-proxies reconnect > to the new GTM. What happens to the in-flight and new transactions > while this GTM-failover happening?? > I guess all active transaction will have to hang during this period, > isn't?? > > Again if you integrate properly with Corosync/Pacemaker or have your own HA infrastructure in place, then you won't need any manual steps. Transactions would fail or error out for a brief period when this is happening. If the application has logic to retry the transactions then it might help. Regards, Nikhils > thanks, > -Prasad > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack > includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. > http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- StormDB - http://www.stormdb.com The Database Cloud |
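A rough outline of the "manual steps" referred to in this exchange, assuming the promote and reconnect modes that gtm_ctl documents for 1.1 (verify the exact options with gtm_ctl --help, since none of this comes from the thread itself); in practice these are the steps a Corosync/Pacemaker resource agent would automate:

    # 1. Promote the standby GTM on its host
    gtm_ctl promote -Z gtm -D /home/postgres/pgxc/nodes/gtm_standby

    # 2. Tell each running gtm_proxy to reconnect to the promoted GTM
    gtm_ctl reconnect -Z gtm_proxy -D /home/postgres/pgxc/nodes/gtm_proxy \
        -o "-s new_gtm_host -t 20001"

    # 3. Update gtm_host/gtm_port in each gtm_proxy.conf so the change
    #    survives the next proxy restart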
From: Lucio C. [VORio] <pos...@vo...> - 2013-09-21 19:49:56
|
My best regards to the Postgres-XC developers. I am writing this e-mail in order to resolve a doubt about the use of DML in functions written in pl/pgsql. In section "E.5.7. Restrictions" of the "1.1 Release Notes", I can see that DML is not allowed in pl/pgsql. But looking at the Postgres-XC manual, in the part that deals with the use of pl/pgsql, I see instructions that can be used, such as the EXECUTE and PERFORM commands. My question is: even with these, will I not be able to write DML functions using pl/pgsql? If not, is there another way to do it inside the database, or are there plans to include the use of DML in pl/pgsql in future versions? Thanks! Lucio Chiessi Rio de Janeiro - Brasil |
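To make the question concrete, the EXECUTE and PERFORM forms mentioned here look roughly like the sketch below (table and function names are invented); whether each form is accepted on 1.1 is exactly what the restriction note and the reply above address, so treat this as something to test rather than a guarantee:

    psql -d test <<'SQL'
    CREATE TABLE visits (username text, at timestamptz) DISTRIBUTE BY HASH (username);

    CREATE OR REPLACE FUNCTION log_visit(p_user text) RETURNS void AS $$
    BEGIN
        -- dynamic DML through EXECUTE
        EXECUTE 'INSERT INTO visits (username, at) VALUES ($1, now())' USING p_user;
        -- PERFORM evaluates a query and discards the result
        PERFORM count(*) FROM visits WHERE username = p_user;
    END;
    $$ LANGUAGE plpgsql;

    SELECT log_visit('lucio');
    SQL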
From: Prasad V. <va...@gm...> - 2013-09-19 05:49:18
|
Hi, 1) I am trying to evaluate the High Availability aspects of PGXC, and notice that GTM and GTM-standby are configured to be in continuous sync. That means every status change in GTM is synchronously made at the GTM-standby. In such a setup, what is the performance drop because of the gtm-standby? Are there any benchmark tests run with and without GTM-standby? What are the numbers? 2) How is GTM failure discovered? Vanilla PGXC doesn't integrate with cluster managers like Corosync, right? 3) During GTM failover, I see that a bunch of manual steps are needed to promote the GTM-standby to master and make the GTM-proxies reconnect to the new GTM. What happens to in-flight and new transactions while this GTM failover is happening? I guess all active transactions will have to hang during this period, won't they? thanks, -Prasad |
From: Ashutosh B. <ash...@en...> - 2013-09-17 12:17:19
|
We have previously experimented with 10 coordinators and 10 datanodes. That configuration gave 6 times scalability wrt single PG server. There are reports in these archives of users trying with 20 servers. So, may be 16 coord + 16 datanodes should not be a problem. But adding coordinators increases performance if there are enough clients to keep coordinators busy. They do not store any user data. On Tue, Sep 17, 2013 at 5:39 PM, Bartłomiej Wójcik < bar...@tu...> wrote: > Hello, > > But can I have more then 16 coordinators and 16 datanodes the cluster > scaling ? > > Regards! > > > W dniu 2013-09-17 12:55, Abbas Butt pisze: > > Hi, > > Nodes are stored in a table in shared memory, which cannot be resized in > dynamic fashion. Hence limits are required at startup time to determine > shared memory size. You can have a cluster with any nodes less than the max > values defined in the configuration file and then later on add more nodes > to the cluster until that limit is reached. To add more nodes the > configuration needs to be changed and the cluster needs to be restarted. > Thanks. > > > > On Tue, Sep 17, 2013 at 3:35 PM, Bartłomiej Wójcik < > bar...@tu...> wrote: > >> Hello, >> >> I have a question about default max Coordinators which we can see >> in the configuration file - postgres.conf and in some presentations >> about pgxc >> >> What happens when you exceed the limit and after crossing? >> >> (similar restriction is for nodes) - why ? >> >> Regards! >> bw >> >> >> >> ------------------------------------------------------------------------------ >> LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! >> 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, >> SharePoint >> 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack >> includes >> Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. >> >> http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> > > > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack > includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. > http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
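One way to provide the "enough clients to keep coordinators busy" mentioned above is simply to spread the load across coordinators; a small sketch assuming three coordinators on hosts dn01-dn03, each listening on port 5432 (hosts, port and database name are illustrative):

    # Run the same read-only pgbench workload against each coordinator at once
    for host in dn01 dn02 dn03; do
        pgbench -h "$host" -p 5432 -U postgres -S -c 8 -j 2 -T 300 test \
            > "pgbench_$host.log" 2>&1 &
    done
    wait
    grep -H tps pgbench_*.log   # cluster throughput is the sum over coordinators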
From: Abbas B. <abb...@en...> - 2013-09-17 12:15:36
|
Yes, check the section of the postgresql.conf file with heading DATA NODES AND CONNECTION POOLING and change the parameters max_coordinators and max_datanodes to the values you like. On Tue, Sep 17, 2013 at 5:09 PM, Bartłomiej Wójcik < bar...@tu...> wrote: > Hello, > > But can I have more then 16 coordinators and 16 datanodes the cluster > scaling ? > > Regards! > > > W dniu 2013-09-17 12:55, Abbas Butt pisze: > > Hi, > > Nodes are stored in a table in shared memory, which cannot be resized in > dynamic fashion. Hence limits are required at startup time to determine > shared memory size. You can have a cluster with any nodes less than the max > values defined in the configuration file and then later on add more nodes > to the cluster until that limit is reached. To add more nodes the > configuration needs to be changed and the cluster needs to be restarted. > Thanks. > > > > On Tue, Sep 17, 2013 at 3:35 PM, Bartłomiej Wójcik < > bar...@tu...> wrote: > >> Hello, >> >> I have a question about default max Coordinators which we can see >> in the configuration file - postgres.conf and in some presentations >> about pgxc >> >> What happens when you exceed the limit and after crossing? >> >> (similar restriction is for nodes) - why ? >> >> Regards! >> bw >> >> >> >> ------------------------------------------------------------------------------ >> LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! >> 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, >> SharePoint >> 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack >> includes >> Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. >> >> http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> > > > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
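A sketch of that change as applied to one node; the data directory and the value 32 are placeholders, and the same edit has to be made on every coordinator and datanode before restarting, since the limits size shared memory at startup:

    cat >> /home/postgres/pgxc/nodes/coord1/postgresql.conf <<'EOF'
    # raise the node-table limits (section: DATA NODES AND CONNECTION POOLING)
    max_coordinators = 32
    max_datanodes = 32
    EOF
    pg_ctl restart -Z coordinator -D /home/postgres/pgxc/nodes/coord1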
From: Bartłomiej W. <bar...@tu...> - 2013-09-17 12:09:25
|
Hello, But can I have more than 16 coordinators and 16 datanodes for cluster scaling? Regards! On 2013-09-17 12:55, Abbas Butt wrote: > Hi, > > Nodes are stored in a table in shared memory, which cannot be resized > in a dynamic fashion. Hence limits are required at startup time to > determine the shared memory size. You can have a cluster with any number of nodes > less than the max values defined in the configuration file and then > later on add more nodes to the cluster until that limit is reached. To > add more nodes the configuration needs to be changed and the cluster > needs to be restarted. > Thanks. > > > > On Tue, Sep 17, 2013 at 3:35 PM, Bartłomiej Wójcik > <bar...@tu... > <mailto:bar...@tu...>> wrote: > > Hello, > > I have a question about the default max Coordinators which we can see > in the configuration file - postgresql.conf - and in some presentations > about pgxc > > What happens when you exceed the limit and after crossing it? > > (a similar restriction exists for datanodes) - why? > > Regards! > bw > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power > Pack includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. > http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > <mailto:Pos...@li...> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.com <http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers > <http://www.enterprisedb.com/resources-community> and more > <http://www.enterprisedb.com/resources-community> |
From: Abbas B. <abb...@en...> - 2013-09-17 10:56:08
|
Hi, Nodes are stored in a table in shared memory, which cannot be resized in dynamic fashion. Hence limits are required at startup time to determine shared memory size. You can have a cluster with any nodes less than the max values defined in the configuration file and then later on add more nodes to the cluster until that limit is reached. To add more nodes the configuration needs to be changed and the cluster needs to be restarted. Thanks. On Tue, Sep 17, 2013 at 3:35 PM, Bartłomiej Wójcik < bar...@tu...> wrote: > Hello, > > I have a question about default max Coordinators which we can see > in the configuration file - postgres.conf and in some presentations > about pgxc > > What happens when you exceed the limit and after crossing? > > (similar restriction is for nodes) - why ? > > Regards! > bw > > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack > includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. > http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
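To illustrate the "add more nodes later" part: once a new datanode has been initialized and started, it is registered on each coordinator roughly as below; the node name, host and port are invented, and CREATE NODE plus pgxc_pool_reload() are the SQL-level interface in 1.1:

    psql -h coord-host -p 5432 -d postgres <<'SQL'
    -- register the new datanode with this coordinator
    CREATE NODE dnode4 WITH (TYPE = 'datanode', HOST = 'dn04', PORT = 15432);
    -- make the connection pooler pick up the new node definition
    SELECT pgxc_pool_reload();
    SQL
    # repeat on every coordinator; existing tables only use the new node after
    # an explicit redistribution (e.g. ALTER TABLE ... ADD NODE (dnode4))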
From: Bartłomiej W. <bar...@tu...> - 2013-09-17 10:35:48
|
Hello, I have a question about the default maximum number of Coordinators, which we can see in the configuration file - postgresql.conf - and in some presentations about pgxc. What happens when you reach the limit, and after you cross it? (A similar restriction exists for datanodes.) Why? Regards! bw |
From: Ashutosh B. <ash...@en...> - 2013-09-17 04:08:43
|
On Mon, Sep 16, 2013 at 7:28 PM, Bartłomiej Wójcik < bar...@tu...> wrote: > How many clients/threads/processes did you have in the test? Just one? > It seems like it based on that not many cores were being used. > > Just one client: psql -d test < insert.sql (100000 inserts) sequentially > with psql -d test < select.sql (100000 selects) > > Was the test being run on the same server as the database? > > Yes, it was. > > Try to insert the rows in parallel using multiple sessions (and if there are more than one coordinators, use multiple coordinators.) > Also, all of the "nodes" are sharing the same storage device, not > dedicated for each? > > All of the "nodes" are local. Storage device the same but it is very large > throughput ~Gbit/s > > For a better test, you should use multiple clients and ideally multiple > servers, and preferably dedicated storage. > > I wanted to test it on localhost and check WRITE performance when adding > datanodes local ( *see the read performance but why not write ?**??* ) > > > Regards! > > > > W dniu 2013-09-16 14:43, Mason Sharp pisze: > > > > > On Mon, Sep 16, 2013 at 6:34 AM, Bartłomiej Wójcik < > bar...@tu...> wrote: > >> Hello, >> >> Today I am testing the following simple cluster: Host1(vm1, IP1, >> 6cores 4GB ram, disk storage(very large throughput ~Gbit/s)): 1 >> gtm + 1 coordinator + (1,2,3) datanode >> >> Loading the database 100000 inserts and I received the time: 3m1s >> and 2m57s (2 of 6 cores work mainly) - 100% of the data at >> each datanode (checked) >> Then I do simple selects on that data and I received the time: >> 28m29s and 28m25s (2 of 6 cores work mainly) >> >> I refresh the database (drop and create database) >> Then I add second datanode to the cluster, do the same test(inserts, >> selects) and I received: >> >> 2m59s and 2m57s (inserts) (2 of 6 cores work mainly) - 50% of >> the data at each datanode (checked) >> 15m2s and 13m41s (selects) (2 of 6 cores work mainly) >> >> I refresh the database (drop and create database again) >> Then I add third datanode and I received: >> >> 3m1s and 2m58 (inserts) (2 of 6 cores work mainly) - 33% of >> the data at each datanode (checked) >> 9m35s and 10m5s (selects) (2 of 6 cores work mainly) >> >> >> Is this results ok ? Why did not increase write performance(inserts) >> after adding a datanode? Why did not benefit more from other cores ? >> >> >> The results are reproducible (as you can see) >> > > How many clients/threads/processes did you have in the test? Just one? > It seems like it based on that not many cores were being used. > > Was the test being run on the same server as the database? > > Also, all of the "nodes" are sharing the same storage device, not > dedicated for each? > > For a better test, you should use multiple clients and ideally multiple > servers, and preferably dedicated storage. > > > >> >> >> Regards! >> >> BW >> >> >> >> ------------------------------------------------------------------------------ >> LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! >> 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, >> SharePoint >> 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack >> includes >> Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. >> >> http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... 
>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > > Regards, > > -- > Mason Sharp > > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Services > > > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack > includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. > http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
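A small sketch of the parallel-session idea, using the insert.sql file from the test being discussed (the archive is newest-first, so the original test description appears a few messages further down); the chunk count of 4 is arbitrary, and line-based splitting assumes one complete INSERT per line:

    # Split the single-session script into 4 chunks and load them concurrently
    split -n l/4 insert.sql chunk_        # GNU split: 4 pieces without breaking lines
    for f in chunk_*; do
        psql -d test -f "$f" &
    done
    wait

With a single client, each INSERT pays a full round trip through the coordinator and GTM, which is why adding datanodes did not change the load time; parallel sessions (and, where available, multiple coordinators) are what turn the extra nodes into extra write throughput.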
From: Bartłomiej W. <bar...@tu...> - 2013-09-16 13:58:22
|
How many clients/threads/processes did you have in the test? Just one? It seems like it based on that not many cores were being used. Just one client: psql -d test < insert.sql (100000 inserts) sequentially with psql -d test < select.sql (100000 selects) Was the test being run on the same server as the database? Yes, it was. Also, all of the "nodes" are sharing the same storage device, not dedicated for each? All of the "nodes" are local. Storage device the same but it is very large throughput ~Gbit/s For a better test, you should use multiple clients and ideally multiple servers, and preferably dedicated storage. I wanted to test it on localhost and check WRITE performance when adding datanodes local ( *see the read performance but why not write ?**??* ) Regards! W dniu 2013-09-16 14:43, Mason Sharp pisze: > > > > On Mon, Sep 16, 2013 at 6:34 AM, Bartłomiej Wójcik > <bar...@tu... > <mailto:bar...@tu...>> wrote: > > Hello, > > Today I am testing the following simple cluster: Host1(vm1, IP1, > 6cores 4GB ram, disk storage(very large throughput ~Gbit/s)): 1 > gtm + 1 coordinator + (1,2,3) datanode > > Loading the database 100000 inserts and I received the time: > 3m1s > and 2m57s (2 of 6 cores work mainly) - 100% of the data at > each datanode (checked) > Then I do simple selects on that data and I received the time: > 28m29s and 28m25s (2 of 6 cores work mainly) > > I refresh the database (drop and create database) > Then I add second datanode to the cluster, do the same test(inserts, > selects) and I received: > > 2m59s and 2m57s (inserts) (2 of 6 cores work mainly) - 50% of > the data at each datanode (checked) > 15m2s and 13m41s (selects) (2 of 6 cores work mainly) > > I refresh the database (drop and create database again) > Then I add third datanode and I received: > > 3m1s and 2m58 (inserts) (2 of 6 cores work mainly) - > 33% of > the data at each datanode (checked) > 9m35s and 10m5s (selects) (2 of 6 cores work mainly) > > > Is this results ok ? Why did not increase write performance(inserts) > after adding a datanode? Why did not benefit more from other cores ? > > > The results are reproducible (as you can see) > > > How many clients/threads/processes did you have in the test? Just one? > It seems like it based on that not many cores were being used. > > Was the test being run on the same server as the database? > > Also, all of the "nodes" are sharing the same storage device, not > dedicated for each? > > For a better test, you should use multiple clients and ideally > multiple servers, and preferably dedicated storage. > > > > Regards! > > BW > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power > Pack includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. > http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > <mailto:Pos...@li...> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > Regards, > > -- > Mason Sharp > > StormDB - http://www.stormdb.com > The Database Cloud > Postgres-XC Support and Services |
From: Mason S. <ma...@st...> - 2013-09-16 12:44:20
|
On Mon, Sep 16, 2013 at 6:34 AM, Bartłomiej Wójcik < bar...@tu...> wrote: > Hello, > > Today I am testing the following simple cluster: Host1(vm1, IP1, > 6cores 4GB ram, disk storage(very large throughput ~Gbit/s)): 1 > gtm + 1 coordinator + (1,2,3) datanode > > Loading the database 100000 inserts and I received the time: 3m1s > and 2m57s (2 of 6 cores work mainly) - 100% of the data at > each datanode (checked) > Then I do simple selects on that data and I received the time: > 28m29s and 28m25s (2 of 6 cores work mainly) > > I refresh the database (drop and create database) > Then I add second datanode to the cluster, do the same test(inserts, > selects) and I received: > > 2m59s and 2m57s (inserts) (2 of 6 cores work mainly) - 50% of > the data at each datanode (checked) > 15m2s and 13m41s (selects) (2 of 6 cores work mainly) > > I refresh the database (drop and create database again) > Then I add third datanode and I received: > > 3m1s and 2m58 (inserts) (2 of 6 cores work mainly) - 33% of > the data at each datanode (checked) > 9m35s and 10m5s (selects) (2 of 6 cores work mainly) > > > Is this results ok ? Why did not increase write performance(inserts) > after adding a datanode? Why did not benefit more from other cores ? > > > The results are reproducible (as you can see) > How many clients/threads/processes did you have in the test? Just one? It seems like it based on that not many cores were being used. Was the test being run on the same server as the database? Also, all of the "nodes" are sharing the same storage device, not dedicated for each? For a better test, you should use multiple clients and ideally multiple servers, and preferably dedicated storage. > > > Regards! > > BW > > > > ------------------------------------------------------------------------------ > LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99! > 1,500+ hours of tutorials including VisualStudio 2012, Windows 8, > SharePoint > 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack > includes > Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13. > http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > Regards, -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Services |
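A convenient way to apply the "multiple clients" advice without writing a harness is to let pgbench replay a custom script; the table and column names below are placeholders for whatever select.sql actually queries, and the client/thread counts are arbitrary:

    # One parameterized statement from the workload, replayed by 16 clients
    cat > one_select.sql <<'EOF'
    \setrandom id 1 100000
    SELECT * FROM t WHERE id = :id;
    EOF
    pgbench -n -f one_select.sql -c 16 -j 4 -T 120 test

The -n flag skips vacuuming the standard pgbench tables, which do not exist here, and -j spreads the clients over several pgbench worker threads so the driver itself does not become the bottleneck.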
From: Bartłomiej W. <bar...@tu...> - 2013-09-16 10:35:03
|
Hello, Today I am testing the following simple cluster: Host1 (vm1, IP1, 6 cores, 4GB RAM, disk storage with very large throughput, ~Gbit/s): 1 gtm + 1 coordinator + (1, 2, 3) datanodes. Loading the database with 100000 inserts I received the times 3m1s and 2m57s (mainly 2 of 6 cores working) - 100% of the data at each datanode (checked). Then I ran simple selects on that data and received the times 28m29s and 28m25s (mainly 2 of 6 cores working). I refreshed the database (drop and create database). Then I added a second datanode to the cluster, ran the same test (inserts, selects) and received: 2m59s and 2m57s (inserts) (mainly 2 of 6 cores working) - 50% of the data at each datanode (checked); 15m2s and 13m41s (selects) (mainly 2 of 6 cores working). I refreshed the database (drop and create database again). Then I added a third datanode and received: 3m1s and 2m58s (inserts) (mainly 2 of 6 cores working) - 33% of the data at each datanode (checked); 9m35s and 10m5s (selects) (mainly 2 of 6 cores working). Are these results OK? Why did write performance (inserts) not increase after adding a datanode? Why did it not benefit more from the other cores? The results are reproducible (as you can see). Regards! BW |
From: Mason S. <ma...@st...> - 2013-09-12 18:38:04
|
Hi Bartlomiej, On Thu, Sep 12, 2013 at 6:05 AM, Bartłomiej Wójcik < bar...@tu...> wrote: > Hello! > > I'd like to thanks for Postgres-xc existence! > > > and I have a question: > > Is it safely to use it for the production purposes ? Could you tell us > by whom it is used ? > Yes, Postgres-XC is being used in production by our customers and another one is in the process of testing to deploy soon. I don't have permission to disclose them. Feel free to send me a private email if you would like to have a call on which you can ask me your questions and I can provide general (non-identifying) info. Regards, -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Services |
From: Tim K. <tk...@ut...> - 2013-09-12 15:46:56
|
On Thu, Sep 12, 2013 at 6:49 AM, Bartłomiej Wójcik < bar...@tu...> wrote: > Hello, > > When I add every new node to my cluster the performance of the entire > cluster decreases by 20 percent. > > Cluster scheme: > > Host1(vm1, IP1, 2core 4GB ram): gtm > Host2(vm2, IP2, 2core 4GB ram): 1 gtm_proxy + 1 coordinator + 1 datanode > Host3(vm3, IP3, 2core 4GB ram): 1 gtm_proxy + 1 coordinator + 1 datanode > Host4(vm4, IP4, 2core 4GB ram): 1 gtm_proxy + 1 coordinator + 1 datanode > > I run pgbench (-S) and getting: ~1200 tps (on each) > > when I use (host1 and host2 and host3) the result is: ~1500 tps (on each) > > when I use (host 1 and host 2) the result is: ~1800 tps > > > Is this normal behavior ? > > All hosts use the same disk storage(very large throughput ~Gbit/s), > cores are not common, ramare not common > > > Regards! > > > I'm a lurker on the list, but I thought pg-xc was not designed to maximize transaction rate. Rather its designed for quickly completing a few very large transactions. THK > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. Consolidate legacy IT systems to a single system of record for IT > 2. Standardize and globalize service processes across IT > 3. Implement zero-touch automation to replace manual, redundant tasks > http://pubads.g.doubleclick.net/gampad/clk?id=51271111&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- http://www.keittlab.org/ |
From: Pavan D. <pav...@gm...> - 2013-09-12 12:02:26
|
On Thu, Sep 12, 2013 at 5:19 PM, Bartłomiej Wójcik < bar...@tu...> wrote: > Hello, > > When I add every new node to my cluster the performance of the entire > cluster decreases by 20 percent. > > Cluster scheme: > > Host1(vm1, IP1, 2core 4GB ram): gtm > Host2(vm2, IP2, 2core 4GB ram): 1 gtm_proxy + 1 coordinator + 1 datanode > Host3(vm3, IP3, 2core 4GB ram): 1 gtm_proxy + 1 coordinator + 1 datanode > Host4(vm4, IP4, 2core 4GB ram): 1 gtm_proxy + 1 coordinator + 1 datanode > > I run pgbench (-S) and getting: ~1200 tps (on each) > > when I use (host1 and host2 and host3) the result is: ~1500 tps (on each) > > when I use (host 1 and host 2) the result is: ~1800 tps > > > Is this normal behavior ? > Are you using pgbench supplied with the XC ? If so, AFAIR pgbench has been modified to distribute the pgbench_accounts table on "bid" field. This was done to improve pgbench TPCC queries. I can see that it can adversely affect the -S case though. Can you try using the pgbench supplied with stock Postgres ? The automatic distribution will cause the pgbench_accounts table being distributed on the "aid" which is also the column used in WHERE clause of the select only queries of -S option. I would expect no performance drop in that case. Whether you can get a performance improvement with additional nodes will depend on the workload though. You really want to have a work load which can not fit your single server. IOW a scale factor such that the entire pgbench_accounts table does not fit the RAM of a single server will be appropriate. You would also want to run the test long enough so that shared buffers are populated with the data. Thanks, Pavan -- Pavan Deolasee http://www.linkedin.com/in/pavandeolasee |
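If rebuilding pgbench is inconvenient, the distribution can also be inspected and changed by hand; the catalog columns below are as I recall them from pgxc_class, and ALTER TABLE ... DISTRIBUTE BY is the 1.1 redistribution syntax, so double-check both against your installation:

    psql -d test <<'SQL'
    -- which column is pgbench_accounts currently distributed on?
    SELECT c.relname, x.pclocatortype, a.attname
      FROM pgxc_class x
      JOIN pg_class c ON c.oid = x.pcrelid
      LEFT JOIN pg_attribute a ON a.attrelid = x.pcrelid AND a.attnum = x.pcattnum
     WHERE c.relname = 'pgbench_accounts';

    -- redistribute on aid so that the -S lookups hit a single datanode
    ALTER TABLE pgbench_accounts DISTRIBUTE BY HASH (aid);
    SQL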
From: Bartłomiej W. <bar...@tu...> - 2013-09-12 11:49:40
|
Hello, With every new node I add to my cluster, the performance of the entire cluster decreases by about 20 percent. Cluster scheme: Host1 (vm1, IP1, 2 cores, 4GB RAM): gtm; Host2 (vm2, IP2, 2 cores, 4GB RAM): 1 gtm_proxy + 1 coordinator + 1 datanode; Host3 (vm3, IP3, 2 cores, 4GB RAM): 1 gtm_proxy + 1 coordinator + 1 datanode; Host4 (vm4, IP4, 2 cores, 4GB RAM): 1 gtm_proxy + 1 coordinator + 1 datanode. I run pgbench (-S) and get ~1200 tps (on each). When I use only host1, host2 and host3 the result is ~1500 tps (on each), and when I use only host1 and host2 the result is ~1800 tps. Is this normal behavior? All hosts use the same disk storage (very large throughput, ~Gbit/s); cores are not shared, and RAM is not shared. Regards! |
From: Bartłomiej W. <bar...@tu...> - 2013-09-12 10:23:13
|
Hello! I'd like to say thanks for Postgres-XC's existence! And I have a question: Is it safe to use for production purposes? Could you tell us by whom it is used? Best Regards! |
From: Koichi S. <koi...@gm...> - 2013-09-09 02:57:24
|
I feel it's better to maintain pgxc_ctl in a separate repo and include a snapshot into XC core release for user's convenience, as done in many of the contrib modules of PG. --- Koichi Suzuki 2013/9/9 Michael Paquier <mic...@gm...> > On Mon, Sep 9, 2013 at 11:11 AM, 鈴木 幸市 <ko...@in...> wrote: > > Having pgxc_ctl as a part of XC material (not a core, it's in the > contrib) will make deploying simpler. I agree to have pgxc_ctl repo > independent and can update separately to provide more flexible and quicker > fix/enhancement. > Thanks. Note that my suggestion is not for 1.1 stable branch where > this module is already integrated in code and released, as we > shouldn't change the features there at this point, but only for master > and future releases. > -- > Michael > |
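For the "separate repo plus snapshot in contrib" arrangement described here, git subtree is one workable mechanism; the repository URL is purely hypothetical:

    # Pull a snapshot of an externally maintained pgxc_ctl into contrib/
    git subtree add --prefix=contrib/pgxc_ctl \
        git://example.org/pgxc_ctl.git master --squash
    # Refresh the snapshot before cutting a release
    git subtree pull --prefix=contrib/pgxc_ctl \
        git://example.org/pgxc_ctl.git master --squash

The --squash option keeps the core repository's history to one commit per snapshot, which matches the "include a snapshot at release time" idea rather than a full history merge.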