From: Wolfgang K. <fel...@gm...> - 2013-08-25 11:35:01

> I'm a post-graduation student

Hm.

> and my conclusion work is about Postgres-XC. I've read a lot about
> the project and I really think this is a powerful tool.
> Unfortunately, I couldn't achieve the performance I had expected.
>
> I've configured a cluster with 4 virtualized servers (all of them on
> the same hardware using Virtual Box):

You're running several RDBMS server instances on one physical computer
and you expect to get more throughput from that than with one single
instance running on the same physical computer? Errr, huh?

> I used pgbench to compare the number of transactions per second in
> Postgres-XC and in a standalone PostgreSQL server. Postgres-XC
> achieved only half the TPS compared to PostgreSQL. The test basically
> consisted of inserts into distributed tables (I used a custom script
> in pgbench) simulating many clients simultaneously.

That's what is to be expected. The same hardware, minus the additional
overhead for virtualisation and replication, *has to* give you less
throughput than one single RDBMS server instance. And if you emulate
the clients by running scripts on the same hardware as the servers...

> - How many servers would be enough to have write/read scalability
>   compared to a single PostgreSQL?

More than one. *Hardware* servers.

> - Would it be a problem to have all servers on the same hardware?

That's not just "a problem", that's the whole point. If you want more
throughput, you need more hardware*. That's what replication is made
for in the first place.

Sincerely,
Wolfgang

* Of course you also must optimise your configuration to get the
maximum out of the hardware.
From: Mason S. <ma...@st...> - 2013-08-23 22:20:09

On Fri, Aug 23, 2013 at 12:46 PM, Aurelienne Aparecida Souza Jorge <aur...@gm...> wrote:

> Hello!
>
> I'm a post-graduation student and my conclusion work is about
> Postgres-XC. I've read a lot about the project and I really think
> this is a powerful tool. Unfortunately, I couldn't achieve the
> performance I had expected.
>
> I've configured a cluster with 4 virtualized servers (all of them on
> the same hardware using Virtual Box):
> Server 1: gtm
> Server 2: 1 gtm_proxy + 1 datanode + 1 coordinator
> Server 3: 1 gtm_proxy + 1 datanode + 1 coordinator
> Server 4: 1 gtm_proxy + 1 datanode + 1 coordinator

How many cores are on the underlying system? Keep in mind that those
resources are going to be shared. Are you CPU bound? Was your base
PostgreSQL instance for comparison also running in a VM?

In addition, does each server have its own dedicated storage? Or are
they all contending for the same physical device underneath? Do you
have fsync on (and is it really fsync'ed by the VM)?

> The version of Postgres-XC I'm using is 1.0.3, and the OS on all
> servers is Debian 7.
>
> I used pgbench to compare the number of transactions per second in
> Postgres-XC and in a standalone PostgreSQL server. Postgres-XC
> achieved only half the TPS compared to PostgreSQL. The test basically
> consisted of inserts into distributed tables (I used a custom script
> in pgbench) simulating many clients simultaneously.
>
> I would like to know what configuration would be enough to have
> better performance than PostgreSQL.
>
> - How many servers would be enough to have write/read scalability
>   compared to a single PostgreSQL?
>
> - Would it be a problem to have all servers on the same hardware?
>
> - Should I distribute the workload among all coordinators? I'm
>   basically using one coordinator, since I couldn't find a way to use
>   all of them simultaneously with pgbench.
>
> I would be very thankful if you could help me.
>
> Regards,
>
> Aurelienne Jorge.
--
Mason Sharp
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Services
From: Aurelienne A. S. J. <aur...@gm...> - 2013-08-23 19:46:53

Hello!

I'm a post-graduation student and my conclusion work is about
Postgres-XC. I've read a lot about the project and I really think this
is a powerful tool. Unfortunately, I couldn't achieve the performance I
had expected.

I've configured a cluster with 4 virtualized servers (all of them on the
same hardware using Virtual Box):

Server 1: gtm
Server 2: 1 gtm_proxy + 1 datanode + 1 coordinator
Server 3: 1 gtm_proxy + 1 datanode + 1 coordinator
Server 4: 1 gtm_proxy + 1 datanode + 1 coordinator

The version of Postgres-XC I'm using is 1.0.3, and the OS on all servers
is Debian 7.

I used pgbench to compare the number of transactions per second in
Postgres-XC and in a standalone PostgreSQL server. Postgres-XC achieved
only half the TPS compared to PostgreSQL. The test basically consisted
of inserts into distributed tables (I used a custom script in pgbench)
simulating many clients simultaneously.

I would like to know what configuration would be enough to have better
performance than PostgreSQL.

- How many servers would be enough to have write/read scalability
  compared to a single PostgreSQL?

- Would it be a problem to have all servers on the same hardware?

- Should I distribute the workload among all coordinators? I'm basically
  using one coordinator, since I couldn't find a way to use all of them
  simultaneously with pgbench.

I would be very thankful if you could help me.

Regards,

Aurelienne Jorge.
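[Editor's note: for readers following along, a "distributed table" in
Postgres-XC is declared with a DISTRIBUTE BY clause, which decides which
datanode stores each row. A minimal sketch of the kind of insert target
the benchmark above describes — the table and column names here are
invented for illustration, not taken from this thread:]

```sql
-- Hypothetical pgbench-style table, hash-distributed across the
-- datanodes on its key column; each INSERT is routed by hash(aid).
CREATE TABLE bench_accounts (
    aid     integer NOT NULL,
    bid     integer,
    balance integer
) DISTRIBUTE BY HASH (aid);

-- Routed to whichever datanode hash(1) maps to.
INSERT INTO bench_accounts VALUES (1, 1, 0);
```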
From: Ahsan H. <ahs...@en...> - 2013-08-22 08:00:49

Congratulations. This is a major achievement.

On Thu, Aug 22, 2013 at 10:13 AM, Koichi Suzuki <koi...@gm...> wrote:

> Please copy this announcement to your channel.
>
> Thank you;
> ---
> Koichi Suzuki
>
> 2013/8/22 Koichi Suzuki <koi...@gm...>
>
>> Postgres-XC development group is proud to announce the release of
>> Postgres-XC version 1.1. This is the second major release and comes
>> with many useful features.
>>
>> The source tarball will be available at
>> http://sourceforge.net/projects/postgres-xc/files/Version_1.1/pgxc-v1.1.tar.gz/download
>> which comes with HTML documentation and man pages.
>>
>> Please visit the project page http://postgres-xc.sourceforge.net and
>> the development page https://sourceforge.net/projects/postgres-xc/
>> for more materials.
>>
>> New features of Postgres-XC 1.1 include:
>>
>> * Node addition and removal while the Postgres-XC cluster is in
>>   operation.
>> * Added --restoremode option to pg_ctl to import catalog information
>>   from another coordinator/datanode, used when adding a new node.
>> * Added --include-nodes option to pg_dump and pg_dumpall to export
>>   node information as well. Mainly for node addition.
>> * pgxc_lock_for_backup() function to disable DDL while a new node is
>>   being added and the catalog is exported to the new node.
>> * Row TRIGGER support.
>> * RETURNING support.
>> * pgxc_ctl tool for Postgres-XC cluster configuration and operation
>>   (contrib module).
>> * Backup of the GTM restart point with the CREATE BARRIER statement.
>> * Merged with PostgreSQL 9.2.4.
>> * ALTER TABLE statement to redistribute tables.
>>
>> We also have a number of improvements in the planner for better
>> performance, such as:
>>
>> * Push down sorting to the datanodes by adding an ORDER BY clause to
>>   the queries sent to the datanodes.
>> * Push down LIMIT clauses to the datanodes.
>> * Push down outer joins to the datanodes.
>> * Improved fast query shipping to ship queries containing subqueries.
>> * Push down GROUP BY clauses to the datanodes when there are ORDER
>>   BY, LIMIT and other clauses in the query.
>>
>> It also comes with a number of other improvements and fixes.
>>
>> The group appreciates all the members who provided valuable code and
>> fruitful discussions.
>>
>> Best Regards;
>> ---
>> Koichi Suzuki

--
Ahsan Hadi
Snr Director Product Development
EnterpriseDB Corporation
The Enterprise Postgres Company

Phone: +92-51-8358874
Mobile: +92-333-5162114

Website: www.enterprisedb.com
EnterpriseDB Blog: http://blogs.enterprisedb.com/
Follow us on Twitter: http://www.twitter.com/enterprisedb
From: Koichi S. <koi...@gm...> - 2013-08-22 05:13:37

Please copy this announcement to your channel.

Thank you;
---
Koichi Suzuki

2013/8/22 Koichi Suzuki <koi...@gm...>

> Postgres-XC development group is proud to announce the release of
> Postgres-XC version 1.1. This is the second major release and comes
> with many useful features.
>
> The source tarball will be available at
> http://sourceforge.net/projects/postgres-xc/files/Version_1.1/pgxc-v1.1.tar.gz/download
> which comes with HTML documentation and man pages.
>
> Please visit the project page http://postgres-xc.sourceforge.net and
> the development page https://sourceforge.net/projects/postgres-xc/
> for more materials.
>
> New features of Postgres-XC 1.1 include:
>
> * Node addition and removal while the Postgres-XC cluster is in
>   operation.
> * Added --restoremode option to pg_ctl to import catalog information
>   from another coordinator/datanode, used when adding a new node.
> * Added --include-nodes option to pg_dump and pg_dumpall to export
>   node information as well. Mainly for node addition.
> * pgxc_lock_for_backup() function to disable DDL while a new node is
>   being added and the catalog is exported to the new node.
> * Row TRIGGER support.
> * RETURNING support.
> * pgxc_ctl tool for Postgres-XC cluster configuration and operation
>   (contrib module).
> * Backup of the GTM restart point with the CREATE BARRIER statement.
> * Merged with PostgreSQL 9.2.4.
> * ALTER TABLE statement to redistribute tables.
>
> We also have a number of improvements in the planner for better
> performance, such as:
>
> * Push down sorting to the datanodes by adding an ORDER BY clause to
>   the queries sent to the datanodes.
> * Push down LIMIT clauses to the datanodes.
> * Push down outer joins to the datanodes.
> * Improved fast query shipping to ship queries containing subqueries.
> * Push down GROUP BY clauses to the datanodes when there are ORDER
>   BY, LIMIT and other clauses in the query.
>
> It also comes with a number of other improvements and fixes.
>
> The group appreciates all the members who provided valuable code and
> fruitful discussions.
>
> Best Regards;
> ---
> Koichi Suzuki
From: Koichi S. <koi...@gm...> - 2013-08-22 04:52:42

Postgres-XC development group is proud to announce the release of
Postgres-XC version 1.1. This is the second major release and comes with
many useful features.

The source tarball will be available at
http://sourceforge.net/projects/postgres-xc/files/Version_1.1/pgxc-v1.1.tar.gz/download
which comes with HTML documentation and man pages.

Please visit the project page http://postgres-xc.sourceforge.net and
the development page https://sourceforge.net/projects/postgres-xc/ for
more materials.

New features of Postgres-XC 1.1 include:

* Node addition and removal while the Postgres-XC cluster is in
  operation.
* Added --restoremode option to pg_ctl to import catalog information
  from another coordinator/datanode, used when adding a new node.
* Added --include-nodes option to pg_dump and pg_dumpall to export node
  information as well. Mainly for node addition.
* pgxc_lock_for_backup() function to disable DDL while a new node is
  being added and the catalog is exported to the new node.
* Row TRIGGER support.
* RETURNING support.
* pgxc_ctl tool for Postgres-XC cluster configuration and operation
  (contrib module).
* Backup of the GTM restart point with the CREATE BARRIER statement.
* Merged with PostgreSQL 9.2.4.
* ALTER TABLE statement to redistribute tables.

We also have a number of improvements in the planner for better
performance, such as:

* Push down sorting to the datanodes by adding an ORDER BY clause to the
  queries sent to the datanodes.
* Push down LIMIT clauses to the datanodes.
* Push down outer joins to the datanodes.
* Improved fast query shipping to ship queries containing subqueries.
* Push down GROUP BY clauses to the datanodes when there are ORDER BY,
  LIMIT and other clauses in the query.

It also comes with a number of other improvements and fixes.

The group appreciates all the members who provided valuable code and
fruitful discussions.

Best Regards;
---
Koichi Suzuki
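[Editor's note: two of the features listed above are exposed as plain
SQL statements. A hedged sketch of how they might be invoked — the
barrier and table names are invented for the example, and the exact
redistribution syntax should be confirmed against the 1.1
documentation:]

```sql
-- Record a consistent GTM restart point that backups can refer to
-- (the barrier name is arbitrary).
CREATE BARRIER 'before_nightly_backup';

-- Redistribute an existing table without dump/restore, e.g. turn a
-- hash-distributed lookup table into a replicated one.
ALTER TABLE lookup_codes DISTRIBUTE BY REPLICATION;
```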
From: 木 幸市 <ko...@in...> - 2013-08-22 01:03:24

Yes, this is a current XC limitation. In all these cases, we would need
to visit another datanode to enforce the referential integrity, which is
not provided generally yet.

An example of the workaround is, for:

> ALTER TABLE ONLY cookbook_versions
>     ADD CONSTRAINT cookbook_versions_cookbook_id_fkey FOREIGN KEY (cookbook_id) REFERENCES cookbooks(id) ON DELETE RESTRICT;

1. Change the cookbooks distribution to "replication", or
2. Make cookbook_versions.cookbook_id and cookbooks.id the distribution
   keys.

Please note that in both cases the constraint is enforced just locally
in each datanode.

BTW, I'm seeing many similar use cases, and this has a higher priority
in future development.

Regards;
---
Koichi Suzuki

On 2013/08/22, at 8:49, Darío Ezequiel Nievas <dar...@me...> wrote:

> Hi Guys.
>
> I could really use your help. I'm a sysadmin. I'm not a DBA, and I
> don't have any experience with PostgreSQL.
>
> I'm trying PGXC as an HA solution for the PGSQL DB component in our
> Chef infrastructure. The problem I'm currently having is that when I
> try to import the schema provided by Opscode, I get the following
> error:
>
> ERROR: Cannot create foreign key whose evaluation cannot be enforced to remote nodes
>
> I think I'm getting the error on the following statements:
>
> --
> -- Name: cookbook_version_checksums_org_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: -
> --
>
> ALTER TABLE ONLY cookbook_version_checksums
>     ADD CONSTRAINT cookbook_version_checksums_org_id_fkey FOREIGN KEY (org_id, checksum) REFERENCES checksums(org_id, checksum) ON UPDATE CASCADE ON DELETE RESTRICT;
>
> --
> -- Name: cookbook_versions_cookbook_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: -
> --
>
> ALTER TABLE ONLY cookbook_versions
>     ADD CONSTRAINT cookbook_versions_cookbook_id_fkey FOREIGN KEY (cookbook_id) REFERENCES cookbooks(id) ON DELETE RESTRICT;
>
> --
> -- Name: data_bag_items_org_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: -
> --
>
> ALTER TABLE ONLY data_bag_items
>     ADD CONSTRAINT data_bag_items_org_id_fkey FOREIGN KEY (org_id, data_bag_name) REFERENCES data_bags(org_id, name) ON UPDATE CASCADE ON DELETE CASCADE;
>
> Is there a way to work around this? Or is there a limitation in PGXC
> for this scenario?
>
> I'm attaching the pgsql_schema.sql for reference.
>
> Thanks in advance!
> Dario Nievas
>
> <pgsql_schema.sql>
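[Editor's note: Koichi's two workarounds can be sketched as DDL. This is
an illustrative sketch using a simplified version of the
cookbooks/cookbook_versions pair from the thread, not the actual Opscode
schema; note, as stated above, that the constraint is then only enforced
locally on each datanode:]

```sql
-- Workaround 1: replicate the referenced table, so every datanode
-- holds a full copy and the FK can be checked locally.
CREATE TABLE cookbooks (
    id   integer PRIMARY KEY,
    name text
) DISTRIBUTE BY REPLICATION;

CREATE TABLE cookbook_versions (
    id          integer,
    cookbook_id integer REFERENCES cookbooks (id) ON DELETE RESTRICT
) DISTRIBUTE BY HASH (cookbook_id);

-- Workaround 2 (alternative): distribute BOTH tables on the column the
-- FK joins on, i.e. DISTRIBUTE BY HASH (id) on cookbooks and
-- DISTRIBUTE BY HASH (cookbook_id) on cookbook_versions, so that a row
-- and the row it references always land on the same datanode.
```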
From: Koichi S. <koi...@gm...> - 2013-08-21 02:19:50

Thanks Eiji for the effort. Looking forward to your outcome.

Regards;
---
Koichi Suzuki

2013/8/21 Eiji UWAIZUMI <uwa...@as...>

> I think I'll try to output a detailed log of this matter.
> I will output a detailed log with the following settings.
>
> Postgres-XC
>   log_min_messages = debug5
>
> Server
>   iostat and vmstat
>
> It may take some time, but I will report the results.
>
> Regards,
> uwaizumi
>
> (2013/08/16 14:28), Nikhil Sontakke wrote:
>> What I think is happening here is that the COPY process is trying to
>> send data to the datanodes, but they are not consuming it at the same
>> rate. Hence the send_some call sends only a portion of the data and
>> the rest of the data has to remain buffered. This eventually builds
>> up to more than half a GB, and the subsequent repalloc call to double
>> the buffer fails because 1GB is the max limit. This is evident from
>> the COPY error message below as well.
>>
>> What we should do is maybe, if the data outbuffer size crosses a
>> pre-defined limit like 16MB or so, use pgxc_node_flush to ensure that
>> all of it has reached the datanode. After some point even the socket
>> will not be able to handle large direct send_some calls, and we will
>> increase the buffer size unnecessarily.
>>
>> Regards,
>> Nikhils
>>
>> On Fri, Aug 16, 2013 at 8:37 AM, Nikhil Sontakke <ni...@st...> wrote:
>>
>>> Thanks Eiji san,
>>>
>>>> At the request of Hemmi-san, I applied the patch to Postgres-XC
>>>> v1.1 and ran DBT-1. Unfortunately, the results did not change.
>>>>
>>>> [results]
>>>> $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter '>';"
>>>> ERROR: invalid memory alloc request size 1073741824
>>>> CONTEXT: COPY address, line 30305382:
>>>> "30305382>TXnEREEwqOLV11zEFGDXV4AXEDEeqLn9JDDwGxHw>TCyFEgXDNVOCihBdmdExEBAAvnXdvEEBDEE>TGEgxF5RdiuAEz..."
>>>
>>> Yeah, as I slept over it I realized that what I am trying to solve
>>> is a "COPY TO" issue and what you guys must be facing is a "COPY
>>> FROM" error. I will try to look at this once I can make more
>>> progress on the "COPY TO" issue.
>>>
>>> Regards,
>>> Nikhils
>>>
>>>> Regards,
>>>> uwaizumi
>>>>
>>>> (2013/08/15 18:06), Hitoshi HEMMI wrote:
>>>>> Hi Nikhil,
>>>>>
>>>>> Thank you for the patch. I will be off from tomorrow, but I will
>>>>> ask our team to test it and post the result.
>>>>>
>>>>> Best,
>>>>> -hemmi
>>>>>
>>>>> Nikhil Sontakke wrote:
>>>>>> Hi Hitoshi-san,
>>>>>>
>>>>>> PFA a patch which should help use less memory while running COPY
>>>>>> commands. This is against REL 1.1. Please do let me know if this
>>>>>> helps in your case.
>>>>>>
>>>>>> Regards,
>>>>>> Nikhils
>>>>>>
>>>>>> On Wed, Aug 14, 2013 at 3:38 PM, Hitoshi HEMMI <hem...@la...> wrote:
>>>>>>
>>>>>>> Thank you, Nikhil.
>>>>>>> We will wait until a patch is available.
>>>>>>>
>>>>>>> Best,
>>>>>>> -hemmi
>>>>>>>
>>>>>>> Nikhil Sontakke wrote:
>>>>>>>> Hi Hitoshi-san,
>>>>>>>>
>>>>>>>> I am working on a similar issue, coincidentally. Basically
>>>>>>>> what's happening is that the COPY command is creating tuples
>>>>>>>> which get freed only at the end, when the command finishes, so
>>>>>>>> RAM ends up being consumed and can lead to the error that you
>>>>>>>> have provided.
>>>>>>>>
>>>>>>>> See the discussion around RemoteQueryNext on another thread.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Nikhils
>>>>>>>>
>>>>>>>> On Wed, Aug 14, 2013 at 1:56 PM, Hitoshi HEMMI <hem...@la...> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> We encountered an "invalid memory alloc request size ..."
>>>>>>>>> error in Postgres-XC when preparing the DBT-1 benchmark, and
>>>>>>>>> need someone's help.
>>>>>>>>> (First of all, what does the error message mean?)
>>>>>>>>>
>>>>>>>>> ------------------
>>>>>>>>> Version of XC: v1.1.beta
>>>>>>>>>
>>>>>>>>> [process that causes the error]
>>>>>>>>> - Data generation
>>>>>>>>>   cd <DBT-1 dir>/datagen/
>>>>>>>>>   ./datagen -i 10000 -u 15000 -p <data dir>
>>>>>>>>>
>>>>>>>>> - Prepare XC cluster
>>>>>>>>>   We are using the following HA configuration. We are not sure
>>>>>>>>>   this configuration is essential for the error, but we have
>>>>>>>>>   never encountered the error without XC slave nodes.
>>>>>>>>>
>>>>>>>>>   ServerA: GTM (master)
>>>>>>>>>   ServerB: GTM (slave)
>>>>>>>>>   ServerC: GTM-Proxy, Coordinator1 (master), Datanode1 (master),
>>>>>>>>>            Coordinator2 (slave), Datanode2 (slave)
>>>>>>>>>   ServerD: GTM-Proxy, Coordinator2 (master), Datanode2 (master),
>>>>>>>>>            Coordinator1 (slave), Datanode1 (slave)
>>>>>>>>>
>>>>>>>>> - Data loading
>>>>>>>>>   $ createdb -E SQL_ASCII -T template0 -p 5422 DBT1
>>>>>>>>>   $ psql -a -p 5422 -f <DBT-1 dir>/scripts/pgsql/create_tables.sql
>>>>>>>>>   $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter '>';"
>>>>>>>>>
>>>>>>>>> [the error]
>>>>>>>>> ERROR: invalid memory alloc request size 1073741824
>>>>>>>>> CONTEXT: COPY address, line 33191219:
>>>>>>>>> "33191219>EDpcFnEEAe954E5Tg6XbkiBVGLAcuHBhFAEElEE7>zxPzRj
>>>>>>>>>
>>>>>>>>> [Other information]
>>>>>>>>> It seems small-sized data can be copied:
>>>>>>>>>
>>>>>>>>> - DBT-1 tables with size (item 10000, user 15000)
>>>>>>>>>   address     4.7G  ERROR
>>>>>>>>>   author      894K  OK
>>>>>>>>>   cc_xacts    4.4G  ERROR
>>>>>>>>>   customer     20G  ERROR
>>>>>>>>>   item        5.0M  OK
>>>>>>>>>   order_line   11G  ERROR
>>>>>>>>>   orders      4.1G  ERROR
>>>>>>>>>   stock        78K  OK
>>>>>>>>> --------------------
>>>>>>>>>
>>>>>>>>> Best regards,
>>>>>>>>>
>>>>>>>>> -hemmi
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> Hitoshi HEMMI
>>>>>>>>> NTT Open Source Software Center
>>>>>>>>> hem...@la...
>>>>>>>>> Tel: (03)5860-5115
>>>>>>>>> Fax: (03)5463-5490
>
> --
> -----------------------------------
> Eiji UWAIZUMI (上泉 英二)
> uwa...@as...
> -----------------------------------
From: Eiji U. <uwa...@as...> - 2013-08-21 01:58:28

I think I'll try to output a detailed log of this matter.
I will output a detailed log with the following settings.

Postgres-XC
  log_min_messages = debug5

Server
  iostat and vmstat

It may take some time, but I will report the results.

Regards,
uwaizumi

(2013/08/16 14:28), Nikhil Sontakke wrote:
> What I think is happening here is that the COPY process is trying to
> send data to the datanodes, but they are not consuming it at the same
> rate. Hence the send_some call sends only a portion of the data and
> the rest of the data has to remain buffered. This eventually builds up
> to more than half a GB, and the subsequent repalloc call to double the
> buffer fails because 1GB is the max limit. This is evident from the
> COPY error message below as well.
>
> What we should do is maybe, if the data outbuffer size crosses a
> pre-defined limit like 16MB or so, use pgxc_node_flush to ensure that
> all of it has reached the datanode. After some point even the socket
> will not be able to handle large direct send_some calls, and we will
> increase the buffer size unnecessarily.
>
> Regards,
> Nikhils
>
> On Fri, Aug 16, 2013 at 8:37 AM, Nikhil Sontakke <ni...@st...> wrote:
>
>> Thanks Eiji san,
>>
>>> At the request of Hemmi-san, I applied the patch to Postgres-XC v1.1
>>> and ran DBT-1. Unfortunately, the results did not change.
>>>
>>> [results]
>>> $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter '>';"
>>> ERROR: invalid memory alloc request size 1073741824
>>> CONTEXT: COPY address, line 30305382:
>>> "30305382>TXnEREEwqOLV11zEFGDXV4AXEDEeqLn9JDDwGxHw>TCyFEgXDNVOCihBdmdExEBAAvnXdvEEBDEE>TGEgxF5RdiuAEz..."
>>
>> Yeah, as I slept over it I realized that what I am trying to solve is
>> a "COPY TO" issue and what you guys must be facing is a "COPY FROM"
>> error. I will try to look at this once I can make more progress on
>> the "COPY TO" issue.
>>
>> Regards,
>> Nikhils
>>
>>> Regards,
>>> uwaizumi
>>>
>>> (2013/08/15 18:06), Hitoshi HEMMI wrote:
>>>> Hi Nikhil,
>>>>
>>>> Thank you for the patch. I will be off from tomorrow, but I will
>>>> ask our team to test it and post the result.
>>>>
>>>> Best,
>>>> -hemmi
>>>>
>>>> Nikhil Sontakke wrote:
>>>>> Hi Hitoshi-san,
>>>>>
>>>>> PFA a patch which should help use less memory while running COPY
>>>>> commands. This is against REL 1.1. Please do let me know if this
>>>>> helps in your case.
>>>>>
>>>>> Regards,
>>>>> Nikhils
>>>>>
>>>>> On Wed, Aug 14, 2013 at 3:38 PM, Hitoshi HEMMI <hem...@la...> wrote:
>>>>>
>>>>>> Thank you, Nikhil.
>>>>>> We will wait until a patch is available.
>>>>>>
>>>>>> Best,
>>>>>> -hemmi
>>>>>>
>>>>>> Nikhil Sontakke wrote:
>>>>>>> Hi Hitoshi-san,
>>>>>>>
>>>>>>> I am working on a similar issue, coincidentally. Basically
>>>>>>> what's happening is that the COPY command is creating tuples
>>>>>>> which get freed only at the end, when the command finishes, so
>>>>>>> RAM ends up being consumed and can lead to the error that you
>>>>>>> have provided.
>>>>>>>
>>>>>>> See the discussion around RemoteQueryNext on another thread.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Nikhils
>>>>>>>
>>>>>>> On Wed, Aug 14, 2013 at 1:56 PM, Hitoshi HEMMI <hem...@la...> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> We encountered an "invalid memory alloc request size ..." error
>>>>>>>> in Postgres-XC when preparing the DBT-1 benchmark, and need
>>>>>>>> someone's help.
>>>>>>>> (First of all, what does the error message mean?)
>>>>>>>>
>>>>>>>> ------------------
>>>>>>>> Version of XC: v1.1.beta
>>>>>>>>
>>>>>>>> [process that causes the error]
>>>>>>>> - Data generation
>>>>>>>>   cd <DBT-1 dir>/datagen/
>>>>>>>>   ./datagen -i 10000 -u 15000 -p <data dir>
>>>>>>>>
>>>>>>>> - Prepare XC cluster
>>>>>>>>   We are using the following HA configuration. We are not sure
>>>>>>>>   this configuration is essential for the error, but we have
>>>>>>>>   never encountered the error without XC slave nodes.
>>>>>>>>
>>>>>>>>   ServerA: GTM (master)
>>>>>>>>   ServerB: GTM (slave)
>>>>>>>>   ServerC: GTM-Proxy, Coordinator1 (master), Datanode1 (master),
>>>>>>>>            Coordinator2 (slave), Datanode2 (slave)
>>>>>>>>   ServerD: GTM-Proxy, Coordinator2 (master), Datanode2 (master),
>>>>>>>>            Coordinator1 (slave), Datanode1 (slave)
>>>>>>>>
>>>>>>>> - Data loading
>>>>>>>>   $ createdb -E SQL_ASCII -T template0 -p 5422 DBT1
>>>>>>>>   $ psql -a -p 5422 -f <DBT-1 dir>/scripts/pgsql/create_tables.sql
>>>>>>>>   $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter '>';"
>>>>>>>>
>>>>>>>> [the error]
>>>>>>>> ERROR: invalid memory alloc request size 1073741824
>>>>>>>> CONTEXT: COPY address, line 33191219:
>>>>>>>> "33191219>EDpcFnEEAe954E5Tg6XbkiBVGLAcuHBhFAEElEE7>zxPzRj
>>>>>>>>
>>>>>>>> [Other information]
>>>>>>>> It seems small-sized data can be copied:
>>>>>>>>
>>>>>>>> - DBT-1 tables with size (item 10000, user 15000)
>>>>>>>>   address     4.7G  ERROR
>>>>>>>>   author      894K  OK
>>>>>>>>   cc_xacts    4.4G  ERROR
>>>>>>>>   customer     20G  ERROR
>>>>>>>>   item        5.0M  OK
>>>>>>>>   order_line   11G  ERROR
>>>>>>>>   orders      4.1G  ERROR
>>>>>>>>   stock        78K  OK
>>>>>>>> --------------------
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>>
>>>>>>>> -hemmi
>>>>>>>>
>>>>>>>> --
>>>>>>>> Hitoshi HEMMI
>>>>>>>> NTT Open Source Software Center
>>>>>>>> hem...@la...
>>>>>>>> Tel: (03)5860-5115
>>>>>>>> Fax: (03)5463-5490
>>>>> >>> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >>>>> < >>> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >>>> >>>>> _______________________________________________ >>>>> Postgres-xc-general mailing list >>>>> Pos...@li... >>>>> <mailto:Pos...@li...> >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> StormDB - http://www.stormdb.com >>>>> The Database Cloud >>>> >>>> >>> >>> >>> -- >>> ----------------------------------- >>> 上泉 英二 >>> uwa...@as... >>> ----------------------------------- >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Get 100% visibility into Java/.NET code with AppDynamics Lite! >>> It's a free troubleshooting tool designed for production. >>> Get down to code-level detail for bottlenecks, with <2% overhead. >>> Download for free and get started troubleshooting in minutes. >>> >>> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> > > > |
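The log-collection plan in the message above can be sketched as a shell snippet. The data directory, log paths, sampling interval, and sample count below are illustrative assumptions, not taken from the thread:

```shell
# Assumed Postgres-XC coordinator data directory -- adjust to your layout.
PGDATA="${PGDATA:-$HOME/pgxc/coord1}"
mkdir -p "$PGDATA"

# Raise server logging to the most verbose level, as proposed above.
echo "log_min_messages = debug5" >> "$PGDATA/postgresql.conf"

# Sample disk and memory statistics (interval and count are illustrative)
# so the COPY failure can be correlated with I/O and swap activity.
command -v iostat >/dev/null && iostat -x 1 2 > /tmp/iostat.log &
command -v vmstat >/dev/null && vmstat 1 2 > /tmp/vmstat.log &
wait
```

The server must be restarted (or its configuration reloaded) for the new log_min_messages setting to take effect.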
From: Koichi S. <koi...@gm...> - 2013-08-21 01:28:34
|
You can add gtm slave by "add gtm slave" command of pgxc_ctl, then you can start it by "start gtm slave" command. For details, please refer to pgxc_ctl document, available at http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html Good luck; --- Koichi Suzuki 2013/8/20 Sean Hogan <se...@co...> > Sorry, meant to CC the list with my reply. > > > On 13-08-20 08:52 AM, Sean Hogan wrote: > > On 13-08-19 10:25 PM, Koichi Suzuki wrote: > > I have several questions on your configuration. > > 1. Did you configure your XC cluster with pgxc_ctl? If not, I'm afraid > pgxc_ctl may not work correctly if pgxc_ctl configuration file is exactly > matches your manual configuration. > > > Yes I used pgxc_ctl. > > > 2. I assume that you're configuring GTM high-available, because GTM > slave does not help if coordinator or datanode crashes. It helps only > when GTM master fails. When you find GTM master fails, you can issue > gtm_ctl to GTM slave to promote it to the new master. (pgxc_ctl failover > command will take care of this process). > > > Yes, using a slave GTM. My question is the same: after I use the failover > command to make the slave the new master, how can I "demote" the previous > master into being the slave? Otherwise, after the failover the GTM becomes > a single point of failure. > > > 3. You need to configure GTM proxy to reconnect to the new master, or > you need to stop coordinator and datanode, configure their postgresql.conf > files to target to the new master, and restart it. I recommend to > configure GTM proxy. Pgxc_ctl provide reconnect command to change GTM > proxy target to the new master. > > > I thought the failover command took care of reconnecting the GTM proxies, > but okay. > > > 4. If you'd like to make coordinator/datanode high-available, you need > their slaves too. > > > Yes, sorry I wasn't clear. 
Just like the diagram on page 36 of > http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf, I have two > servers: > > - server 1: coordinator 1 (master), datanode 1 (master), coordinator 2 > (slave) and datanode 2 (slave), and GTM proxy > - server 2: coordinator 2 (master), datanode 2 (master), coordinator > 1 (slave) and datanode 1 (slave), and GTM proxy > > GTM and GTM slave are running elsewhere. > > > So after I issue "failover coordinator cn1", server 1 still has the > coordinator 1 data directory that could be reconfigured as a slave for the > new master for cn1 that is now running on server 2. My question is: is > there a simple way to accomplish that reconfiguration? I.e. to change the > role of server 1 so that it is running a slave for cn1, and update server 2 > so that the new cn1 master replicates to the new slave? > > Thanks again, > Sean > > > Regards; > --- > Koichi Suzuki > > > > 2013/8/19 Sean Hogan <se...@co...> > >> Hi, >> >> I am investigating Postgres-XC with the coordinators and data nodes on >> two servers. My goal is primarily high availability rather than raw >> performance, so I was planning to use one coordinator and one datanode on >> each server, with the GTM master and slave elsewhere. Just like on page 36 >> of http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf. >> >> I've been trying to use pgxc_ctl (the C program, not the Bash script at >> https://github.com/koichi-szk/PGXC-Tools), but I'm having some >> conceptual problems with the failover process. I understand that >> coordinator and datanode slaves are promoted and become identical to the >> former masters (true?), but I'm unclear what happens to the former >> masters. "monitor all" no longer shows them. >> >> I suppose my question is: what do I need to do, to make the former >> masters into new slaves? 
To me it would make sense to be able to failover >> node1 once and then again, and be left with more or less the same >> configuration as in the beginning. It would be okay if there is some magic >> command I can run to reconfigure a former master as the new slave. >> >> Are these reasonable expectations, or am I asking too much of the >> software? >> >> I also have a potential issue with what happens when one of my servers >> fails completely. The master coordinators and datanodes are running with >> synchronous_commit = on and one value in synchronous_standby_names. >> Doesn't that mean that transactions on the good server's master coordinator >> and master datanode will block trying to replicate to their slaves on the >> dead server? >> >> Sorry if these are RTFM items, I couldn't find clear answers in any of >> the documentation. >> >> Thanks, >> Sean >> >> >> ------------------------------------------------------------------------------ >> Get 100% visibility into Java/.NET code with AppDynamics Lite! >> It's a free troubleshooting tool designed for production. >> Get down to code-level detail for bottlenecks, with <2% overhead. >> Download for free and get started troubleshooting in minutes. >> >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > |
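As a concrete sketch of the commands mentioned above, inside a pgxc_ctl session on a cluster that was configured through pgxc_ctl. The node name, host, port, and directory are illustrative assumptions; check the linked pgxc-ctl documentation for the exact argument order:

```shell
# Sketch only: register a new GTM slave, start it, and confirm its state.
if command -v pgxc_ctl >/dev/null; then
  pgxc_ctl <<'EOF'
add gtm slave gtmSlave serverB 20001 /home/pgxc/gtm_slave
start gtm slave
monitor all
EOF
fi
```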
From: Koichi S. <koi...@gm...> - 2013-08-20 00:56:08
|
I have several questions on your configuration. 1. Did you configure your XC cluster with pgxc_ctl? If not, I'm afraid pgxc_ctl may not work correctly unless the pgxc_ctl configuration file exactly matches your manual configuration. 2. I assume that you're configuring GTM for high availability, because a GTM slave does not help if a coordinator or datanode crashes. It helps only when the GTM master fails. When you find that the GTM master has failed, you can issue gtm_ctl against the GTM slave to promote it to the new master. (The pgxc_ctl failover command will take care of this process.) 3. You need to configure GTM proxy to reconnect to the new master, or you need to stop the coordinators and datanodes, configure their postgresql.conf files to target the new master, and restart them. I recommend configuring GTM proxy. Pgxc_ctl provides a reconnect command to change the GTM proxy target to the new master. 4. If you'd like to make coordinators/datanodes high-available, you need their slaves too. Pgxc_ctl comes with 1.1 (and its beta). The document will be available from the Postgres-XC project page. Please do not hesitate to post further questions/issues. Regards; --- Koichi Suzuki 2013/8/19 Sean Hogan <se...@co...> > Hi, > > I am investigating Postgres-XC with the coordinators and data nodes on two > servers. My goal is primarily high availability rather than raw > performance, so I was planning to use one coordinator and one datanode on > each server, with the GTM master and slave elsewhere. Just like on page 36 > of http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf. > > I've been trying to use pgxc_ctl (the C program, not the Bash script at > https://github.com/koichi-szk/PGXC-Tools), but I'm having some conceptual > problems with the failover process. I understand that coordinator and > datanode slaves are promoted and become identical to the former masters > (true?), but I'm unclear what happens to the former masters. "monitor all" > no longer shows them. 
> > I suppose my question is: what do I need to do, to make the former masters > into new slaves? To me it would make sense to be able to failover node1 > once and then again, and be left with more or less the same configuration > as in the beginning. It would be okay if there is some magic command I can > run to reconfigure a former master as the new slave. > > Are these reasonable expectations, or am I asking too much of the software? > > I also have a potential issue with what happens when one of my servers > fails completely. The master coordinators and datanodes are running with > synchronous_commit = on and one value in synchronous_standby_names. > Doesn't that mean that transactions on the good server's master coordinator > and master datanode will block trying to replicate to their slaves on the > dead server? > > Sorry if these are RTFM items, I couldn't find clear answers in any of the > documentation. > > Thanks, > Sean > > > ------------------------------------------------------------------------------ > Get 100% visibility into Java/.NET code with AppDynamics Lite! > It's a free troubleshooting tool designed for production. > Get down to code-level detail for bottlenecks, with <2% overhead. > Download for free and get started troubleshooting in minutes. > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
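The failover flow described in points 2 and 3 above can be sketched as pgxc_ctl commands. The exact spellings below are assumptions based on the command names used in this thread ("failover", "reconnect", "monitor all"), so verify them against the pgxc_ctl documentation before relying on them:

```shell
# Sketch only: assumes a running cluster configured through pgxc_ctl.
if command -v pgxc_ctl >/dev/null; then
  pgxc_ctl <<'EOF'
failover gtm
reconnect gtm_proxy all
monitor all
EOF
fi
```

"failover gtm" promotes the GTM slave, "reconnect gtm_proxy all" points the GTM proxies at the new master, and "monitor all" confirms the new roles.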
From: Nikhil S. <ni...@st...> - 2013-08-16 05:28:37
|
What I think is happening here is that the COPY process is trying to send data to the datanodes, but they are not consuming it at the same rate. Hence the send_some call sends only a portion of the data and the rest of the data has to remain buffered. This builds up eventually to more than half a GB, and the subsequent repalloc call to double the buffer fails because 1GB is the max limit. This is evident from the below COPY error message as well. Maybe what we should do is: if the data outbuffer size crosses a pre-defined limit like 16MB or so, use pgxc_node_flush to ensure that all of it has reached the datanode. After some point even the socket will not be able to handle large direct send_some calls and we will increase the buffer size unnecessarily. Regards, Nikhils On Fri, Aug 16, 2013 at 8:37 AM, Nikhil Sontakke <ni...@st...> wrote: > Thanks Eiji san, > > > >> >> At the request of Hemmi-san, >> I applied the patch to Postgres-XC v1.1 and ran DBT-1. >> Unfortunately, the results did not change. >> >> [results] >> $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter '>';" >> ERROR: invalid memory alloc request size 1073741824 >> CONTEXT: COPY address, line 30305382: >> "30305382>TXnEREEwqOLV11zEFGDXV4AXEDEeqLn9JDDwGxHw>TCyFEgXDNVOCihBdmdExEBAAvnXdvEEBDEE>TGEgxF5RdiuAEz..." >> >> > Yeah, as I slept over it I realized that what I am trying to solve is a > "COPY TO" issue and what you guys must be facing is a "COPY FROM" error. I > will try to look at this when I can make more progress on the "COPY > TO" issue. > > Regards, > Nikhils > >> Regards, >> uwaizumi >> >> >> (2013/08/15 18:06), Hitoshi HEMMI wrote: >> > Hi Nikhil, >> > >> > Thank you for the patch. >> > I will be off from tomorrow, but I ask our team to test it >> > and post the result. 
>> > >> > Best, >> > -hemmi >> > >> > >> > Nikhil Sontakke さんは書きました: >> >> Hi Hitoshi-san, >> >> >> >> PFA, patch which should help to use less memory while running COPY >> >> commands. This is against REL 1.1. Please do let me know if this helps >> >> in your case. >> >> >> >> Regards, >> >> Nikhils >> >> >> >> >> >> On Wed, Aug 14, 2013 at 3:38 PM, Hitoshi HEMMI >> >> <hem...@la... <mailto:hem...@la...>> >> wrote: >> >> >> >> Thank you, Nikhil. >> >> We will wait until a patch is available. >> >> >> >> Best, >> >> -hemmi >> >> >> >> >> >> Nikhil Sontakke さんは書きました: >> >> > Hi Hitoshi-san, >> >> > >> >> > I am working on a similar issue coincidentally. Basically what's >> >> > happening is that the COPY command is creating tuples which >> will get >> >> > freed up only at the end when the command will finish, so the >> RAM >> >> > memory ends up being consumed and can lead to the error that you >> >> have >> >> > provided. >> >> > >> >> > See the discussion around RemoteQueryNext on another thread. >> >> > >> >> > Regards, >> >> > Nikhils >> >> > >> >> > On Wed, Aug 14, 2013 at 1:56 PM, Hitoshi HEMMI >> >> > <hem...@la... >> >> <mailto:hem...@la...> >> >> <mailto:hem...@la... >> >> <mailto:hem...@la...>>> wrote: >> >> > >> >> > Hi, >> >> > >> >> > We encountered "invalid memory alloc request size ..." error of >> >> > Postgres-XC, when preparing DBT-1 benchmark, and need >> >> > someones help. >> >> > (First of all, what does the error message mean?) >> >> > >> >> > ------------------ >> >> > Version of XC: v1.1.beta >> >> > >> >> > [process to cause an error] >> >> > - Data generation >> >> > cd <DBT-1 dir>/datagen/ >> >> > ./datagen -i 10000 -u 15000 -p <data dir> >> >> > >> >> > - Prepare XC cluster >> >> > We are using following HA configuration. >> >> > We are not sure this configuration is essential for the error, >> >> > but we have never encountered the error without XC slave nodes. 
>> >> > >> >> > ServerA: GTM (master) >> >> > ServerB: GTM (slave) >> >> > ServerC: GTM-Proxy, Coordinator1 (master), Datanode1 (master), >> >> > Coordinator2 (slave), Datanode2 (slave) >> >> > ServerD: GTM-Proxy, Coordinator2 (master), Datanode2 (master), >> >> > Coordinator1 (slave), Datanode1 (slave) >> >> > >> >> > - Data loading >> >> > $ createdb -E SQL_ASCII -T template0 -p 5422 DBT1 >> >> > $ psql -a -p 5422 -f <DBT-1 dir>/scripts/pgsql/create_tables.sql >> >> > $ psql -p 5422 -c "copy address from '/tmp/address.data' >> delimiter >> >> > '>';" >> >> > >> >> > [the error] >> >> > ERROR: invalid memory alloc request size 1073741824 >> >> > CONTEXT: COPY address, line 33191219: >> >> > "33191219>EDpcFnEEAe954E5Tg6XbkiBVGLAcuHBhFAEElEE7>zxPzRj >> >> > >> >> > [Other information] >> >> > It seems small sized data can be copied: >> >> > >> >> > - DBT-1 tables with size (item 10000 ,user 15000) >> >> > address 4.7G ERROR >> >> > author 894K OK >> >> > cc_xacts 4.4G ERROR >> >> > customer 20G ERROR >> >> > item 5.0M OK >> >> > order_line 11G ERROR >> >> > orders 4.1G ERROR >> >> > stock 78K OK >> >> > -------------------- >> >> > >> >> > Best regards, >> >> > >> >> > -hemmi >> >> > >> >> > -- >> >> > Hitoshi HEMMI >> >> > NTT Open Source Software Center >> >> > hem...@la... <mailto:hem...@la... >> > >> >> <mailto:hem...@la... >> >> <mailto:hem...@la...>> >> >> > Tel:(03)5860-5115 >> >> > Fax:(03)5463-5490 >> >> > >> >> > >> >> > >> >> >> ------------------------------------------------------------------------------ >> >> > Get 100% visibility into Java/.NET code with AppDynamics Lite! >> >> > It's a free troubleshooting tool designed for production. >> >> > Get down to code-level detail for bottlenecks, with <2% >> overhead. >> >> > Download for free and get started troubleshooting in minutes. 
>> >> > >> >> >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> >> < >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> > >> >> > >> >> < >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> >> < >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> >> >> >> > _______________________________________________ >> >> > Postgres-xc-general mailing list >> >> > Pos...@li... >> >> <mailto:Pos...@li...> >> >> > <mailto:Pos...@li... >> >> <mailto:Pos...@li...>> >> >> > >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > >> >> > >> >> > >> >> > >> >> > -- >> >> > StormDB - http://www.stormdb.com >> >> > The Database Cloud >> >> >> >> >> >> -- >> >> Hitoshi HEMMI >> >> NTT Open Source Software Center >> >> hem...@la... <mailto:hem...@la...> >> >> Tel:(03)5860-5115 >> >> Fax:(03)5463-5490 >> >> >> >> >> >> >> ------------------------------------------------------------------------------ >> >> Get 100% visibility into Java/.NET code with AppDynamics Lite! >> >> It's a free troubleshooting tool designed for production. >> >> Get down to code-level detail for bottlenecks, with <2% overhead. >> >> Download for free and get started troubleshooting in minutes. >> >> >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> >> < >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> > >> >> _______________________________________________ >> >> Postgres-xc-general mailing list >> >> Pos...@li... 
>> >> <mailto:Pos...@li...> >> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> >> >> >> >> >> >> >> >> -- >> >> StormDB - http://www.stormdb.com >> >> The Database Cloud >> > >> > >> >> >> -- >> ----------------------------------- >> 上泉 英二 >> uwa...@as... >> ----------------------------------- >> >> >> >> ------------------------------------------------------------------------------ >> Get 100% visibility into Java/.NET code with AppDynamics Lite! >> It's a free troubleshooting tool designed for production. >> Get down to code-level detail for bottlenecks, with <2% overhead. >> Download for free and get started troubleshooting in minutes. >> >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > -- StormDB - http://www.stormdb.com The Database Cloud |
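The arithmetic behind the explanation above can be checked directly: palloc rejects requests of 1 GB (2^30 bytes) and larger, so a send buffer that grows by doubling fails as soon as it has to hold a bit more than half a GB of unsent COPY data. A minimal sketch, where the starting size and the amount of pending data are illustrative:

```shell
pending=$((600 * 1024 * 1024))   # ~600 MB of unsent COPY data (illustrative)
size=8192                        # initial buffer size (illustrative)

# Keep doubling, as repalloc-based buffer growth does, until the pending
# data fits. The first doubling past 512 MB requests exactly 1073741824
# bytes -- the size reported in the error message above.
while [ "$size" -lt "$pending" ]; do
    size=$((size * 2))
done
echo "requested buffer size: $size bytes"
# -> requested buffer size: 1073741824 bytes
```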
From: Nikhil S. <ni...@st...> - 2013-08-16 03:15:32
|
Thanks Eiji san, > > At the request of Hemmi-san, > I applied the patch to Postgres-XC v1.1 and ran DBT-1. > Unfortunately, the results did not change. > > [results] > $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter '>';" > ERROR: invalid memory alloc request size 1073741824 > CONTEXT: COPY address, line 30305382: > "30305382>TXnEREEwqOLV11zEFGDXV4AXEDEeqLn9JDDwGxHw>TCyFEgXDNVOCihBdmdExEBAAvnXdvEEBDEE>TGEgxF5RdiuAEz..." > > Yeah, as I slept over it I realized that what I am trying to solve is a "COPY TO" issue and what you guys must be facing is a "COPY FROM" error. I will try to look at this when I can make more progress on the "COPY TO" issue. Regards, Nikhils > Regards, > uwaizumi > > > (2013/08/15 18:06), Hitoshi HEMMI wrote: > > Hi Nikhil, > > > > Thank you for the patch. > > I will be off from tomorrow, but I ask our team to test it > > and post the result. 
> >> > > >> > Regards, > >> > Nikhils > >> > > >> > On Wed, Aug 14, 2013 at 1:56 PM, Hitoshi HEMMI > >> > <hem...@la... > >> <mailto:hem...@la...> > >> <mailto:hem...@la... > >> <mailto:hem...@la...>>> wrote: > >> > > >> > Hi, > >> > > >> > We encountered "invalid memory alloc request size ..." error of > >> > Postgres-XC, when preparing DBT-1 benchmark, and need > >> > someones help. > >> > (First of all, what does the error message mean?) > >> > > >> > ------------------ > >> > Version of XC: v1.1.beta > >> > > >> > [process to cause an error] > >> > - Data generation > >> > cd <DBT-1 dir>/datagen/ > >> > ./datagen -i 10000 -u 15000 -p <data dir> > >> > > >> > - Prepare XC cluster > >> > We are using following HA configuration. > >> > We are not sure this configuration is essential for the error, > >> > but we have never encountered the error without XC slave nodes. > >> > > >> > ServerA: GTM (master) > >> > ServerB: GTM (slave) > >> > ServerC: GTM-Proxy, Coordinator1 (master), Datanode1 (master), > >> > Coordinator2 (slave), Datanode2 (slave) > >> > ServerD: GTM-Proxy, Coordinator2 (master), Datanode2 (master), > >> > Coordinator1 (slave), Datanode1 (slave) > >> > > >> > - Data loading > >> > $ createdb -E SQL_ASCII -T template0 -p 5422 DBT1 > >> > $ psql -a -p 5422 -f <DBT-1 dir>/scripts/pgsql/create_tables.sql > >> > $ psql -p 5422 -c "copy address from '/tmp/address.data' > delimiter > >> > '>';" > >> > > >> > [the error] > >> > ERROR: invalid memory alloc request size 1073741824 > >> > CONTEXT: COPY address, line 33191219: > >> > "33191219>EDpcFnEEAe954E5Tg6XbkiBVGLAcuHBhFAEElEE7>zxPzRj > >> > > >> > [Other information] > >> > It seems small sized data can be copied: > >> > > >> > - DBT-1 tables with size (item 10000 ,user 15000) > >> > address 4.7G ERROR > >> > author 894K OK > >> > cc_xacts 4.4G ERROR > >> > customer 20G ERROR > >> > item 5.0M OK > >> > order_line 11G ERROR > >> > orders 4.1G ERROR > >> > stock 78K OK > >> > -------------------- > >> > > 
>> > Best regards, > >> > > >> > -hemmi > >> > > >> > -- > >> > Hitoshi HEMMI > >> > NTT Open Source Software Center > >> > hem...@la... <mailto:hem...@la...> > >> <mailto:hem...@la... > >> <mailto:hem...@la...>> > >> > Tel:(03)5860-5115 > >> > Fax:(03)5463-5490 > >> > > >> > > >> > > >> > ------------------------------------------------------------------------------ > >> > Get 100% visibility into Java/.NET code with AppDynamics Lite! > >> > It's a free troubleshooting tool designed for production. > >> > Get down to code-level detail for bottlenecks, with <2% overhead. > >> > Download for free and get started troubleshooting in minutes. > >> > > >> > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > >> < > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > > > >> > > >> < > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > >> < > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > >> > >> > _______________________________________________ > >> > Postgres-xc-general mailing list > >> > Pos...@li... > >> <mailto:Pos...@li...> > >> > <mailto:Pos...@li... > >> <mailto:Pos...@li...>> > >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > >> > > >> > > >> > > >> > > >> > -- > >> > StormDB - http://www.stormdb.com > >> > The Database Cloud > >> > >> > >> -- > >> Hitoshi HEMMI > >> NTT Open Source Software Center > >> hem...@la... <mailto:hem...@la...> > >> Tel:(03)5860-5115 > >> Fax:(03)5463-5490 > >> > >> > >> > ------------------------------------------------------------------------------ > >> Get 100% visibility into Java/.NET code with AppDynamics Lite! > >> It's a free troubleshooting tool designed for production. > >> Get down to code-level detail for bottlenecks, with <2% overhead. 
> >> Download for free and get started troubleshooting in minutes. > >> > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > >> < > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > > > >> _______________________________________________ > >> Postgres-xc-general mailing list > >> Pos...@li... > >> <mailto:Pos...@li...> > >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > >> > >> > >> > >> > >> -- > >> StormDB - http://www.stormdb.com > >> The Database Cloud > > > > > > > -- > ----------------------------------- > 上泉 英二 > uwa...@as... > ----------------------------------- > > > > ------------------------------------------------------------------------------ > Get 100% visibility into Java/.NET code with AppDynamics Lite! > It's a free troubleshooting tool designed for production. > Get down to code-level detail for bottlenecks, with <2% overhead. > Download for free and get started troubleshooting in minutes. > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- StormDB - http://www.stormdb.com The Database Cloud |
From: Eiji U. <uwa...@as...> - 2013-08-16 02:55:12
|
Hi Nikhil, At the request of Hemmi-san, I applied the patch to Postgres-XC v1.1 and ran DBT-1. Unfortunately, the results did not change. [results] $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter '>';" ERROR: invalid memory alloc request size 1073741824 CONTEXT: COPY address, line 30305382: "30305382>TXnEREEwqOLV11zEFGDXV4AXEDEeqLn9JDDwGxHw>TCyFEgXDNVOCihBdmdExEBAAvnXdvEEBDEE>TGEgxF5RdiuAEz..." Regards, uwaizumi (2013/08/15 18:06), Hitoshi HEMMI wrote: > Hi Nikhil, > > Thank you for the patch. > I will be off from tomorrow, but I ask our team to test it > and post the result. > > Best, > -hemmi > > > Nikhil Sontakke wrote: >> Hi Hitoshi-san, >> >> PFA, patch which should help to use less memory while running COPY >> commands. This is against REL 1.1. Please do let me know if this helps >> in your case. >> >> Regards, >> Nikhils >> >> >> On Wed, Aug 14, 2013 at 3:38 PM, Hitoshi HEMMI >> <hem...@la... <mailto:hem...@la...>> wrote: >> >> Thank you, Nikhil. >> We will wait until a patch is available. >> >> Best, >> -hemmi >> >> >> Nikhil Sontakke wrote: >> > Hi Hitoshi-san, >> > >> > I am working on a similar issue coincidentally. Basically what's >> > happening is that the COPY command is creating tuples which will get >> > freed up only at the end when the command will finish, so the RAM >> > memory ends up being consumed and can lead to the error that you >> have >> > provided. >> > >> > See the discussion around RemoteQueryNext on another thread. >> > >> > Regards, >> > Nikhils >> > >> > On Wed, Aug 14, 2013 at 1:56 PM, Hitoshi HEMMI >> > <hem...@la... >> <mailto:hem...@la...> >> <mailto:hem...@la... >> <mailto:hem...@la...>>> wrote: >> > >> > Hi, >> > >> > We encountered the "invalid memory alloc request size ..." error of >> > Postgres-XC when preparing the DBT-1 benchmark, and need >> > someone's help. >> > (First of all, what does the error message mean?) 
>> > >> > ------------------ >> > Version of XC: v1.1.beta >> > >> > [process to cause an error] >> > - Data generation >> > cd <DBT-1 dir>/datagen/ >> > ./datagen -i 10000 -u 15000 -p <data dir> >> > >> > - Prepare XC cluster >> > We are using following HA configuration. >> > We are not sure this configuration is essential for the error, >> > but we have never encountered the error without XC slave nodes. >> > >> > ServerA: GTM (master) >> > ServerB: GTM (slave) >> > ServerC: GTM-Proxy, Coordinator1 (master), Datanode1 (master), >> > Coordinator2 (slave), Datanode2 (slave) >> > ServerD: GTM-Proxy, Coordinator2 (master), Datanode2 (master), >> > Coordinator1 (slave), Datanode1 (slave) >> > >> > - Data loading >> > $ createdb -E SQL_ASCII -T template0 -p 5422 DBT1 >> > $ psql -a -p 5422 -f <DBT-1 dir>/scripts/pgsql/create_tables.sql >> > $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter >> > '>';" >> > >> > [the error] >> > ERROR: invalid memory alloc request size 1073741824 >> > CONTEXT: COPY address, line 33191219: >> > "33191219>EDpcFnEEAe954E5Tg6XbkiBVGLAcuHBhFAEElEE7>zxPzRj >> > >> > [Other information] >> > It seems small sized data can be copied: >> > >> > - DBT-1 tables with size (item 10000 ,user 15000) >> > address 4.7G ERROR >> > author 894K OK >> > cc_xacts 4.4G ERROR >> > customer 20G ERROR >> > item 5.0M OK >> > order_line 11G ERROR >> > orders 4.1G ERROR >> > stock 78K OK >> > -------------------- >> > >> > Best regards, >> > >> > -hemmi >> > >> > -- >> > Hitoshi HEMMI >> > NTT Open Source Software Center >> > hem...@la... <mailto:hem...@la...> >> <mailto:hem...@la... >> <mailto:hem...@la...>> >> > Tel:(03)5860-5115 >> > Fax:(03)5463-5490 >> > >> > >> > >> ------------------------------------------------------------------------------ >> > Get 100% visibility into Java/.NET code with AppDynamics Lite! >> > It's a free troubleshooting tool designed for production. >> > Get down to code-level detail for bottlenecks, with <2% overhead. 
>> > Download for free and get started troubleshooting in minutes. >> > >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> <http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk> >> > >> <http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> <http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk>> >> > _______________________________________________ >> > Postgres-xc-general mailing list >> > Pos...@li... >> <mailto:Pos...@li...> >> > <mailto:Pos...@li... >> <mailto:Pos...@li...>> >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > >> > >> > >> > >> > -- >> > StormDB - http://www.stormdb.com >> > The Database Cloud >> >> >> -- >> Hitoshi HEMMI >> NTT Open Source Software Center >> hem...@la... <mailto:hem...@la...> >> Tel:(03)5860-5115 >> Fax:(03)5463-5490 >> >> >> ------------------------------------------------------------------------------ >> Get 100% visibility into Java/.NET code with AppDynamics Lite! >> It's a free troubleshooting tool designed for production. >> Get down to code-level detail for bottlenecks, with <2% overhead. >> Download for free and get started troubleshooting in minutes. >> http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk >> <http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk> >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> <mailto:Pos...@li...> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud > > -- ----------------------------------- 上泉 英二 uwa...@as... ----------------------------------- |
From: Hitoshi H. <hem...@la...> - 2013-08-15 09:07:05
|
Hi Nikhil,

Thank you for the patch. I will be off from tomorrow, but I have asked
our team to test it and post the result.

Best,
-hemmi

Nikhil Sontakke wrote:
> Hi Hitoshi-san,
>
> PFA, a patch which should help to use less memory while running COPY
> commands. This is against REL 1.1. Please do let me know if this helps
> in your case.
>
> Regards,
> Nikhils

--
Hitoshi HEMMI
NTT Open Source Software Center
hem...@la...
Tel:(03)5860-5115
Fax:(03)5463-5490 |
From: Hitoshi H. <hem...@la...> - 2013-08-14 10:08:58
|
Thank you, Nikhil. We will wait until a patch is available.

Best,
-hemmi

Nikhil Sontakke wrote:
> Hi Hitoshi-san,
>
> I am working on a similar issue, coincidentally. Basically, the COPY
> command creates tuples that are freed only when the command finishes,
> so RAM is consumed throughout the load and can lead to the error you
> reported.
>
> See the discussion around RemoteQueryNext on another thread.

--
Hitoshi HEMMI
NTT Open Source Software Center
hem...@la... |
From: Nikhil S. <ni...@st...> - 2013-08-14 08:55:21
|
Hi Hitoshi-san,

I am working on a similar issue, coincidentally. Basically, what is
happening is that the COPY command creates tuples that are freed only at
the end, when the command finishes, so RAM is steadily consumed and can
lead to the error you reported.

See the discussion around RemoteQueryNext on another thread.

Regards,
Nikhils

On Wed, Aug 14, 2013 at 1:56 PM, Hitoshi HEMMI <hem...@la...> wrote:
> We encountered "invalid memory alloc request size ..." error of
> Postgres-XC, when preparing DBT-1 benchmark, and need someone's help.
> [...]

--
StormDB - http://www.stormdb.com
The Database Cloud |
From: Hitoshi H. <hem...@la...> - 2013-08-14 08:26:35
|
Hi,

We encountered an "invalid memory alloc request size ..." error in
Postgres-XC while preparing the DBT-1 benchmark, and need someone's
help. (First of all, what does the error message mean?)

------------------
Version of XC: v1.1.beta

[process that causes the error]
- Data generation
  cd <DBT-1 dir>/datagen/
  ./datagen -i 10000 -u 15000 -p <data dir>

- Prepare XC cluster
  We are using the following HA configuration. We are not sure this
  configuration is essential for the error, but we have never
  encountered the error without XC slave nodes.

  ServerA: GTM (master)
  ServerB: GTM (slave)
  ServerC: GTM-Proxy, Coordinator1 (master), Datanode1 (master), Coordinator2 (slave), Datanode2 (slave)
  ServerD: GTM-Proxy, Coordinator2 (master), Datanode2 (master), Coordinator1 (slave), Datanode1 (slave)

- Data loading
  $ createdb -E SQL_ASCII -T template0 -p 5422 DBT1
  $ psql -a -p 5422 -f <DBT-1 dir>/scripts/pgsql/create_tables.sql
  $ psql -p 5422 -c "copy address from '/tmp/address.data' delimiter '>';"

[the error]
ERROR: invalid memory alloc request size 1073741824
CONTEXT: COPY address, line 33191219: "33191219>EDpcFnEEAe954E5Tg6XbkiBVGLAcuHBhFAEElEE7>zxPzRj

[Other information]
It seems small-sized data can be copied:

- DBT-1 tables with size (item 10000, user 15000)
  table       size   result
  ----------  -----  ------
  address     4.7G   ERROR
  author      894K   OK
  cc_xacts    4.4G   ERROR
  customer    20G    ERROR
  item        5.0M   OK
  order_line  11G    ERROR
  orders      4.1G   ERROR
  stock       78K    OK
--------------------

Best regards,

-hemmi

--
Hitoshi HEMMI
NTT Open Source Software Center
hem...@la...
Tel:(03)5860-5115
Fax:(03)5463-5490 |
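The request size in this error, 1073741824 bytes, is exactly 1 GiB: PostgreSQL rejects any single palloc request at or above its 1 GB MaxAllocSize cap, which suggests the coordinator accumulated about a gigabyte of COPY state in one memory context. Until a fix is in place, one way to dodge this class of failure is to load the file in pieces so no single COPY grows that large. A minimal sketch in shell, using the paths and port from this report; `load_in_chunks` and the `PSQL` override are hypothetical helpers, not part of DBT-1 or Postgres-XC:

```shell
# Hypothetical workaround: split a large load file and COPY it chunk by
# chunk, so no single COPY buffers ~1 GiB on the coordinator.
# PSQL is overridable (PSQL=echo gives a dry run); the default chunk
# size of one million lines is a guess to tune per table.
load_in_chunks() {
    file=$1; table=$2; port=$3; lines=${4:-1000000}
    split -l "$lines" "$file" "${file}.part." || return 1
    for part in "${file}.part."*; do
        ${PSQL:-psql} -p "$port" -c "\\copy $table from '$part' delimiter '>'" || return 1
    done
}

# Example (file and port from the report above):
# load_in_chunks /tmp/address.data address 5422
```

Chunk size is a tuning knob: smaller chunks trade load speed for a lower peak allocation on the coordinator.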
From: Michael P. <mic...@gm...> - 2013-08-12 23:49:25
|
On Mon, Aug 12, 2013 at 11:59 PM, <pos...@gz...> wrote:
> All 8 datanodes and the coordinator responded, but not the gtm.
>
> psql -p 6666 -c 'select 1'
> psql: could not connect to server: No such file or directory

GTM is not a Postgres node, so contacting it with psql or any Postgres
client will not work.

> If I tail gtm.log I get lots of these (one every couple of seconds):
> [...]

This GTM looks to be working correctly. You are seeing automatic
activity, probably from autovacuum processes running on the Coordinators
and Datanodes.

> Is the GTM supposed to be listed there?
> [...]
> Anything else I can check?

Yes. Connect to the Coordinator and run EXECUTE DIRECT against each
Datanode to check that your Coordinator can connect to them. More
details here:
http://postgres-xc.sourceforge.net/docs/1_1/sql-executedirect.html

Try, for example:

EXECUTE DIRECT ON (datanode1) 'SELECT 1';

If your Coordinator is able to contact every Datanode, then your cluster
should be fine in terms of connections.

In the query that failed before, you used "count" as an alias:

select count(*) as count from table1;

Perhaps there is a problem with that query? I don't have an environment
set up, so I cannot say directly, sorry.

Regards,
--
Michael |
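The per-node probe Michael describes is easy to script. A sketch that loops the EXECUTE DIRECT check over all eight datanodes from this thread; `check_datanodes` and the `PSQL` override are hypothetical helpers, and note that EXECUTE DIRECT takes the query as a string literal in Postgres-XC:

```shell
# Hypothetical helper: probe every datanode through the coordinator.
# A node the coordinator cannot reach makes the corresponding psql
# call fail. PSQL is overridable (PSQL=echo gives a dry run).
check_datanodes() {
    port=${1:-5477}
    for i in 1 2 3 4 5 6 7 8; do
        ${PSQL:-psql} -p "$port" -c "EXECUTE DIRECT ON (datanode$i) 'SELECT 1'" postgres \
            || { echo "datanode$i unreachable" >&2; return 1; }
    done
}

# Example, against the coordinator port used in this thread:
# check_datanodes 5477
```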
From: <pos...@gz...> - 2013-08-12 14:59:22
|
All 8 datanodes and the coordinator responded, but not the gtm.

psql -p 6666 -c 'select 1'
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.6666"?

If I do a ps aux | grep gtm I get this:
/usr/local/pgsql/bin/gtm -D /var/lib/pgsql/9.2/postgres-xc/data_gtm

So it is running. If I tail gtm.log I get lots of these (one every
couple of seconds):

1:139982497588992:2013-08-12 10:54:49.677 EDT -LOG: Any GTM standby node not found in registered node(s).
LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:378
1:139981775435520:2013-08-12 10:54:49.677 EDT -LOG: Assigning new transaction ID = 79640
LOCATION: GTM_GetGlobalTransactionIdMulti, gtm_txn.c:581
1:139981775435520:2013-08-12 10:54:49.677 EDT -LOG: Sending transaction id 79640
LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:1172
1:139981775435520:2013-08-12 10:54:49.677 EDT -LOG: Received transaction ID 79640 for snapshot obtention
LOCATION: ProcessGetSnapshotCommand, gtm_snap.c:307
1:139981775435520:2013-08-12 10:54:49.679 EDT -LOG: Committing transaction id 79640
LOCATION: ProcessCommitTransactionCommand, gtm_txn.c:1592
1:139981775435520:2013-08-12 10:54:49.679 EDT -LOG: Cleaning up thread state
LOCATION: GTM_ThreadCleanup, gtm_thread.c:265

If I do:
select * from pgxc_node;

I get:
 node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred |   node_id
-----------+-----------+-----------+-----------+----------------+------------------+-------------
 datanode1 | D         |     15432 | localhost | f              | f                |   888802358
 datanode2 | D         |     15433 | localhost | f              | f                |  -905831925
 datanode3 | D         |     15434 | localhost | f              | f                | -1894792127
 datanode4 | D         |     15435 | localhost | f              | f                | -1307323892
 datanode5 | D         |     15436 | localhost | f              | f                |  1797586929
 datanode6 | D         |     15437 | localhost | f              | f                |   587455710
 datanode7 | D         |     15438 | localhost | f              | f                | -1685037427
 datanode8 | D         |     15439 | localhost | f              | f                |  -993847320
 coord1    | C         |      5477 | localhost | f              | f                |  1885696643
(9 rows)

Is the GTM supposed to be listed there?

Here are the values defined in gtm.conf (everything else is commented
out):
nodename = 'one'
port = 6666

Anything else I can check?

Thanks,
Brian

On 8/8/13 8:53 PM, Koichi Suzuki wrote:
> Could you check the log of the coordinator you connected? This may
> tell something about the issue.
> [...] |
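Since GTM speaks its own protocol rather than the Postgres wire protocol, its status has to be queried with its own control utility instead of psql. A sketch assuming the gtm_ctl binary installed alongside gtm and the data directory used earlier in this thread; the `gtm_status` wrapper and the `GTM_CTL` override are hypothetical:

```shell
# Hypothetical wrapper: ask gtm_ctl whether the GTM in a data directory
# is running; -Z selects the node type, as with initgtm.
# GTM_CTL is overridable (GTM_CTL=echo gives a dry run).
gtm_status() {
    "${GTM_CTL:-/usr/local/pgsql/bin/gtm_ctl}" status -Z gtm \
        -D "${1:-/var/lib/pgsql/9.2/postgres-xc/data_gtm}"
}

# Example:
# gtm_status /var/lib/pgsql/9.2/postgres-xc/data_gtm
```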
From: Koichi S. <koi...@gm...> - 2013-08-09 02:54:02
|
Could you check the log of the coordinator you connected? This may tell something about the issue. Also, could you check if all the nodes (gtm, coordinators and datanodes) are working correctly? You can do it with psql like: psql -p port -h host -c 'select 1' where port and host is these of each node you're testing. If there're anything wrong, psql will report an error. Regards; --- Koichi Suzuki 2013/8/9 <pos...@gz...> > We have a server with 64 cores and 384GB of RAM. We'd like to take > advantage of that hardware to speed up some queries that take 8+ hours to > run, and Postgres-XC seems like a good fit. > > I've setup a cluster of 8 data nodes (I'll increase that to 48 for real > usage), 1 coordinator, and 1 GTM, all running on the same physical server. > I'm using 1.1 beta, on Postgres 9.2. > > Here are the relevant commands (near-identical repeated commands are > omitted): > /usr/local/pgsql/bin/initdb -D /var/lib/pgsql/9.2/postgres-xc/data_coord1 > --nodename coord1 > /usr/local/pgsql/bin/initdb -D > /var/lib/pgsql/9.2/postgres-xc/data_datanode1 --nodename datanode1 > ... > /usr/local/pgsql/bin/initdb -D > /var/lib/pgsql/9.2/postgres-xc/data_datanode8 --nodename datanode8 > /usr/local/pgsql/bin/initgtm -D /var/lib/pgsql/9.2/postgres-xc/data_gtm -Z > gtm > /usr/local/pgsql/bin/gtm -D /var/lib/pgsql/9.2/postgres-xc/data_gtm >> > /var/lib/pgsql/9.2/postgres-xc/logfile 2>&1 & > /usr/local/pgsql/bin/postgres -X -p 15432 -D > /var/lib/pgsql/9.2/postgres-xc/data_datanode1 >> > /var/lib/pgsql/9.2/postgres-xc/logfile 2>&1 & > ... > /usr/local/pgsql/bin/postgres -X -p 15439 -D > /var/lib/pgsql/9.2/postgres-xc/data_datanode8 >> > /var/lib/pgsql/9.2/postgres-xc/logfile 2>&1 & > /usr/local/pgsql/bin/postgres -C -p 5477 -D > /var/lib/pgsql/9.2/postgres-xc/data_coord1 >> > /var/lib/pgsql/9.2/postgres-xc/logfile 2>&1 & > /usr/local/pgsql/bin/psql -p 5477 -c "CREATE NODE datanode1 WITH (TYPE = > 'datanode', PORT = 15432)" postgres > ... 
> /usr/local/pgsql/bin/psql -p 5477 -c "CREATE NODE datanode8 WITH (TYPE = > 'datanode', PORT = 15439)" postgres > /usr/local/pgsql/bin/psql -p 5477 -c "SELECT pgxc_pool_reload()" postgres > /usr/local/pgsql/bin/createdb -p 5477 test > /usr/local/pgsql/bin/psql -p 5477 test > > I then created the following tables: > CREATE TABLE trails1 ( > id text, > a_lat double precision, > a_long double precision, > b_lat double precision, > b_long double precision, > trail_id character varying(20), > type character varying(4), > distance numeric(10,5) > ); > CREATE INDEX table1_a_lat ON table1 USING btree (a_lat); > CREATE INDEX table1_a_long ON table1 USING btree (a_long); > CREATE INDEX table1_b_lat ON table1 USING btree (b_lat); > CREATE INDEX table1_b_long ON table1 USING btree (b_long); > CREATE INDEX table1_type ON table1 USING btree (type); > CREATE INDEX table1_distance ON table1 USING btree (distance); > > CREATE TABLE trails2 ( > a_lat double precision, > a_long double precision, > b_lat double precision, > b_long double precision, > type character varying(5), > distance numeric(16,8) > ); > CREATE INDEX table2_a_lat ON table2 USING btree (a_lat); > CREATE INDEX table2_a_long ON table2 USING btree (a_long); > CREATE INDEX table2_b_lat ON table2 USING btree (b_lat); > CREATE INDEX table2_b_long ON table2 USING btree (b_long); > CREATE INDEX table2_type ON table2 USING btree (type); > CREATE INDEX table2_distance ON table2 USING btree (distance); > > I think I should have had something in the table definition about how to > partition the data. > > I populated them using copy, with 331,106 rows in table1, and 1,124,421 > rows in table2. > > Simple queries return the right values: > > select count(*) as count from table1; > 331106 > > select count(*) as count from table2; > 1124421 > > When I do a big query (the one that normally takes hours to complete) it > is only using one CPU core. I expected to see Postgres processes using > near 100% CPU on 8 cores. 
> > SELECT > count(*) as count > FROM > trails1 > WHERE > not exists ( > SELECT > 'x' > FROM > trails2 > WHERE > trails2.a_lat >= trails1.a_lat - 0.000833 AND > trails2.a_lat <= trails1.a_lat + 0.000833 AND > trails2.a_long >= trails1.a_long - 0.000833 AND > trails2.a_long <= trails1.a_long + 0.000833 AND > trails2.b_lat >= trails1.b_lat - 0.000833 AND > trails2.b_lat <= trails1.b_lat + 0.000833 AND > trails2.b_long >= trails1.b_long - 0.000833 AND > trails2.b_long <= trails1.b_long + 0.000833 AND > ( > trails2.type = trails1.type OR > trails2.type = 'S' > ) AND > trails2.distance >= trails1.distance - 1.0 AND > trails2.distance <= trails1.distance + 1.0 > ); > > I haven't let it run to completion. After seeing the lack of CPU usage I > went looking for other problems (and found them). > > If I reload the pool and then check it, I get true: > SELECT pgxc_pool_reload(); > t > > SELECT pgxc_pool_check(); > t > > But after I interact with the data at all, it fails: > select count(*) as count from table1; > 331106 > > SELECT pgxc_pool_check(); > f > > So I don't think it's working properly. The logs don't show anything. > > Any suggestions? > > Thanks! > > > > > > > > ------------------------------------------------------------------------------ > Get 100% visibility into Java/.NET code with AppDynamics Lite! > It's a free troubleshooting tool designed for production. > Get down to code-level detail for bottlenecks, with <2% overhead. > Download for free and get started troubleshooting in minutes. > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
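Koichi's per-node check above can be scripted. A sketch, assuming the ports used in this thread (coordinator on 5477, datanodes on 15432-15439) and nodes reachable on localhost; adjust for your own cluster:

```shell
#!/bin/sh
# Probe every Postgres-XC node with "SELECT 1" via psql, as suggested above.
# Ports are assumptions taken from this thread, not universal defaults.
check_nodes() {
    # $1 = "dry" prints the probe commands instead of running them
    # (useful for reviewing the list when no cluster is up).
    for port in 5477 15432 15433 15434 15435 15436 15437 15438 15439; do
        cmd="psql -p $port -h localhost -c 'SELECT 1' postgres"
        if [ "$1" = "dry" ]; then
            echo "$cmd"
        elif eval "$cmd" >/dev/null 2>&1; then
            echo "node on port $port: OK"
        else
            echo "node on port $port: NOT responding"
        fi
    done
}

check_nodes dry
```

A node that is down (or refusing connections) shows up immediately, which is usually faster than digging through the shared logfile all ten processes append to.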
From: <pos...@gz...> - 2013-08-08 21:01:06
|
We have a server with 64 cores and 384GB of RAM. We'd like to take advantage of that hardware to speed up some queries that take 8+ hours to run, and Postgres-XC seems like a good fit. I've set up a cluster of 8 data nodes (I'll increase that to 48 for real usage), 1 coordinator, and 1 GTM, all running on the same physical server. I'm using 1.1 beta, based on PostgreSQL 9.2. Here are the relevant commands (near-identical repeated commands are omitted): /usr/local/pgsql/bin/initdb -D /var/lib/pgsql/9.2/postgres-xc/data_coord1 --nodename coord1 /usr/local/pgsql/bin/initdb -D /var/lib/pgsql/9.2/postgres-xc/data_datanode1 --nodename datanode1 ... /usr/local/pgsql/bin/initdb -D /var/lib/pgsql/9.2/postgres-xc/data_datanode8 --nodename datanode8 /usr/local/pgsql/bin/initgtm -D /var/lib/pgsql/9.2/postgres-xc/data_gtm -Z gtm /usr/local/pgsql/bin/gtm -D /var/lib/pgsql/9.2/postgres-xc/data_gtm >> /var/lib/pgsql/9.2/postgres-xc/logfile 2>&1 & /usr/local/pgsql/bin/postgres -X -p 15432 -D /var/lib/pgsql/9.2/postgres-xc/data_datanode1 >> /var/lib/pgsql/9.2/postgres-xc/logfile 2>&1 & ... /usr/local/pgsql/bin/postgres -X -p 15439 -D /var/lib/pgsql/9.2/postgres-xc/data_datanode8 >> /var/lib/pgsql/9.2/postgres-xc/logfile 2>&1 & /usr/local/pgsql/bin/postgres -C -p 5477 -D /var/lib/pgsql/9.2/postgres-xc/data_coord1 >> /var/lib/pgsql/9.2/postgres-xc/logfile 2>&1 & /usr/local/pgsql/bin/psql -p 5477 -c "CREATE NODE datanode1 WITH (TYPE = 'datanode', PORT = 15432)" postgres ... 
/usr/local/pgsql/bin/psql -p 5477 -c "CREATE NODE datanode8 WITH (TYPE = 'datanode', PORT = 15439)" postgres /usr/local/pgsql/bin/psql -p 5477 -c "SELECT pgxc_pool_reload()" postgres /usr/local/pgsql/bin/createdb -p 5477 test /usr/local/pgsql/bin/psql -p 5477 test I then created the following tables: CREATE TABLE trails1 ( id text, a_lat double precision, a_long double precision, b_lat double precision, b_long double precision, trail_id character varying(20), type character varying(4), distance numeric(10,5) ); CREATE INDEX trails1_a_lat ON trails1 USING btree (a_lat); CREATE INDEX trails1_a_long ON trails1 USING btree (a_long); CREATE INDEX trails1_b_lat ON trails1 USING btree (b_lat); CREATE INDEX trails1_b_long ON trails1 USING btree (b_long); CREATE INDEX trails1_type ON trails1 USING btree (type); CREATE INDEX trails1_distance ON trails1 USING btree (distance); CREATE TABLE trails2 ( a_lat double precision, a_long double precision, b_lat double precision, b_long double precision, type character varying(5), distance numeric(16,8) ); CREATE INDEX trails2_a_lat ON trails2 USING btree (a_lat); CREATE INDEX trails2_a_long ON trails2 USING btree (a_long); CREATE INDEX trails2_b_lat ON trails2 USING btree (b_lat); CREATE INDEX trails2_b_long ON trails2 USING btree (b_long); CREATE INDEX trails2_type ON trails2 USING btree (type); CREATE INDEX trails2_distance ON trails2 USING btree (distance); I think I should have had something in the table definition about how to partition the data. I populated them using COPY, with 331,106 rows in trails1 and 1,124,421 rows in trails2. Simple queries return the right values: select count(*) as count from trails1; 331106 select count(*) as count from trails2; 1124421 When I do a big query (the one that normally takes hours to complete) it is only using one CPU core. I expected to see Postgres processes using near 100% CPU on 8 cores. 
SELECT count(*) as count FROM trails1 WHERE not exists ( SELECT 'x' FROM trails2 WHERE trails2.a_lat >= trails1.a_lat - 0.000833 AND trails2.a_lat <= trails1.a_lat + 0.000833 AND trails2.a_long >= trails1.a_long - 0.000833 AND trails2.a_long <= trails1.a_long + 0.000833 AND trails2.b_lat >= trails1.b_lat - 0.000833 AND trails2.b_lat <= trails1.b_lat + 0.000833 AND trails2.b_long >= trails1.b_long - 0.000833 AND trails2.b_long <= trails1.b_long + 0.000833 AND ( trails2.type = trails1.type OR trails2.type = 'S' ) AND trails2.distance >= trails1.distance - 1.0 AND trails2.distance <= trails1.distance + 1.0 ); I haven't let it run to completion. After seeing the lack of CPU usage I went looking for other problems (and found them). If I reload the pool and then check it, I get true: SELECT pgxc_pool_reload(); t SELECT pgxc_pool_check(); t But after I interact with the data at all, it fails: select count(*) as count from trails1; 331106 SELECT pgxc_pool_check(); f So I don't think it's working properly. The logs don't show anything. Any suggestions? Thanks! |
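The hunch above about the table definition is right: in Postgres-XC the distribution strategy is part of the CREATE TABLE statement. A minimal sketch (columns copied from the post; hashing on id is only an illustrative assumption, not a recommendation for this workload):

```sql
-- Sketch, not from the original post: spread trails1 rows across the
-- datanodes by hashing a column. Which column hashes best depends on the
-- query workload; "id" here is an assumption.
CREATE TABLE trails1 (
    id text,
    a_lat double precision,
    a_long double precision,
    b_lat double precision,
    b_long double precision,
    trail_id character varying(20),
    type character varying(4),
    distance numeric(10,5)
) DISTRIBUTE BY HASH (id);

-- A table that is mostly joined against could instead be replicated in
-- full on every datanode so those joins stay node-local:
-- CREATE TABLE trails2 ( ... ) DISTRIBUTE BY REPLICATION;
```

An explicit DISTRIBUTE clause is what lets the coordinator fan work out across the datanodes; without one, XC picks a default distribution that may not suit the query.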
From: amul s. <sul...@ya...> - 2013-08-08 04:12:07
|
Hello Suzuki-san, >This is not supported yet. Because product table row referred from orders table can be located at other datanode. >To make this work, you can distribute product table as replicated. This works. Because product table is more stable than order table, >impact to the whole throughput may not be significant. But my whole database contains multiple tables with reference keys. Do I need to replicate all the referred tables? Is this a good scenario for achieving good performance compared to a single PostgreSQL instance? Thanks and Regards, Amul Sul |
From: Koichi S. <koi...@gm...> - 2013-08-08 02:47:33
|
You have two options to do this with the current XC version: 1. the table is replicated, or 2. backupid is the distribution key. Regards; --- Koichi Suzuki 2013/8/8 Afonso Bione <aag...@gm...> > Dear Friends, > > Does anyone know a way for me to write this: > CREATE UNIQUE INDEX mdl_backcont_bac_uix ON mdl_backup_controllers USING > btree (backupid); > > distributed in various DataNodes? > > Best Regards > Afonso Bione |
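Spelled out as DDL, the two options look roughly like this. A sketch only: the column types are hypothetical, since the thread shows just the index definition:

```sql
-- Option 1: replicate the table. Every datanode holds a full copy, so the
-- unique index is enforced identically everywhere.
CREATE TABLE mdl_backup_controllers (
    backupid bigint,       -- hypothetical columns; only backupid appears
    controllername text    -- in the thread
) DISTRIBUTE BY REPLICATION;

CREATE UNIQUE INDEX mdl_backcont_bac_uix
    ON mdl_backup_controllers USING btree (backupid);

-- Option 2: make backupid the distribution column. Each backupid value then
-- lives on exactly one datanode, so the local unique check implies global
-- uniqueness:
-- CREATE TABLE mdl_backup_controllers ( ... ) DISTRIBUTE BY HASH (backupid);
```

The trade-off is the usual one: option 1 costs write amplification (every insert goes to every node), option 2 constrains how the table can be partitioned.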
From: Koichi S. <koi...@gm...> - 2013-08-08 02:23:43
|
Hello; 2013/8/8 amul sul <sul...@ya...> > Dear Michael-sir > > Thanks to you and Suzuki-san, Bapat-sir for their quick reply. > > Here just want to clear Parent and child relation, > Do you mean, child table which contain Foreign key and Parent table is > which going to be referred. > The following eg. PRODUCTS table will be Parent and ORDERS table is child. > > I hope, this is right, > > CREATE TABLE products ( > product_no integer PRIMARY KEY, > name text, > price numeric > ); > This is correct. The table will be distributed by hash using the primary key. > > CREATE TABLE orders ( > order_id integer, > product_no integer REFERENCES products (product_no), > quantity integer > ) > This is not supported yet, because a products row referred to from the orders table can be located on another datanode. To make this work, you can make the products table replicated. This works; because the products table is more stable than the orders table, the impact on overall throughput may not be significant. Regards; --- Koichi Suzuki > As can i see, more general way is the referred table(here products) should > replicated only when we unable distribute it (using its product_no) due to > other limitations, will definitely follow in such case. > > My simple understanding is, when we going to add any row in ORDERS table, > we are going to look respective entry is present in PRODUCTS table. > so we need to find it using it unique key in global distribution or > locally. > > That's why here we needed PRODUCTS table should distributed on its > PRODUCT_NO column. > and no worry about ORDERS table distribution. > > Is this right? 
> > But document at > http://postgres-xc.sourceforge.net/docs/1_0_3/ddl-constraints.html > > > ------------------------------------------------------------------------------ > CREATE TABLE orders ( order_id integer, product_no integer REFERENCES > products (product_no), quantity integer > ) DISTRIBUTE BY HASH(product_no); > Note: The following description applies only to Postgres-XC > Please note that column with REFERENCE must be the distribution column. In > this case, we cannot add PRIMARY KEY to order_id because PRIMARY KEY must > be the distribution column as well. This limitation is introduced because > constraints are enforced only locally in each Datanode, which will be > resolved in the future. > > ------------------------------------------------------------------------------ > > This focusing to distribute ORDERS on its reference column > (here product_no). > > Does this required? > because in my case if my query is inserting row in ORDERS table, will > always try to search refereed table i.e PRODUCTS. > so why should distribute ORDERS table with REFERENCE column. > > Lets say there will be third table which referring ORDER on its order_id, > in this case its good idea to distribute it on order_id. > Finally picture will look like > > PRODUCTS --> distribute using product_no > ORDERS --> distribute using order_id > THIRD_TABLE --> depends on requirement. > > Can i do this way, do i missing something? > > >>Please check out this video http://www.youtube.com/watch?v=g_9CYat8nkY (Preview) . > The video explains various impacts the distribution/replication has in XC. > Very informative presentation. Thank you :) > > >>>Welcome to Postgres-XC community. > Thank you Suzuki-san. > > >>Hi Amul, > > Thanks and Regards, > Amul Sul > > > ------------------------------------------------------------------------------ > Get 100% visibility into Java/.NET code with AppDynamics Lite! 
> It's a free troubleshooting tool designed for production. > Get down to code-level detail for bottlenecks, with <2% overhead. > Download for free and get started troubleshooting in minutes. > http://pubads.g.doubleclick.net/gampad/clk?id=48897031&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
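Suzuki-san's suggestion earlier in the thread maps to DDL along these lines. A sketch: the column lists come from the thread, the DISTRIBUTE clauses are the addition, and hashing orders on order_id is an assumption:

```sql
-- Referenced (parent) table replicated: every datanode holds all product
-- rows, so the foreign key below can be checked locally on whichever node
-- an orders row lands on.
CREATE TABLE products (
    product_no integer PRIMARY KEY,
    name text,
    price numeric
) DISTRIBUTE BY REPLICATION;

-- Referencing (child) table distributed. Hashing on order_id is an
-- assumption; pick the column your queries join and filter on most.
CREATE TABLE orders (
    order_id integer,
    product_no integer REFERENCES products (product_no),
    quantity integer
) DISTRIBUTE BY HASH (order_id);
```

Whether a given XC release accepts this exact foreign-key clause may vary; the point of the layout is that replicating the parent removes the need to co-locate parent and child rows, so the child's distribution column can be chosen for the workload instead.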