Archive activity (messages per month):

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2010 |  |  |  |  | 2 |  |  | 6 |  | 19 | 1 |  |
| 2011 | 12 | 1 | 4 | 4 | 32 | 12 | 11 | 1 | 6 | 3 |  | 10 |
| 2012 | 11 | 1 | 3 | 25 | 53 | 38 | 103 | 54 | 31 | 66 | 77 | 20 |
| 2013 | 91 | 86 | 103 | 107 | 25 | 37 | 17 | 59 | 38 | 78 | 29 | 15 |
| 2014 | 23 | 82 | 118 | 101 | 103 | 45 | 6 | 10 |  | 32 |  | 9 |
| 2015 | 3 | 5 |  | 1 |  |  | 9 | 4 | 3 |  |  |  |
| 2016 | 3 |  |  |  |  |  |  |  |  |  |  |  |
| 2017 |  |  |  |  |  | 3 |  |  |  |  |  |  |
| 2018 |  |  |  |  | 4 |  |  |  |  |  |  |  |
From: Sandeep G. <gup...@gm...> - 2013-10-07 15:51:33
|
Hi, I have a short query. In my setup I have one GTM, one gtm_proxy, one coordinator, and multiple datanodes. For COPY, the gtm_proxy seems to be a bottleneck. Is there any way to run multiple gtm_proxies? If so, how can I use them in my setup? Just point me to the documentation. Thanks. Sandeep |
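One way to scale this out, as a sketch only: pgxc_ctl can run one gtm_proxy on every server and point the local coordinator and datanodes at it. The parameter names below follow the sample pgxc_ctl.conf; the host names and paths are illustrative, so check them against the pgxc_ctl reference for your release.

gtmProxy=y
gtmProxyNames=(gtm_pxy1 gtm_pxy2 gtm_pxy3)
gtmProxyServers=(node1 node2 node3)
gtmProxyPorts=(20001 20001 20001)
gtmProxyDirs=($HOME/pgxc/gtm_pxy $HOME/pgxc/gtm_pxy $HOME/pgxc/gtm_pxy)
gtmPxyExtraConfig=none
gtmPxySpecificExtraConfig=(none none none)

With gtmProxy=y, pgxc_ctl is expected to wire each coordinator and datanode to the proxy running on its own host, so a single shared proxy stops being the choke point. Whether this actually removes the COPY bottleneck should be measured, since GTM itself can also be the limit.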
|
From: Sandeep G. <gup...@gm...> - 2013-10-07 15:40:35
|
Hi Ashutosh, Thanks for the note. I cannot commit right away. However, whenever you have some time can you mention relevant portions in the codebase where changes have to be made. I have a general understanding of the execution engine. I will take a look see if it feasible for me. Thanks. Sandeep On Sun, Oct 6, 2013 at 11:53 PM, Ashutosh Bapat < ash...@en...> wrote: > > > > On Sat, Oct 5, 2013 at 7:44 PM, Sandeep Gupta <gup...@gm...>wrote: > >> Thanks Michael. I understand. The only issue is that we have an update >> query as >> >> update T set T.a = -1 from A where A.x = T.x >> >> >> Both A and T and distributed by x column. The problem is that coordinator >> first does the join and then >> calls update several times at each datanode. This is turning out to be >> too slow. Would have >> been better if the entire query was shipped to the datanodes. >> >> > Right now there is no way to ship a DML with more than one relation > involved there. But that's something, I have been thinking about. If you > have developer resources and can produce a patch. I can help. > > >> Thanks. >> Sandeep >> >> >> >> On Sat, Oct 5, 2013 at 6:27 AM, Michael Paquier < >> mic...@gm...> wrote: >> >>> On Sat, Oct 5, 2013 at 2:58 AM, Sandeep Gupta <gup...@gm...> >>> wrote: >>> > I understand that the datanodes are read only and that updates/insert >>> can >>> > happen at coordinator. >>> You got it. >>> >>> > Also, it does not allow modification of column over which the records >>> are distributed. >>> Hum no, 1.1 allows ALTER TABLE that you can use to change the >>> distribution type of a table. >>> >>> > However, in case I know what I am doing, it there anyway possible to >>> modify >>> > the values directly at datanodes. >>> > The modifications are not over column over which distribution happens. >>> If you mean by connecting directly to the Datanodes, no. You would >>> break data consistency if table is replicated by the way by doing >>> that. Let the Coordinator planner do the job and choose the remote >>> nodes for you. >>> >>> There have been discussion to merge Coordinators and Datanodes >>> together though. This would allow what you say, with a simpler cluster >>> design. >>> -- >>> Michael >>> >> >> >> >> ------------------------------------------------------------------------------ >> October Webinars: Code for Performance >> Free Intel webinars can help you accelerate application performance. >> Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most >> from >> the latest Intel processors and coprocessors. See abstracts and register > >> >> http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EnterpriseDB Corporation > The Postgres Database Company > |
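A minimal way to reproduce the case being discussed, using illustrative stand-ins for the poster's tables T and A (both hash-distributed on the join column):

CREATE TABLE t_dest (x int, a int) DISTRIBUTE BY HASH (x);
CREATE TABLE t_src  (x int)        DISTRIBUTE BY HASH (x);

-- The problematic form: UPDATE ... FROM joining on the distribution column
EXPLAIN VERBOSE
UPDATE t_dest SET a = -1 FROM t_src WHERE t_src.x = t_dest.x;

If the plan shows a Data Node Scan feeding an UPDATE driven from the coordinator, as in the EXPLAIN output later in this thread, the join is not being shipped to the datanodes.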
|
From: Julian <jul...@gm...> - 2013-10-07 14:27:54
|
I can not just issue "add coordinator master",it alway asked me for more here is my step: # Deploy binaries PGXC$ depoly all #initialize everyting: one GTM Master, three GTM_Proxy, three coordinator, three datanode PGXC$ init all .... PGXC$ Createdb pgxc .... PGXC$ Psql pgxc=# select * from pgxc_node; node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id .... pgxc=# create table user_info_hash(id int primary key,firstname text,lastname text,info text) distribute by hash(id); NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "user_info_hash_pkey" for table "user_info_hash" CREATE TABLE ... PGXC$ deploy node4 PGXC$ add datanode master datanode4 *ERROR: please specify the host for the datanode masetr* PGXC$ add datanode master datanode4 node4 20008 /opt/pgxc/nodes/dn_master *ERROR: sorry found some inconflicts in datanode master configuration.* -------------------------------------------------------------- 2. and i found when the dump file contain table with CREATE TABLE user_info_hash ( id integer NOT NULL, firstname text, lastname text, info text ) DISTRIBUTE BY HASH (id) TO NODE (datanode1,datanode2,datanode3); is alway failed to add new coordinator but with this CREATE TABLE user_id ( id integer NOT NULL ) DISTRIBUTE BY HASH (id) TO NODE (dn2,dn1,dn3); it's sucessed to add the new coordinator to the cluster 於 7/10/2013 18:18, Koichi Suzuki 提到: > I found your configuration, in terms of owner and user, is the same as > my demonstration scenario. Then what you should do is: > > 1. Log in to your operating system as pgxcUser, > 2. Initialize Postgres-XC cluster with "init all" command of pgxc_ctl, > > Then the database user $pgxcOwner should have been created and you > don't have to worry about it. When you add a coordinator, simply issue > "add coordinator master" command. It should work. If you have any > other issue, please let me know. > > My pgxc_ctl demonstration scenario will be found at > https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013 > > The configuration is with full slaves, which you can disable. The demo > adds a datanode, not a coordinator. I believe there's no significant > differences. > > Regards; > > --- > Koichi Suzuki > > > 2013/10/7 Koichi Suzuki <koi...@gm... > <mailto:koi...@gm...>> > > I understood the situation. Could you let me know if pgxc is not > the operating system user name you are using? If so, I will run > the test with this situation and see what is going to happen. > Maybe a bug (not Postgres-XC core, but pgxc_ctl). If pgxc is the > operating system user you are using, then there could be another > cause. Also, please let me know your setting of pgxcUser in > pgxc_ctl.conf? > > Best; > > --- > Koichi Suzuki > > > 2013/10/7 Julian <jul...@gm... <mailto:jul...@gm...>> > > Sorry, i found i have already config pgxcOwner=pgxc in > pxc_ctl.conf. > > And i forgot to mention there's something strange about to add > new coordinator, > i can’t add coord4 by "add coordinator master coord4 node4 > 20004 20010 /opt/pgxc/nodes/coord” > > it will show me > ERROR: sorry found some inconflicts in coordinator master > configuration.PGXC$ > > unless i add coord4 to pgxc_ctl.conf and remove it at PGXC shell > PGXC$ remove coordinator master coord4 > ERROR: PGXC Node coord4: object not defined > ERROR: PGXC Node coord4: object not defined > ERROR: PGXC Node coord4: object not defined > > Is it normal ? 
> > attach file is my pgxc config > > Julian > 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 > > On 2013年10月7日Monday at 下午5:31, Koichi Suzuki wrote: > >> Okay. I'm wondering why internal pg_restore complains pgxc >> already exists. If you do not specify pgxc in pgxc_ctl and >> created it manually, newly-craeted node should not contain >> the role pgxc, though we have different massage. >> >> Could you let me know if you have any idea? >> >> Best; >> >> --- >> Koichi Suzuki >> >> >> 2013/10/7 Julian <jul...@gm... <mailto:jul...@gm...>> >>> I did not specify pgxc as a database owner, but i create >>> user pgxc to do these job. >>> >>> -- >>> Julian >>> 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 >>> >>> On 2013年10月7日Monday at 下午4:44, Koichi Suzuki wrote: >>> >>>> Did you specify pgxc as a database owner >>> >> > > > |
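For reference, when pgxc_ctl is not doing the bookkeeping itself, the node metadata on the existing coordinators is managed with SQL along the lines of the sketch below; the node name, host and port are illustrative, and pgxc_ctl normally issues the equivalent statements for you:

-- Run on each existing coordinator once the new datanode is initialized
CREATE NODE datanode4 WITH (TYPE = 'datanode', HOST = 'node4', PORT = 20008);
SELECT pgxc_pool_reload();
SELECT * FROM pgxc_node;  -- verify the new node is registered

Registering the node does not move any data; existing hash tables would still need to be redistributed onto the new node (for example with ALTER TABLE ... ADD NODE, if your release supports it).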
|
From: Koichi S. <koi...@gm...> - 2013-10-07 10:42:22
|
I do hope your work with Postgres-XC is successful. Please write here if you have any other issues. Good Luck; --- Koichi Suzuki 2013/10/4 Hector M. Jacas <hec...@et...> > > Hi all, > > First of all, I would like to thank Mr. Koichi Suzuki for their comments > to my previous post. Was right with regard to the parameter max_connection. > In the new configuration each DataNode has a maximum of 800 (three > coordinators with 200 concurrent connections each plus 200 extra) > > Regarding the suggestion about stress tool to use (dbt-1), I'm in the > study of their characteristics and peculiarities of installation and use. > > When I get my first results with it I publishes in this forum. > > I still have details to resolve as is the use of parameter > gtmPxyExtraConfig of pgxc_ctl.conf to include parameters (as worker_threads > = xx, for example) in the gtm_proxy settings. > > This was one of the problems detected during the initial deployment > through pgxc_ctl tool. > > This is the second post about my impressions with pgxc-1.1 > > In the previous post exposed them details of the scenario we want to build > as well as configurations necessary to reach a state of functionality and > operability acceptable. > > Once you reach that point, we design and execute a set of tests (based on > pgbench) in order to measure the performance of our installation and know > when we reached our goals: 300 (or more) concurrent connections and > increase the number of transactional operations. > > The modifications was made mainly on DataNodes config. These changes weres > implemented through datanodeExtraConfig parameter (pgxc_ctl.conf file) and > were as follows: > > # ==============================**================== > # Added to all the DataNode postgresql.conf > # Original: datanodeExtraConfig > log_destination = 'syslog' > logging_collector = on > log_directory = 'pg_log' > listen_addresses = '*' > max_connections = 800 > work_mem = 100MB > fsync = off > shared_buffers = 5GB > wal_buffers = 1MB > checkpoint_timeout = 5min > effective_cache_size = 12GB > checkpoint_segments = 64 > checkpoint_completion_target = 0.9 > maintenance_work_mem = 4GB > max_prepared_transactions = 800 > synchronous_commit = off > > These modifications allowed us to obtain results with increases between > two and three times (in some cases, more than three times) with respect to > the initial results set (number of transactions and tps). > > Our other goal, get more than 300 concurrent connections is reached and > can measure up to 355 connections. > > During the course of testing measurements were obtained on the consumption > of CPU and memory resources on each of the components of the cluster (dn01, > DN02, DN03, GTM). 
> > For these measurements, use the SAR command parameters: > > -R Report memory utilization statistics > -U Report CPU utilization > > 200 iterations with an interval of 5 seconds (14 test with a duration of > 60 seconds divided by 5 seconds is +/- 170 iterations) > > The averages obtained by server: > > This command was executed on each servers before launch the tests: > sar -u -r 5 200 > > dn01: > Average: CPU %user %nice %system %iowait %steal > %idle > Average: all 15.79 0.00 18.20 0.44 0.00 > 65.57 > > Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit > %commit > Average: 12003982 4327934 26.50 44163 1766752 7616567 > 37.35 > > dn02: > Average: CPU %user %nice %system %iowait %steal > %idle > Average: all 14.89 0.00 17.37 0.11 0.00 > 67.62 > > Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit > %commit > Average: 12097661 4234255 25.93 42716 1725394 7609960 > 37.31 > > dn03: > Average: CPU %user %nice %system %iowait %steal > %idle > Average: all 16.67 0.00 19.59 0.57 0.00 > 63.17 > > Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit > %commit > Average: 11603908 4728008 28.95 42955 1708146 7609769 > 37.31 > > gtm: > Average: CPU %user %nice %system %iowait %steal > %idle > Average: all 8.54 0.00 24.80 0.12 0.00 > 66.54 > > Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit > %commit > Average: 3553938 370626 9.44 42358 120419 723856 > 9.06 > > The result obtained in each of the tests: > > [root@rhelclient ~]# pgbench -c 16 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 16 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 23680 > tps = 394.458636 (including connections establishing) > tps = 394.637063 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 16 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 16 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 108929 > tps = 1815.247714 (including connections establishing) > tps = 1815.947505 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 16 -j 16 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 16 > number of threads: 16 > duration: 60 s > number of transactions actually processed: 23953 > tps = 399.034541 (including connections establishing) > tps = 399.120451 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 16 -j 16 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 16 > number of threads: 16 > duration: 60 s > number of transactions actually processed: 127142 > tps = 2118.825088 (including connections establishing) > tps = 2119.318006 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 96 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. 
> transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 96 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 95644 > tps = 1592.722011 (including connections establishing) > tps = 1595.906611 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 96 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 96 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 580728 > tps = 9675.754717 (including connections establishing) > tps = 9695.954649 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 64 -j 32 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 64 > number of threads: 32 > duration: 60 s > number of transactions actually processed: 72239 > tps = 1183.511659 (including connections establishing) > tps = 1184.529232 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 64 -j 32 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 64 > number of threads: 32 > duration: 60 s > number of transactions actually processed: 388861 > tps = 6479.326642 (including connections establishing) > tps = 6482.532350 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 64 -j 64 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 64 > number of threads: 64 > duration: 60 s > number of transactions actually processed: 61663 > tps = 1026.636406 (including connections establishing) > tps = 1027.679280 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 64 -j 64 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 64 > number of threads: 64 > duration: 60 s > number of transactions actually processed: 369321 > tps = 6151.931064 (including connections establishing) > tps = 6155.611035 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 104 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 104 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 80479 > tps = 1337.396423 (including connections establishing) > tps = 1347.248687 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 104 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 104 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 587109 > tps = 9782.401960 (including connections establishing) > tps = 9805.111450 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 300 -j 10 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. 
> transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 300 > number of threads: 10 > duration: 60 s > number of transactions actually processed: 171351 > tps = 2849.021939 (including connections establishing) > tps = 2869.345032 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 300 -j 10 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 300 > number of threads: 10 > duration: 60 s > number of transactions actually processed: 1177464 > tps = 19613.592584 (including connections establishing) > tps = 19716.537285 (excluding connections establishing) > > [root@rhelclient ~]# > > > Our new goal is to provide to our pgxc cluster of characteristics of High > Availability adding slave servers for DataNodes (and perhaps for > coordinators servers) > > When we get this environment, run again the same set of tests above, plus > new ones, such as fault recovery tests simulating the loss of cluster > components, etc.. > > Ok, that's all for now. > > Thank you very much. This is a great project and I'm very glad I found it. > > Thanks again, > > Hector M. Jacas > --- > This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE > running at host imx3.etecsa.cu > Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
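On the open item about gtmPxyExtraConfig: the *ExtraConfig parameters in pgxc_ctl.conf point to a file whose contents are appended to the generated configuration, the same mechanism used above for datanodeExtraConfig. A sketch, with an illustrative file path:

# pgxc_ctl.conf
gtmPxyExtraConfig=/home/pgxc/gtm_proxy_extra.conf

# /home/pgxc/gtm_proxy_extra.conf (appended to every gtm_proxy.conf)
worker_threads = 2

Whether raising worker_threads actually helps should be confirmed by rerunning the same pgbench runs, since the proxy is not necessarily the bottleneck here.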
|
From: Koichi S. <koi...@gm...> - 2013-10-07 10:18:46
|
I found your configuration, in terms of owner and user, is the same as my demonstration scenario. Then what you should do is: 1. Log in to your operating system as pgxcUser, 2. Initialize Postgres-XC cluster with "init all" command of pgxc_ctl, Then the database user $pgxcOwner should have been created and you don't have to worry about it. When you add a coordinator, simply issue "add coordinator master" command. It should work. If you have any other issue, please let me know. My pgxc_ctl demonstration scenario will be found at https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013 The configuration is with full slaves, which you can disable. The demo adds a datanode, not a coordinator. I believe there's no significant differences. Regards; --- Koichi Suzuki 2013/10/7 Koichi Suzuki <koi...@gm...> > I understood the situation. Could you let me know if pgxc is not the > operating system user name you are using? If so, I will run the test with > this situation and see what is going to happen. Maybe a bug (not > Postgres-XC core, but pgxc_ctl). If pgxc is the operating system user you > are using, then there could be another cause. Also, please let me know > your setting of pgxcUser in pgxc_ctl.conf? > > Best; > > --- > Koichi Suzuki > > > 2013/10/7 Julian <jul...@gm...> > >> Sorry, i found i have already config pgxcOwner=pgxc in pxc_ctl.conf. >> >> And i forgot to mention there's something strange about to add new >> coordinator, >> i can’t add coord4 by "add coordinator master coord4 node4 20004 20010 >> /opt/pgxc/nodes/coord” >> >> it will show me >> ERROR: sorry found some inconflicts in coordinator master >> configuration.PGXC$ >> >> unless i add coord4 to pgxc_ctl.conf and remove it at PGXC shell >> PGXC$ remove coordinator master coord4 >> ERROR: PGXC Node coord4: object not defined >> ERROR: PGXC Node coord4: object not defined >> ERROR: PGXC Node coord4: object not defined >> >> Is it normal ? >> >> attach file is my pgxc config >> >> Julian >> 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 >> >> On 2013年10月7日Monday at 下午5:31, Koichi Suzuki wrote: >> >> Okay. I'm wondering why internal pg_restore complains pgxc already >> exists. If you do not specify pgxc in pgxc_ctl and created it manually, >> newly-craeted node should not contain the role pgxc, though we have >> different massage. >> >> Could you let me know if you have any idea? >> >> Best; >> >> --- >> Koichi Suzuki >> >> >> 2013/10/7 Julian <jul...@gm...> >> >> I did not specify pgxc as a database owner, but i create user pgxc to do >> these job. >> >> -- >> Julian >> 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 >> >> On 2013年10月7日Monday at 下午4:44, Koichi Suzuki wrote: >> >> Did you specify pgxc as a database owner >> >> >> >> >> > |
|
From: Koichi S. <koi...@gm...> - 2013-10-07 10:04:57
|
I understood the situation. Could you let me know if pgxc is not the operating system user name you are using? If so, I will run the test with this situation and see what is going to happen. Maybe a bug (not Postgres-XC core, but pgxc_ctl). If pgxc is the operating system user you are using, then there could be another cause. Also, please let me know your setting of pgxcUser in pgxc_ctl.conf? Best; --- Koichi Suzuki 2013/10/7 Julian <jul...@gm...> > Sorry, i found i have already config pgxcOwner=pgxc in pxc_ctl.conf. > > And i forgot to mention there's something strange about to add new > coordinator, > i can’t add coord4 by "add coordinator master coord4 node4 20004 20010 > /opt/pgxc/nodes/coord” > > it will show me > ERROR: sorry found some inconflicts in coordinator master > configuration.PGXC$ > > unless i add coord4 to pgxc_ctl.conf and remove it at PGXC shell > PGXC$ remove coordinator master coord4 > ERROR: PGXC Node coord4: object not defined > ERROR: PGXC Node coord4: object not defined > ERROR: PGXC Node coord4: object not defined > > Is it normal ? > > attach file is my pgxc config > > Julian > 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 > > On 2013年10月7日Monday at 下午5:31, Koichi Suzuki wrote: > > Okay. I'm wondering why internal pg_restore complains pgxc already > exists. If you do not specify pgxc in pgxc_ctl and created it manually, > newly-craeted node should not contain the role pgxc, though we have > different massage. > > Could you let me know if you have any idea? > > Best; > > --- > Koichi Suzuki > > > 2013/10/7 Julian <jul...@gm...> > > I did not specify pgxc as a database owner, but i create user pgxc to do > these job. > > -- > Julian > 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 > > On 2013年10月7日Monday at 下午4:44, Koichi Suzuki wrote: > > Did you specify pgxc as a database owner > > > > > |
|
From: Koichi S. <koi...@gm...> - 2013-10-07 09:32:07
|
Okay. I'm wondering why the internal pg_restore complains that the role pgxc already exists. If you did not specify pgxc in pgxc_ctl and created it manually, the newly-created node should not contain the role pgxc, though we would get a different message in that case. Could you let me know if you have any idea? Best; --- Koichi Suzuki 2013/10/7 Julian <jul...@gm...> > I did not specify pgxc as a database owner, but i create user pgxc to do > these job. > > -- > Julian > Sent with Sparrow <http://www.sparrowmailapp.com/?sig> > > On Monday, 7 October 2013 at 4:44 PM, Koichi Suzuki wrote: > > Did you specify pgxc as a database owner > > > |
|
From: Julian <jul...@gm...> - 2013-10-07 08:53:46
|
I did not specify pgxc as a database owner, but I created the user pgxc to do these jobs. -- Julian Sent with Sparrow (http://www.sparrowmailapp.com/?sig) On Monday, 7 October 2013 at 4:44 PM, Koichi Suzuki wrote: > Did you specify pgxc as a database owner |
|
From: Koichi S. <koi...@gm...> - 2013-10-07 08:44:47
|
I checked two files. Did you specify pgxc as a database owner? It seems that the role pgxc already exists when pg_restore tries to restore the role "pgxc". Maybe we need to control internal pg_dump to exclude such existing superusers from the restore list, and also need to fix the crash to move forward with the same situation. Are you using different operating system user and XC owner name? If then, could you try to use the same name for these? Regards; --- Koichi Suzuki 2013/10/7 Julian <jul...@gm...> > Hi, > > Here is the log file at the new coordinator and scheme file > > > > -- > Julian > 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 > > On 2013年10月7日Monday at 下午2:40, Koichi Suzuki wrote: > > I supporte the error was encountered in pg_resgtore and pgxc is the owner > of postgres-XC in your pgxc_ctl.conf file. The issue looks some (not > necessarily just one) bugs. Can I have log files at the new coordinator? > This may have some more information on the issue. > > Regards; > > --- > Koichi Suzuki > > > 2013/10/4 Julian <jul...@gm...> > > Dear Sir, > > My cluster has configured by pg_ctl now, but till filed. > > Error message > > -------------------------------------------------------------------------------- > PGXC add coordinator master coord4 node4 20004 20010 /opt/pgxc/nodes/coord > …... > Actual Command: ssh pgxc@node4 "( pg_ctl start -Z restoremode -D > /opt/pgxc/nodes/coord -o -i ) > /tmp/squeeze-10-200_STDOUT_2618_16 2>&1" < > /dev/null > /dev/null 2>&1 > Bring remote stdout: scp pgxc@node4:/tmp/squeeze-10-200_STDOUT_2618_16 > /tmp/STDOUT_2618_17 > /dev/null 2>&1 > SET > SET > psql:/tmp/GENERAL_2618_15:12: ERROR: role "pgxc" already exists > ALTER ROLE > REVOKE > REVOKE > GRANT > GRANT > CREATE NODE > CREATE NODE > CREATE NODE > CREATE NODE > CREATE NODE > CREATE NODE > You are now connected to database "postgres" as user "pgxc". > SET > SET > SET > SET > SET > COMMENT > CREATE EXTENSION > COMMENT > SET > SET > SET > psql:/tmp/GENERAL_2618_15:92: connection to server was lost > Actual Command: ssh pgxc@node4 "( pg_ctl stop -Z restoremode -D > /opt/pgxc/nodes/coord ) > /tmp/squeeze-10-200_STDOUT_2618_18 2>&1" < > /dev/null > /dev/null 2>&1 > Bring remote stdout: scp pgxc@node4:/tmp/squeeze-10-200_STDOUT_2618_18 > /tmp/STDOUT_2618_19 > /dev/null 2>&1 > Starting coordinator master coord4 > Done. > CREATE NODE > CREATE NODE > CREATE NODE > ALTER NODE > ---------------------------------------------------------------- > > Error log > -------------------------------------------------------------------- > ERROR: role "pgxc" already exists > STATEMENT: CREATE ROLE pgxc; > LOG: *server process (PID 3013) was terminated by signal 11: > Segmentation fault* > DETAIL: Failed process was running: CREATE TABLE user_info_hash ( > id integer NOT NULL, > firstname text, > lastname text, > info text > ) > DISTRIBUTE BY HASH (id) > TO NODE (datanode1,datanode2,datanode3); > LOG: terminating any other active server processes > WARNING: terminating connection because of crash of another server process > DETAIL: The postmaster has commanded this server process to roll back the > current transaction and exit, because another server process exited > abnormally and possibly co > rrupted shared memory. > HINT: In a moment you should be able to reconnect to the database and > repeat your command. 
> --------------------------------------------------------------------------- > > Any comments are appreciated > > > Best Regards, > > -- > Julian > 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 > > On 2013年10月3日Thursday at 上午9:31, Koichi Suzuki wrote: > > You cannot add a coordinator in such a way. There're many issued to be > resolved internally. You can configure and operate whole cluster with > pgxc_ctl to get handy way to add coordinator/datanode. > > I understand you have your cluster configured without pgxc_ctl. In this > case, adding coordinator manually could be a bit complicated work. Sorry, > I've not uploaded the detailed step to do it. > > Whole steps will be found in add_coordinatorMaster() function defined in > coord_cmd.c of pgxc_ctl source code. It will be found at contrib/pgxc_ctl > in the release material. > > Please allow a bit of time to find my time to upload this information to > XC wiki. > > Or, you can backup whole database with pg_dumpall, then reconfigure new xc > cluster with additional coordinator, and then restore the backup. > > Regards; > > --- > Koichi Suzuki > > > 2013/10/3 Julian <jul...@gm...> > > Dear Sir, > > I have a cluster with 3 coordinators and 3 datanodes on 3 VM, today i > was try to added a new coordinator to the cluster, when i using command > "psql postgres -f coordinator.sql -p 5455" to restore the backup file to > the new coordinator. > > Then i got this message : > > psql:coordinator-dump.sql:105: connection to server was lost > > In the log file: > > ---------------------------------------------------------------------------------------------------------------------------------------------------------------- > 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,2,,2013-10-02 > 22:59:42 CST,,0,LOG,00000,"server process (PID 29094) was terminated by > signal 11: Segmentation fault","Failed process was r > unning: CREATE TABLE user_info_hash ( > id integer NOT NULL, > firstname text, > lastname text, > info text > ) > DISTRIBUTE BY HASH (id) > TO NODE (dn2,dn1,dn3);",,,,,,,,"" > 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,3,,2013-10-02 > 22:59:42 CST,,0,LOG,00000,"terminating any other active server > processes",,,,,,,,,"" > > ------------------------------------------------------------------------------------------------------------------------------------------------------------------ > refer to > http://postgres-xc.sourceforge.net/docs/1_1/add-node-coordinator.html > > Is there something i doing worng? > > > Thanks for your kindly reply. > > And sorry for my poor english. > > > Best regards. > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > > > |
|
From: Koichi S. <koi...@gm...> - 2013-10-07 06:41:03
|
I supporte the error was encountered in pg_resgtore and pgxc is the owner of postgres-XC in your pgxc_ctl.conf file. The issue looks some (not necessarily just one) bugs. Can I have log files at the new coordinator? This may have some more information on the issue. Regards; --- Koichi Suzuki 2013/10/4 Julian <jul...@gm...> > Dear Sir, > > My cluster has configured by pg_ctl now, but till filed. > > Error message > > -------------------------------------------------------------------------------- > PGXC add coordinator master coord4 node4 20004 20010 /opt/pgxc/nodes/coord > …... > Actual Command: ssh pgxc@node4 "( pg_ctl start -Z restoremode -D > /opt/pgxc/nodes/coord -o -i ) > /tmp/squeeze-10-200_STDOUT_2618_16 2>&1" < > /dev/null > /dev/null 2>&1 > Bring remote stdout: scp pgxc@node4:/tmp/squeeze-10-200_STDOUT_2618_16 > /tmp/STDOUT_2618_17 > /dev/null 2>&1 > SET > SET > psql:/tmp/GENERAL_2618_15:12: ERROR: role "pgxc" already exists > ALTER ROLE > REVOKE > REVOKE > GRANT > GRANT > CREATE NODE > CREATE NODE > CREATE NODE > CREATE NODE > CREATE NODE > CREATE NODE > You are now connected to database "postgres" as user "pgxc". > SET > SET > SET > SET > SET > COMMENT > CREATE EXTENSION > COMMENT > SET > SET > SET > psql:/tmp/GENERAL_2618_15:92: connection to server was lost > Actual Command: ssh pgxc@node4 "( pg_ctl stop -Z restoremode -D > /opt/pgxc/nodes/coord ) > /tmp/squeeze-10-200_STDOUT_2618_18 2>&1" < > /dev/null > /dev/null 2>&1 > Bring remote stdout: scp pgxc@node4:/tmp/squeeze-10-200_STDOUT_2618_18 > /tmp/STDOUT_2618_19 > /dev/null 2>&1 > Starting coordinator master coord4 > Done. > CREATE NODE > CREATE NODE > CREATE NODE > ALTER NODE > ---------------------------------------------------------------- > > Error log > -------------------------------------------------------------------- > ERROR: role "pgxc" already exists > STATEMENT: CREATE ROLE pgxc; > LOG: *server process (PID 3013) was terminated by signal 11: > Segmentation fault* > DETAIL: Failed process was running: CREATE TABLE user_info_hash ( > id integer NOT NULL, > firstname text, > lastname text, > info text > ) > DISTRIBUTE BY HASH (id) > TO NODE (datanode1,datanode2,datanode3); > LOG: terminating any other active server processes > WARNING: terminating connection because of crash of another server process > DETAIL: The postmaster has commanded this server process to roll back the > current transaction and exit, because another server process exited > abnormally and possibly co > rrupted shared memory. > HINT: In a moment you should be able to reconnect to the database and > repeat your command. > --------------------------------------------------------------------------- > > Any comments are appreciated > > > Best Regards, > > -- > Julian > 使用 Sparrow <http://www.sparrowmailapp.com/?sig> 發信 > > On 2013年10月3日Thursday at 上午9:31, Koichi Suzuki wrote: > > You cannot add a coordinator in such a way. There're many issued to be > resolved internally. You can configure and operate whole cluster with > pgxc_ctl to get handy way to add coordinator/datanode. > > I understand you have your cluster configured without pgxc_ctl. In this > case, adding coordinator manually could be a bit complicated work. Sorry, > I've not uploaded the detailed step to do it. > > Whole steps will be found in add_coordinatorMaster() function defined in > coord_cmd.c of pgxc_ctl source code. It will be found at contrib/pgxc_ctl > in the release material. 
> > Please allow a bit of time to find my time to upload this information to > XC wiki. > > Or, you can backup whole database with pg_dumpall, then reconfigure new xc > cluster with additional coordinator, and then restore the backup. > > Regards; > > --- > Koichi Suzuki > > > 2013/10/3 Julian <jul...@gm...> > > Dear Sir, > > I have a cluster with 3 coordinators and 3 datanodes on 3 VM, today i > was try to added a new coordinator to the cluster, when i using command > "psql postgres -f coordinator.sql -p 5455" to restore the backup file to > the new coordinator. > > Then i got this message : > > psql:coordinator-dump.sql:105: connection to server was lost > > In the log file: > > ---------------------------------------------------------------------------------------------------------------------------------------------------------------- > 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,2,,2013-10-02 > 22:59:42 CST,,0,LOG,00000,"server process (PID 29094) was terminated by > signal 11: Segmentation fault","Failed process was r > unning: CREATE TABLE user_info_hash ( > id integer NOT NULL, > firstname text, > lastname text, > info text > ) > DISTRIBUTE BY HASH (id) > TO NODE (dn2,dn1,dn3);",,,,,,,,"" > 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,3,,2013-10-02 > 22:59:42 CST,,0,LOG,00000,"terminating any other active server > processes",,,,,,,,,"" > > ------------------------------------------------------------------------------------------------------------------------------------------------------------------ > refer to > http://postgres-xc.sourceforge.net/docs/1_1/add-node-coordinator.html > > Is there something i doing worng? > > > Thanks for your kindly reply. > > And sorry for my poor english. > > > Best regards. > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > |
|
From: Ashutosh B. <ash...@en...> - 2013-10-07 03:54:04
|
On Sat, Oct 5, 2013 at 7:44 PM, Sandeep Gupta <gup...@gm...>wrote: > Thanks Michael. I understand. The only issue is that we have an update > query as > > update T set T.a = -1 from A where A.x = T.x > > > Both A and T and distributed by x column. The problem is that coordinator > first does the join and then > calls update several times at each datanode. This is turning out to be too > slow. Would have > been better if the entire query was shipped to the datanodes. > > Right now there is no way to ship a DML with more than one relation involved there. But that's something, I have been thinking about. If you have developer resources and can produce a patch. I can help. > Thanks. > Sandeep > > > > On Sat, Oct 5, 2013 at 6:27 AM, Michael Paquier <mic...@gm... > > wrote: > >> On Sat, Oct 5, 2013 at 2:58 AM, Sandeep Gupta <gup...@gm...> >> wrote: >> > I understand that the datanodes are read only and that updates/insert >> can >> > happen at coordinator. >> You got it. >> >> > Also, it does not allow modification of column over which the records >> are distributed. >> Hum no, 1.1 allows ALTER TABLE that you can use to change the >> distribution type of a table. >> >> > However, in case I know what I am doing, it there anyway possible to >> modify >> > the values directly at datanodes. >> > The modifications are not over column over which distribution happens. >> If you mean by connecting directly to the Datanodes, no. You would >> break data consistency if table is replicated by the way by doing >> that. Let the Coordinator planner do the job and choose the remote >> nodes for you. >> >> There have been discussion to merge Coordinators and Datanodes >> together though. This would allow what you say, with a simpler cluster >> design. >> -- >> Michael >> > > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
|
From: Ashutosh B. <ash...@en...> - 2013-10-07 03:52:39
|
On Fri, Oct 4, 2013 at 11:28 PM, Sandeep Gupta <gup...@gm...>wrote: > Hi, > > I understand that the datanodes are read only and that updates/insert can > happen at co-ordinator. Also, it does not allow modification of column over > which the records are distributed. > Users should not connect to the datanodes directly, that might break consistency. All the accesses to the data should happen through the coordinator. > > However, in case I know what I am doing, it there anyway possible to > modify the values directly at datanodes. The modifications are not over > column over which distribution happens. > > Thanks. > Sandeep > > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
|
From: Stefan L. <ar...@er...> - 2013-10-06 14:20:12
|
On 10/5/2013 5:19 PM, Michael Paquier wrote: > On Sat, Oct 5, 2013 at 9:00 PM, Stefan Lekov <ar...@er...> wrote: >> Hello, I'm new to the Postgres-XC project. In fact I am still >> considering if I should install it in order to try it as a >> replacement of my current database clusters (those are based around >> MySQL and its binary_log based replication). > Have you considered PostgreSQL as a potential solution before > Postgres-XC. Why do you especially need XC? I have used PosgreSQL in the past, I am using it at the moment (for other projects) and I'd like to continue using in the future. My current requirements are including having a multi-master replicated database cluster. These requirements are related to redundancy and possible scalability. While one PostgreSQL server will cope with any load that I can throw at it for the near future, that might not be the case in about an year or two. As for the redundancy part - I am familiar with PostgreSQL capabilities of a warm standby server however I am looking for something more robust. Because of the requirement of "multi-master", I am investigating Postgres-XC and pgpool2 capabilities to deliver such system. I can migrate to a single PostgreSQL server, however I am not really keen on solving the replication dilemma on-the-fly when the system is already running with Postgres - I prefer having something that is already working as expected right from the start. >> Before actually starting the installation of postgres-xc I would like >> to know what is the procedure for restarting nodes. I have already >> read a few documents/mails regarding restoring or resyncing a failed >> datanode, however these documents does not answer my simple question: >> What should be the procedure for rebooting servers? For example I >> have a kernel updated pending (due to security reasons) - I'm >> installing the new kernel, but I have to reboot all machine. >> Theoretically all nodes (both coordinators and datanodes) are working >> on different physical servers or VMes. In a perfect scenario I would >> like to keep the system in production while I am restarting the >> servers one by one. However I am not sure what would be the effect of >> rebooting servers one by one. > If a node is restarted or facing an outage, all the transactions it > needs to be involved in will simply fail. In the case of Coordinator, > this has effect only for DDL. For Datanodes, this has effect as well > for DDL, but also for DML and SELECT of the node is needed for the > transaction. There would be no DDL during these operations. I can limit the queries to DML only. >> For purpose of example let me have four datanodes: A,B,C,D All >> servers are synced and are operating as expected. 1) Upgrade A, >> reboot A 2) INSERT/UPDATE/DELETE queries 3) A boots up and is >> successfully started 4) INSERT/UPDATE/DELETE queries 5) Upgrade B, >> reboot B ... ... As for the "Coordinators" nodes. How are those >> affected by temporary stopping and restarting the postgres-xc related >> services. What should be the load balancer in front of these servers >> in order to be able to both load-balance and fail-over if one of the >> Coordinators is offline either due to failed server or due to >> rebooting servers. > DDLs won't work. Applications will use one access point. In this case > no problems for your application, connect to the other Coordinators to > execute queries as long as they are not DDLs. 
What system, application or method would you recommend for performing the load-balance/fail-over of connections to the Coordinators. >> I have no problem with relatively heavy operation of full restore of >> a datanode in event of failed server. Such restoration operation can >> be properly scheduled and executed, however I am interested how would >> postgres-xc react to simple scenarioa simple operation of restarting >> a server due to whatever reasons should > As mentioned above, transactions that will need it will simply fail. > You could always failover a slave for the outage period if necessary. Correct me if I'm wrong: All data (read databases, schema, tables, etc) would be replicated to all datanodes. So before host A goes down all servers would have the same dataset. This way no transaction should fail due to the missing datanode A. While A has been booting up several transactions have passed (since such restart is an operation I can schedule, I'm doing that during time when we have low to no load on our systems, thus the transaction count is relatively low). My question is how to bring A back to having "the same dataset" as the rest of the datanodes before I can continue with the next host/datanode? Regards, Stefan Lekov |
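On the load-balancing question raised above: any TCP-level balancer that understands health checks can sit in front of the coordinators, pgpool-II being the PostgreSQL-specific option already mentioned. A minimal HAProxy sketch follows; the host names and ports are illustrative, and this is not a recommendation of HAProxy over pgpool-II:

listen pgxc_coordinators
    bind *:5432
    mode tcp
    option tcp-check
    balance roundrobin
    server coord1 node1:5432 check
    server coord2 node2:5432 check
    server coord3 node3:5432 check

A failed or rebooting coordinator then drops out of rotation automatically and rejoins once its health check passes again.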
|
From: Michael P. <mic...@gm...> - 2013-10-06 13:40:02
|
On Sun, Oct 6, 2013 at 7:45 PM, Yehezkel Horowitz <hor...@ch...> wrote: > Second, I allow myself suggest you to consider some conventions for your > mailing list (as example cUrl’s Etiquette: > http://curl.haxx.se/mail/etiquette.html) as it is quite hard to follow > threads in the archive. This is rather interesting. Thanks for pointing to that! > My goal – I have an application that needs SQL DB and must always be up (I > have a backup machine for this purpose). Have you thought about PostgreSQL itself for your solution. Is there any reason you'd need XC? Do you have an amount of data that forces you to use multi-master architecture or perhaps PG itself could handle it? > I plan to deploy as follow: > Machine A: 1 Datanode, 1 Coordinator, 1 GTM proxy, 1 GTM > Machine B: 1 Datanode, 1 Coordinator, 1 GTM proxy, 1 GTM-slave > > Both machines have my application installed on, and the clients of my > application will connect to the working machine (in normal case, they can > connect to either one of them with simple load-balancer, hence I need > multi-master replication). So all your tables will be replicated. > If I understand correctly, in case of failure in Machine A, I need to > promote the GTM-slave to become GTM master, and reconnect the GTM proxy - > all this could be done in Machine B. Right? Yep, this is doable. If all your data is replicated you would be able to do that. However you need to keep in mind that you will not be able to write new data to node B if node A is not accessible. If you data is replicated and you need to update a table, both nodes need to work. Or if you want B to be still writable, you could update the node information inside it, make it workable alone, and when server A is up again recreate a new XC node from scratch and add it again to the cluster. > My questions: > > 1. In your docs, you always put the GTM in dedicated machine. > a. Is this a requirement, just an easy to understand topology or best > practice? GTM consumes a certain amount of CPU and does not need much RAM, while for your nodes you might prioritize the opposite. > b. In case of best practice, what is the expected penalty in case the > GTM is deployed on the same machine with coordinator and datanode? CPU resource consumption and reduction of performance if your queries need some CPU with for example internal sort operations among other things. > c. In such deployment, is there a need for GTM proxy on this machine? This is actually a good question. GTM proxy is here to reduce the amount of data exchanged between GTM and the nodes. So yes if you have a lot of concurrent sessions in the whole cluster. > 2. What should I do after Machine A is back to life if I want: > a. Make it act as a new slave? > b. Make it become the master again? There is no principle of master/slave in XC like in Postgres (well you could create a slave node for an individual Coordinator/Datanode). But basically in your configuration machine A and B have the same state. Only GTM is a slave. Regards, -- Michael |
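For the failover step described here, the commands below are a sketch of what the sequence looks like with gtm_ctl, based on the XC documentation; the data directories, host name and port are illustrative, and the exact options should be verified against your release:

# On machine B: promote the GTM slave to master
gtm_ctl promote -Z gtm -D /opt/pgxc/gtm_slave

# On each machine: point the local gtm_proxy at the promoted GTM
gtm_ctl reconnect -Z gtm_proxy -D /opt/pgxc/gtm_proxy -o "-s machineB -t 6666"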
|
From: Yehezkel H. <hor...@ch...> - 2013-10-06 11:53:09
|
First, I want to thank you all for this project. It seems very interesting, involves highly sophisticated technology, and answers a real need in the industry.

Second, allow me to suggest that you consider some conventions for your mailing list (for example cURL's etiquette: http://curl.haxx.se/mail/etiquette.html), as it is quite hard to follow threads in the archive.

My goal: I have an application that needs an SQL database and must always be up (I have a backup machine for this purpose). I plan to deploy as follows:

Machine A: 1 Datanode, 1 Coordinator, 1 GTM proxy, 1 GTM
Machine B: 1 Datanode, 1 Coordinator, 1 GTM proxy, 1 GTM slave

Both machines have my application installed, and the clients of my application will connect to the working machine (in the normal case they can connect to either one through a simple load balancer, hence I need multi-master replication).

If I understand correctly, in case of a failure of Machine A, I need to promote the GTM slave to become the GTM master and reconnect the GTM proxy, and all of this can be done on Machine B. Right?

My questions:

1. In your docs, you always put the GTM on a dedicated machine.
   a. Is this a requirement, just an easy-to-understand topology, or best practice?
   b. If it is best practice, what is the expected penalty if the GTM is deployed on the same machine as a coordinator and datanode?
   c. In such a deployment, is there a need for a GTM proxy on this machine?
2. What should I do after Machine A is back to life if I want to:
   a. Make it act as a new slave?
   b. Make it become the master again?

I saw this question in the archive (http://sourceforge.net/p/postgres-xc/mailman/message/31302978/), but didn't find any answer:

> I suppose my question is: what do I need to do, to make the former masters > into new slaves? To me it would make sense to be able to failover node1 > once and then again, and be left with more or less the same configuration > as in the beginning. It would be okay if there is some magic command I can > run to reconfigure a former master as the new slave.

I hope these are not silly questions, but I couldn't find answers in the docs/archive.

Thanks in advance

Yehezkel Horowitz
Check Point Software Technologies Ltd. |
|
From: Michael P. <mic...@gm...> - 2013-10-05 21:49:59
|
On Sat, Oct 5, 2013 at 11:48 PM, Sandeep Gupta <gup...@gm...> wrote: > > Hi Michael, > > Sure. I am using pgxc v.1. 1.0 or 1.1? > For the query explain verbose update person set intervened = 84 from d1_sum > WHERE person.pid=d1_sum.pid; > > > Query Plan: > > Update on public.person (cost=0.00..0.00 rows=1000 width=24) > Node/s: datanode1 > Node expr: person.pid > Remote query: UPDATE ONLY public.person SET intervened = $3 WHERE > ((person.ctid = $4) AND (person.xc_node_id = $5)) > -> Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 > width=24) > Output: person.pid, person.persons, 84, person.pid, person.ctid, > person.xc_node_id, d1_sum.ctid > Node/s: datanode1 > Remote query: SELECT l.a_1, l.a_2, l.a_3, l.a_4, r.a_1 FROM > ((SELECT person.pid, person.persons, person.ctid, person.xc_node_i > d FROM ONLY public.person WHERE true) l(a_1, a_2, a_3, a_4) JOIN (SELECT > d1_sum.ctid, d1_sum.pid FROM ONLY public.d1_sum WHERE true) r( > a_1, a_2) ON (true)) WHERE (l.a_1 = r.a_2) > (8 rows) > > For the second style > > explain verbose update person set intervened = 84 where person.pid = (select > d1_sum.pid from d1_sum,person WHERE person.pid=d1_sum.pid); > > Update on public.person (cost=0.00..0.00 rows=1000 width=18) > Node/s: datanode1 > Node expr: public.person.pid > Remote query: UPDATE ONLY public.person SET intervened = $3 WHERE > ((person.ctid = $4) AND (person.xc_node_id = $5)) > InitPlan 1 (returns $0) > -> Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 > rows=1000 width=4) > Output: d1_sum.pid > Node/s: datanode1 > Remote query: SELECT l.a_1 FROM ((SELECT d1_sum.pid FROM ONLY > public.d1_sum WHERE true) l(a_1) JOIN (SELECT person.pid FROM > ONLY public.person WHERE true) r(a_1) ON (true)) WHERE (l.a_1 = r.a_1) > -> Data Node Scan on person "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 > rows=1000 width=18) > Output: public.person.pid, public.person.persons, 84, > public.person.pid, public.person.ctid, public.person.xc_node_id > Node/s: datanode1 > Remote query: SELECT pid, persons, ctid, xc_node_id FROM ONLY > public.person WHERE true > Coordinator quals: (public.person.pid = $0) > > > In both the scenarios the planner breaks it into two parts: update and join. > The results of join is pulled up at the coordinator and then shipped one by > one for update. Indeed you are right. It seems that FROM clause support in UPDATE is limited? Others, comments on that? I thought that there has been some work done in the area. -- Michael |
|
From: Sandeep G. <gup...@gm...> - 2013-10-05 14:48:09
|
Hi Michael,
Sure. I am using pgxc v.1.
For the query explain verbose update person set intervened = 84 from d1_sum
WHERE person.pid=d1_sum.pid;
Query Plan:
Update on public.person (cost=0.00..0.00 rows=1000 width=24)
Node/s: datanode1
Node expr: person.pid
Remote query: UPDATE ONLY public.person SET intervened = $3 WHERE
((person.ctid = $4) AND (person.xc_node_id = $5))
-> Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000
width=24)
Output: person.pid, person.persons, 84, person.pid, person.ctid,
person.xc_node_id, d1_sum.ctid
Node/s: datanode1
Remote query: SELECT l.a_1, l.a_2, l.a_3, l.a_4, r.a_1 FROM
((SELECT person.pid, person.persons, person.ctid, person.xc_node_i
d FROM ONLY public.person WHERE true) l(a_1, a_2, a_3, a_4) JOIN (SELECT
d1_sum.ctid, d1_sum.pid FROM ONLY public.d1_sum WHERE true) r(
a_1, a_2) ON (true)) WHERE (l.a_1 = r.a_2)
(8 rows)
For the second style
explain verbose update person set intervened = 84 where person.pid =
(select d1_sum.pid from d1_sum,person WHERE person.pid=d1_sum.pid);
Update on public.person (cost=0.00..0.00 rows=1000 width=18)
Node/s: datanode1
Node expr: public.person.pid
Remote query: UPDATE ONLY public.person SET intervened = $3 WHERE
((person.ctid = $4) AND (person.xc_node_id = $5))
InitPlan 1 (returns $0)
-> Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00
rows=1000 width=4)
Output: d1_sum.pid
Node/s: datanode1
Remote query: SELECT l.a_1 FROM ((SELECT d1_sum.pid FROM ONLY
public.d1_sum WHERE true) l(a_1) JOIN (SELECT person.pid FROM
ONLY public.person WHERE true) r(a_1) ON (true)) WHERE (l.a_1 = r.a_1)
-> Data Node Scan on person "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00
rows=1000 width=18)
Output: public.person.pid, public.person.persons, 84,
public.person.pid, public.person.ctid, public.person.xc_node_id
Node/s: datanode1
Remote query: SELECT pid, persons, ctid, xc_node_id FROM ONLY
public.person WHERE true
Coordinator quals: (public.person.pid = $0)
In both scenarios the planner breaks the statement into two parts: the update
and the join. The results of the join are pulled up at the coordinator and
then shipped back one by one for the update.
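For what it is worth, one more variant that could be checked with EXPLAIN
VERBOSE is an uncorrelated IN subquery; this is only a sketch using the same
person/d1_sum tables as above, and whether the planner actually ships it in
this release is not guaranteed:

explain verbose
update person set intervened = 84
where person.pid in (select d1_sum.pid from d1_sum);

-- If the statement were fully shippable, the plan would show the original
-- UPDATE in a single Remote query line, instead of a coordinator-side join
-- feeding a parameterized UPDATE keyed on ctid and xc_node_id.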
Thanks for taking a look.
-Sandeep
On Sat, Oct 5, 2013 at 10:26 AM, Michael Paquier
<mic...@gm...>wrote:
> On Sat, Oct 5, 2013 at 11:14 PM, Sandeep Gupta <gup...@gm...>
> wrote:
> > Thanks Michael. I understand. The only issue is that we have an update
> > query as
> >
> > update T set T.a = -1 from A where A.x = T.x
> >
> >
> > Both A and T are distributed by the x column. The problem is that the coordinator
> > first does the join and then
> > calls update several times at each datanode. This is turning out to be too
> > slow. It would have
> > been better if the entire query was shipped to the datanodes.
> Hum?! Logically, I would imagine that if A and T are distributed by x
> this WHERE clause should be pushed down as the SET clause is a
> constant. However perhaps UPDATE FROM does not have an explicit
> support... Could you provide the version number and an EXPLAIN VERBOSE
> output?
>
> What if you put the join in a subquery or a WITH clause? Like
> this for example:
> update T set T.a = -1 where T.x = (select A.x from A,T where A.x = T.x);
> --
> Michael
>
|
|
From: Michael P. <mic...@gm...> - 2013-10-05 14:26:59
|
On Sat, Oct 5, 2013 at 11:14 PM, Sandeep Gupta <gup...@gm...> wrote:
> Thanks Michael. I understand. The only issue is that we have an update
> query as
>
> update T set T.a = -1 from A where A.x = T.x
>
> Both A and T are distributed by the x column. The problem is that the
> coordinator first does the join and then calls update several times at each
> datanode. This is turning out to be too slow. It would have been better if
> the entire query was shipped to the datanodes.
Hum?! Logically, I would imagine that if A and T are distributed by x
this WHERE clause should be pushed down, as the SET clause is a
constant. However perhaps UPDATE FROM does not have explicit
support... Could you provide the version number and an EXPLAIN VERBOSE
output?

What if you put the join in a subquery or a WITH clause? Like
this for example:
update T set T.a = -1 where T.x = (select A.x from A,T where A.x = T.x);
--
Michael
|
From: Michael P. <mic...@gm...> - 2013-10-05 14:19:10
|
On Sat, Oct 5, 2013 at 9:00 PM, Stefan Lekov <ar...@er...> wrote:
> Hello,
>
> I'm new to the Postgres-XC project. In fact I am still considering if I
> should install it in order to try it as a replacement for my current
> database clusters (those are based around MySQL and its binary_log based
> replication).
Have you considered PostgreSQL as a potential solution before
Postgres-XC? Why do you especially need XC?

> Before actually starting the installation of postgres-xc I would like to
> know what the procedure is for restarting nodes. I have already read a few
> documents/mails regarding restoring or resyncing a failed datanode, however
> these documents do not answer my simple question:
>
> What should be the procedure for rebooting servers? For example, I have a
> kernel update pending (due to security reasons) - I'm installing the new
> kernel, but I have to reboot all machines. Theoretically all nodes (both
> coordinators and datanodes) are working on different physical servers or
> VMs. In a perfect scenario I would like to keep the system in production
> while I am restarting the servers one by one. However, I am not sure what
> the effect of rebooting servers one by one would be.
If a node is restarted or facing an outage, all the transactions it
needs to be involved in will simply fail. In the case of a Coordinator,
this has an effect only for DDL. For Datanodes, this has an effect for
DDL as well, but also for DML and SELECT if the node is needed for the
transaction.

> For the purpose of an example let me have four datanodes: A,B,C,D
>
> All servers are synced and are operating as expected.
> 1) Upgrade A, reboot A
> 2) INSERT/UPDATE/DELETE queries
> 3) A boots up and is successfully started
> 4) INSERT/UPDATE/DELETE queries
> 5) Upgrade B, reboot B
> ...
> ...
> As for the "Coordinators" nodes: how are those affected by temporarily
> stopping and restarting the postgres-xc related services? What should the
> load balancer in front of these servers be in order to be able to both
> load-balance and fail over if one of the Coordinators is offline, either
> due to a failed server or due to rebooting servers?
DDLs won't work. Applications will use one access point. In this case
there are no problems for your application: connect to the other
Coordinators to execute queries as long as they are not DDLs.

> I have no problem with the relatively heavy operation of a full restore of
> a datanode in the event of a failed server. Such a restoration operation
> can be properly scheduled and executed; however, I am interested in how
> postgres-xc would react to the simple operation of restarting a server for
> whatever reason.
As mentioned above, transactions that need it will simply fail. You
could always fail over to a slave for the outage period if necessary.
--
Michael
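For reference, a minimal sketch of the checks one might run from a
Coordinator around such a rolling restart; the node names that come back are
whatever your cluster defines, while pgxc_node and pgxc_pool_reload() are the
standard XC catalog and pooler-reload function:

-- See how the cluster registers its nodes, before and after the reboot
SELECT node_name, node_type, node_host, node_port FROM pgxc_node;

-- Once a restarted Datanode or Coordinator is back, refresh the pooler
-- connections on each Coordinator so that stale sessions are dropped
SELECT pgxc_pool_reload();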
|
From: Sandeep G. <gup...@gm...> - 2013-10-05 14:14:26
|
Thanks Michael. I understand. The only issue is that we have an update query as

update T set T.a = -1 from A where A.x = T.x

Both A and T are distributed by the x column. The problem is that the
coordinator first does the join and then calls update several times at each
datanode. This is turning out to be too slow. It would have been better if
the entire query was shipped to the datanodes.

Thanks.
Sandeep

On Sat, Oct 5, 2013 at 6:27 AM, Michael Paquier <mic...@gm...> wrote:
> On Sat, Oct 5, 2013 at 2:58 AM, Sandeep Gupta <gup...@gm...> wrote:
> > I understand that the datanodes are read only and that updates/inserts can
> > happen at the coordinator.
> You got it.
>
> > Also, it does not allow modification of the column over which the records
> > are distributed.
> Hum no, 1.1 allows ALTER TABLE, which you can use to change the
> distribution type of a table.
>
> > However, in case I know what I am doing, is there any way to modify
> > the values directly at the datanodes?
> > The modifications are not over the column over which distribution happens.
> If you mean by connecting directly to the Datanodes, no. You would
> break data consistency by doing that, especially if the table is
> replicated. Let the Coordinator planner do the job and choose the remote
> nodes for you.
>
> There have been discussions about merging Coordinators and Datanodes
> together though. This would allow what you say, with a simpler cluster
> design.
> --
> Michael
|
From: Stefan L. <ar...@er...> - 2013-10-05 12:27:37
|
Hello,

I'm new to the Postgres-XC project. In fact I am still considering whether I
should install it in order to try it as a replacement for my current database
clusters (those are based around MySQL and its binary_log based replication).

Before actually starting the installation of postgres-xc I would like to know
what the procedure is for restarting nodes. I have already read a few
documents/mails regarding restoring or resyncing a failed datanode, however
these documents do not answer my simple question:

What should be the procedure for rebooting servers? For example, I have a
kernel update pending (due to security reasons) - I'm installing the new
kernel, but I have to reboot all machines. Theoretically all nodes (both
coordinators and datanodes) are working on different physical servers or VMs.
In a perfect scenario I would like to keep the system in production while I
am restarting the servers one by one. However, I am not sure what the effect
of rebooting servers one by one would be.

For the purpose of an example let me have four datanodes: A,B,C,D

All servers are synced and are operating as expected.
1) Upgrade A, reboot A
2) INSERT/UPDATE/DELETE queries
3) A boots up and is successfully started
4) INSERT/UPDATE/DELETE queries
5) Upgrade B, reboot B
...
...

As for the "Coordinators" nodes: how are those affected by temporarily
stopping and restarting the postgres-xc related services? What should the
load balancer in front of these servers be in order to be able to both
load-balance and fail over if one of the Coordinators is offline, either due
to a failed server or due to rebooting servers?

I have no problem with the relatively heavy operation of a full restore of a
datanode in the event of a failed server. Such a restoration operation can be
properly scheduled and executed; however, I am interested in how postgres-xc
would react to the simple operation of restarting a server for whatever
reason.

Kind Regards,
Stefan Lekov
|
From: Michael P. <mic...@gm...> - 2013-10-05 10:27:21
|
On Sat, Oct 5, 2013 at 2:58 AM, Sandeep Gupta <gup...@gm...> wrote:
> I understand that the datanodes are read only and that updates/inserts can
> happen at the coordinator.
You got it.

> Also, it does not allow modification of the column over which the records
> are distributed.
Hum no, 1.1 allows ALTER TABLE, which you can use to change the
distribution type of a table.

> However, in case I know what I am doing, is there any way to modify
> the values directly at the datanodes?
> The modifications are not over the column over which distribution happens.
If you mean by connecting directly to the Datanodes, no. You would
break data consistency by doing that, especially if the table is
replicated. Let the Coordinator planner do the job and choose the remote
nodes for you.

There have been discussions about merging Coordinators and Datanodes
together though. This would allow what you say, with a simpler cluster
design.
--
Michael
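As a small illustration of the two points above (the table name t, column x
and node name datanode1 are placeholders, not objects from this thread): the
1.1 ALTER TABLE form for changing a table's distribution, and EXECUTE DIRECT
for a read-only look at what a single Datanode holds, issued from a
Coordinator:

-- Change the distribution of an existing table (Postgres-XC 1.1)
ALTER TABLE t DISTRIBUTE BY HASH (x);

-- Read-only peek at one Datanode's share of the rows, from the Coordinator
EXECUTE DIRECT ON (datanode1) 'SELECT count(*) FROM t';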
|
From: Sandeep G. <gup...@gm...> - 2013-10-04 17:58:31
|
Hi,

I understand that the datanodes are read-only and that updates/inserts can
happen at the coordinator. Also, it does not allow modification of the column
over which the records are distributed.

However, in case I know what I am doing, is there any way to modify the
values directly at the datanodes? The modifications are not over the column
over which distribution happens.

Thanks.
Sandeep
|
From: Julian <jul...@gm...> - 2013-10-04 09:59:47
|
Dear Sir,
My cluster has now been configured with pgxc_ctl, but it still failed.
Error message
--------------------------------------------------------------------------------
PGXC add coordinator master coord4 node4 20004 20010 /opt/pgxc/nodes/coord
…...
Actual Command: ssh pgxc@node4 "( pg_ctl start -Z restoremode -D /opt/pgxc/nodes/coord -o -i ) > /tmp/squeeze-10-200_STDOUT_2618_16 2>&1" < /dev/null > /dev/null 2>&1
Bring remote stdout: scp pgxc@node4:/tmp/squeeze-10-200_STDOUT_2618_16 /tmp/STDOUT_2618_17 > /dev/null 2>&1
SET
SET
psql:/tmp/GENERAL_2618_15:12: ERROR: role "pgxc" already exists
ALTER ROLE
REVOKE
REVOKE
GRANT
GRANT
CREATE NODE
CREATE NODE
CREATE NODE
CREATE NODE
CREATE NODE
CREATE NODE
You are now connected to database "postgres" as user "pgxc".
SET
SET
SET
SET
SET
COMMENT
CREATE EXTENSION
COMMENT
SET
SET
SET
psql:/tmp/GENERAL_2618_15:92: connection to server was lost
Actual Command: ssh pgxc@node4 "( pg_ctl stop -Z restoremode -D /opt/pgxc/nodes/coord ) > /tmp/squeeze-10-200_STDOUT_2618_18 2>&1" < /dev/null > /dev/null 2>&1
Bring remote stdout: scp pgxc@node4:/tmp/squeeze-10-200_STDOUT_2618_18 /tmp/STDOUT_2618_19 > /dev/null 2>&1
Starting coordinator master coord4
Done.
CREATE NODE
CREATE NODE
CREATE NODE
ALTER NODE
----------------------------------------------------------------
Error log
--------------------------------------------------------------------
ERROR: role "pgxc" already exists
STATEMENT: CREATE ROLE pgxc;
LOG: server process (PID 3013) was terminated by signal 11: Segmentation fault
DETAIL: Failed process was running: CREATE TABLE user_info_hash (
id integer NOT NULL,
firstname text,
lastname text,
info text
)
DISTRIBUTE BY HASH (id)
TO NODE (datanode1,datanode2,datanode3);
LOG: terminating any other active server processes
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly co
rrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
---------------------------------------------------------------------------
Any comments are appreciated
Best Regards,
--
Julian
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Thursday, October 3, 2013 at 9:31 AM, Koichi Suzuki wrote:
> You cannot add a coordinator in such a way. There are many issues to be resolved internally. You can configure and operate the whole cluster with pgxc_ctl to get a handy way to add a coordinator/datanode.
>
> I understand you have your cluster configured without pgxc_ctl. In this case, adding a coordinator manually could be a bit of complicated work. Sorry, I've not uploaded the detailed steps to do it.
>
> The whole sequence of steps can be found in the add_coordinatorMaster() function defined in coord_cmd.c of the pgxc_ctl source code, which lives at contrib/pgxc_ctl in the release material.
>
> Please allow me a bit of time to upload this information to the XC wiki.
>
> Or, you can back up the whole database with pg_dumpall, then reconfigure a new XC cluster with the additional coordinator, and then restore the backup.
>
> Regards;
>
> ---
> Koichi Suzuki
>
>
>
>
> 2013/10/3 Julian <jul...@gm... (mailto:jul...@gm...)>
> > Dear Sir,
> >
> > I have a cluster with 3 coordinators and 3 datanodes on 3 VMs. Today I
> > tried to add a new coordinator to the cluster, using the command
> > "psql postgres -f coordinator.sql -p 5455" to restore the backup file to
> > the new coordinator.
> >
> > Then I got this message:
> >
> > psql:coordinator-dump.sql:105: connection to server was lost
> >
> > In the log file:
> > ----------------------------------------------------------------------------------------------------------------------------------------------------------------
> > 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,2,,2013-10-02
> > 22:59:42 CST,,0,LOG,00000,"server process (PID 29094) was terminated by
> > signal 11: Segmentation fault","Failed process was r
> > unning: CREATE TABLE user_info_hash (
> > id integer NOT NULL,
> > firstname text,
> > lastname text,
> > info text
> > )
> > DISTRIBUTE BY HASH (id)
> > TO NODE (dn2,dn1,dn3);",,,,,,,,""
> > 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,3,,2013-10-02
> > 22:59:42 CST,,0,LOG,00000,"terminating any other active server
> > processes",,,,,,,,,""
> > ------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > refer to
> > http://postgres-xc.sourceforge.net/docs/1_1/add-node-coordinator.html
> >
> > Is there something I am doing wrong?
> >
> >
> > Thanks for your kindly reply.
> >
> > And sorry for my poor English.
> >
> >
> > Best regards.
> >
> > ------------------------------------------------------------------------------
> > October Webinars: Code for Performance
> > Free Intel webinars can help you accelerate application performance.
> > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most from
> > the latest Intel processors and coprocessors. See abstracts and register >
> > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk
> > _______________________________________________
> > Postgres-xc-general mailing list
> > Pos...@li... (mailto:Pos...@li...)
> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>
|
|
From: Koichi S. <koi...@gm...> - 2013-10-04 02:33:11
|
Sorry, I'm not familiar with Hibernate, but JDBC does support DISTRIBUTE BY
REPLICATION. You can issue CREATE TABLE through general JDBC apps, or you can
issue ALTER TABLE to change a distributed table into a replicated table
outside Hibernate, through psql. There's no influence on other query
statements.

Regards;
---
Koichi Suzuki

2013/10/4 Anson Abraham <ans...@gm...>
> So I migrated a pg database (9.1) to postgres-xc 1.1. Required me to
> essentially apply the Distribute By Replication to most of the tables. But
> doing so, apparently this app threw out an error:
>
> Unable to upgrade schema to latest version.
> org.hibernate.exception.GenericJDBCException: ResultSet not positioned properly, perhaps you need to call next.
>     at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
>     at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
>     at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
>     at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:108)
>     at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
>     at $Proxy10.getInt(Unknown Source)
>     at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:212)
>     at com.cloudera.enterprise.dbutil.DbUtil$1SchemaVersionWork.execute(DbUtil.java:159)
>     at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:54)
>     at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1937)
>     at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1934)
>     at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:211)
>     at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1955)
>     at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1941)
>     at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:171)
>     at com.cloudera.enterprise.dbutil.DbUtil.upgradeSchema(DbUtil.java:333)
>     at com.cloudera.cmon.FhDatabaseManager.initialize(FhDatabaseManager.java:68)
>     at com.cloudera.cmon.firehose.Main.main(Main.java:339)
> Caused by: org.postgresql.util.PSQLException: ResultSet not positioned properly, perhaps you need to call next.
>     at org.postgresql.jdbc2.AbstractJdbc2ResultSet.checkResultSet(AbstractJdbc2ResultSet.java:2695)
>     at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:1992)
>     at com.mchange.v2.c3p0.impl.NewProxyResultSet.getInt(NewProxyResultSet.java:2547)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:104)
>     ...
>
> So I'm assuming Hibernate does not support Distribute By Replication? Is
> that so, or did I not need to apply Distribute By Replication, even though
> the table has a PK which is also referenced as an FK from another table? If
> not, is there a hack to get around this, w/o having to recompile the
> Hibernate objects?
>
> ------------------------------------------------------------------------------
> October Webinars: Code for Performance
> Free Intel webinars can help you accelerate application performance.
> Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most
> from the latest Intel processors and coprocessors. See abstracts and register >
> http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
|
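To make the ALTER TABLE route Koichi describes concrete, here is a minimal
psql sketch; the table name user_info and its columns are placeholders, not
the actual schema from this thread:

-- Convert an existing distributed table into a replicated one, outside Hibernate
ALTER TABLE user_info DISTRIBUTE BY REPLICATION;

-- Or declare the table replicated at creation time
CREATE TABLE user_info (
    id   integer PRIMARY KEY,
    info text
) DISTRIBUTE BY REPLICATION;

Either way, this matches Koichi's note above that other query statements
issued through JDBC/Hibernate are not affected by the change.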