From: 鈴木 幸市 <ko...@in...> - 2013-06-25 03:32:31
If it doesn't work, here's a backup you can try. git://github.com/postgres-xc/postgres-xc.git --- Koichi Suzuki On 2013/06/25, at 10:45, Michael Paquier <mic...@gm...> wrote: > On Tue, Jun 25, 2013 at 12:54 AM, Adam Dec <ada...@gm...> wrote: >> Do I need any credential to clone Postgres XC from GIT repository. >> I would like to test version 1.1. > You need no credential when fetching in read mode the GIT repo on > sourceforce. Use this URL to clone it: > git://postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc > -- > Michael > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
From: Michael P. <mic...@gm...> - 2013-06-25 01:45:44
On Tue, Jun 25, 2013 at 12:54 AM, Adam Dec <ada...@gm...> wrote: > Do I need any credentials to clone Postgres-XC from the Git repository? > I would like to test version 1.1. No credentials are needed when fetching the Git repo on SourceForge in read-only mode. Use this URL to clone it: git://postgres-xc.git.sourceforge.net/gitroot/postgres-xc/postgres-xc -- Michael
From: Adam D. <ada...@gm...> - 2013-06-24 15:54:24
Hi all! Do I need any credentials to clone Postgres-XC from the Git repository? I would like to test version 1.1. Regards, Adam Dec
From: Matt W. <MW...@XI...> - 2013-06-20 18:03:09
As I look at this in more detail, I have a couple of thoughts: 1. This may be a performance issue as much as anything. 2. It seems to me that a tiny amount of data won’t be able to reproduce the issue. I’ve used a table that has fewer columns to try to reproduce this. For example, this actually completes: # create table accn_proc_tiny distribute by hash (fk_accn_id) to node (node1,node2,node3,node4,node5,node6,node7,node8) as select fk_accn_id,pk_accn_proc_seq_id,fk_sta_id from accn_proc_reduced limit 1; INSERT 0 1 Changing the limit value to 10,000; 100,000; and even 1M also finish successfully: # create table accn_proc_tiny distribute by hash (fk_accn_id) to node (node1,node2,node3,node4,node5,node6,node7,node8) as select fk_accn_id,pk_accn_proc_seq_id,fk_sta_id from accn_proc_reduced limit 10000; INSERT 0 10000 Time: 1699.595 ms # create table accn_proc_tiny distribute by hash (fk_accn_id) to node (node1,node2,node3,node4,node5,node6,node7,node8) as select fk_accn_id,pk_accn_proc_seq_id,fk_sta_id from accn_proc_reduced limit 100000; INSERT 0 100000 Time: 16358.605 ms # create table accn_proc_tiny distribute by hash (fk_accn_id) to node (node1,node2,node3,node4,node5,node6,node7,node8) as select fk_accn_id,pk_accn_proc_seq_id,fk_sta_id from accn_proc_reduced limit 1000000; INSERT 0 1000000 Time: 163090.452 ms Assuming I’ve done the math correctly, the 84M records in the source table would take almost 4 hours to create “accn_proc_tiny” whereas a copy to file and reimport takes about 30-40 minutes total. Here’s the table structure: # \d accn_proc_reduced Table "public.accn_proc_reduced" Column | Type | Modifiers --------------------------+-----------------------+----------- pk_accn_proc_seq_id | bigint | fk_proc_id | character varying(40) | fk_fac_id | integer | fk_accn_id | character varying(40) | fk_subm_file_seq_id | integer | fk_sta_id | integer | exp_prc | numeric | bil_prc | numeric | gross_prc | numeric | due_amt | numeric | due_amt_with_bulk | numeric | exp_due_amt | numeric | exp_due_amt_without_bulk | numeric | fk_bill_fac_id | integer | Indexes: "accn_proc_reduced_fk_sta_id" btree (fk_sta_id) From: Koichi Suzuki [mailto:koi...@gm...] Sent: Thursday, June 20, 2013 1:44 AM To: Matt Warner Cc: Pos...@li... Subject: Re: [Postgres-xc-general] Problem With Memory Handling When Creating Table Subset Sorry for the late reply. I suspect this is caused by XC bug. It's helpful if you provide the original table structure with small amount of data to reproduce the problem. I hope the issue can be analyzed or reproduced with quite small amount of data. Best Regards; ---------- Koichi Suzuki 2013/6/12 Matt Warner <MW...@xi...<mailto:MW...@xi...>> For comparison, it only takes about 15 minutes to export just the reduced number of columns for the 56M rows to a file and then another 15 minutes to import that into a new, distributed table. Matt From: Matt Warner Sent: Tuesday, June 11, 2013 4:12 PM To: Pos...@li...<mailto:Pos...@li...> Subject: Problem With Memory Handling When Creating Table Subset I’m using the pre-release version of XC and noticed that with very large tables the coordinator is consuming a huge amount of memory. Specifically: • I have a table distributed by hash across 8 nodes. The initial load had no issues (using ‘copy table from file’ syntax). • The table has 56M total records in it. What’s failing is the statement below, which selects a subset of the columns in the original table and creates a new distributed table. 
Note that I’m using the same hash key that was used to originally distribute the data. That is, the data isn’t really being redistributed, but (conceptually) just copied within the node to a different table with fewer columns: create table accn_proc_reduced distribute by hash(fk_accn_id) to node (node1, node2, node3, node4, node5, node6, node7,node8) as select pk_accn_proc_seq_id,fk_proc_id,fk_fac_id,fk_accn_id,fk_subm_file_seq_id,fk_sta_id,exp_prc,bil_prc,gross_prc,due_amt,due_amt_with_bulk,exp_due_amt,exp_due_amt_without_bulk,fk_bill_fac_id from accn_proc; Version of XC: psql (PGXC 1.1devel, based on PG 9.2beta2) When I say that the coordinator is consuming a huge amount of RAM, I mean that a system running these 8 nodes, the GTM, and the coordinator has 64GB RAM, and that the coordinator’s consumption keeps growing until the system runs out of RAM+swap space and finally generates an “ERROR: out of memory” message, and then dumps a core file. The process of running the system out of RAM while trying to create this new table takes about 6 hours, much longer than the initial load of the data. The only changes to the default postgresql.conf file are to set shared_buffers=1GB and effective_cache_size=1GB (8 nodes, 64GB RAM + swap in the system). Assuming this is not expected behavior, what debug data can I provide to troubleshoot? Matt ------------------------------------------------------------------------------ This SF.net email is sponsored by Windows: Build for Windows Store. http://p.sf.net/sfu/windows-dev2dev _______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
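A rough sketch of the copy-out/copy-in workaround compared against CTAS in this thread, for the three-column case above. This is an illustration under assumptions, not a command sequence from the thread: the file path is hypothetical, the column types are taken from the \d accn_proc_reduced output, and COPY (SELECT ...) TO is assumed to be available on this build.

-- Export only the needed columns to a file on the coordinator host
-- (server-side COPY needs file access there; psql's \copy is an alternative).
COPY (SELECT fk_accn_id, pk_accn_proc_seq_id, fk_sta_id FROM accn_proc_reduced)
    TO '/tmp/accn_proc_tiny.csv' WITH (FORMAT csv);

-- Create the target table with the same distribution key and node list.
CREATE TABLE accn_proc_tiny (
    fk_accn_id          character varying(40),
    pk_accn_proc_seq_id bigint,
    fk_sta_id           integer
) DISTRIBUTE BY HASH (fk_accn_id)
  TO NODE (node1, node2, node3, node4, node5, node6, node7, node8);

-- Reload; this is the path the thread reports completing in roughly 30-40 minutes.
COPY accn_proc_tiny FROM '/tmp/accn_proc_tiny.csv' WITH (FORMAT csv);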
From: Koichi S. <koi...@gm...> - 2013-06-20 08:44:14
Sorry for the late reply. I suspect this is caused by XC bug. It's helpful if you provide the original table structure with small amount of data to reproduce the problem. I hope the issue can be analyzed or reproduced with quite small amount of data. Best Regards; ---------- Koichi Suzuki 2013/6/12 Matt Warner <MW...@xi...> > For comparison, it only takes about 15 minutes to export just the reduced > number of columns for the 56M rows to a file and then another 15 minutes to > import that into a new, distributed table.**** > > ** ** > > Matt**** > > ** ** > > *From:* Matt Warner > *Sent:* Tuesday, June 11, 2013 4:12 PM > *To:* Pos...@li... > *Subject:* Problem With Memory Handling When Creating Table Subset**** > > ** ** > > I’m using the pre-release version of XC and noticed that with very large > tables the coordinator is consuming a huge amount of memory.**** > > ** ** > > Specifically:**** > > **· **I have a table distributed by hash across 8 nodes. The > initial load had no issues (using ‘copy table from file’ syntax).**** > > **· **The table has 56M total records in it.**** > > ** ** > > What’s failing is the statement below, which selects a subset of the > columns in the original table and creates a new distributed table. Note > that I’m using the same hash key that was used to originally distribute the > data. That is, the data isn’t really being redistributed, but > (conceptually) just copied within the node to a different table with fewer > columns:**** > > ** ** > > create table accn_proc_reduced distribute by hash(fk_accn_id) to node > (node1, node2, node3, node4, node5, node6, node7,node8) as select > pk_accn_proc_seq_id,fk_proc_id,fk_fac_id,fk_accn_id,fk_subm_file_seq_id,fk_sta_id,exp_prc,bil_prc,gross_prc,due_amt,due_amt_with_bulk,exp_due_amt,exp_due_amt_without_bulk,fk_bill_fac_id > from accn_proc;**** > > ** ** > > ** ** > > Version of XC:**** > > psql (PGXC 1.1devel, based on PG 9.2beta2)**** > > ** ** > > When I say that the coordinator is consuming a huge amount of RAM, I mean > that a system running these 8 nodes, the GTM, and the coordinator has 64GB > RAM, and that the coordinator’s consumption keeps growing until the system > runs out of RAM+swap space and finally generates an “ERROR: out of memory” > message, and then dumps a core file. *The process of running the system > out of RAM while trying to create this new table takes about 6 hours, much > longer than the initial load of the data.***** > > ** ** > > The only changes to the default postgresql.conf file are to set > shared_buffers=1GB and effective_cache_size=1GB (8 nodes, 64GB RAM + swap > in the system).**** > > ** ** > > Assuming this is not expected behavior, what debug data can I provide to > troubleshoot?**** > > ** ** > > Matt**** > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
From: Abbas B. <abb...@en...> - 2013-06-13 14:31:08
I see that you got the same value for three successive calls to nextval, and the PK constraint also did not complain while inserting. I do not remember if that was a known bug in 1.0, but can you please try it on a later version of XC, 1.1 if not master. On Thu, Jun 13, 2013 at 4:39 PM, Afonso Bione <aag...@gm...> wrote: > Hi, > > I miss to send this! > > psql (PGXC 1.0.3, based on PG 9.1.9) > Type "help" for help. > > moodle=# INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), > 50,'1',0,1); > INSERT 0 1 > moodle=# INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), > 50,'1',0,1); > INSERT 0 1 > moodle=# INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), > 50,'1',0,1); > INSERT 0 1 > moodle=# select * from mdl_context; > > id | contextlevel | instanceid | path | depth > ----+--------------+------------+------+------- > 1 | 50 | 1 | 0 | 1 > 1 | 50 | 1 | 0 | 1 > 1 | 50 | 1 | 0 | 1 > > thanks > > > > On Thu, Jun 13, 2013 at 7:37 AM, Afonso Bione <aag...@gm...> wrote: > >> Hi, Abbas, >> >> First I'd the thank you for your prompt answer, >> 1 - I tried to install direct from moodle.org (moodle25+) >> the first column id is missing because its use a sequence to get >> the next value. >> I installed the same version (moodle25+) with postgres91 and >> everything goes well. >> >> 2 - when I tried to install in the same way, in postgres-xc I get the >> error message tha t i send to you >> I will try your suggestions >> and try it again, >> >> When I finished I will share our experience >> Best Regards >> Afonso Bione >> >> >> On Thu, Jun 13, 2013 at 3:54 AM, Abbas Butt <abb...@en...>wrote: >> >>> >>> Hi, >>> There are a few things you need to look at. >>> >>> 1. The definition of the table mdl_context has 5 columns, whereas the >>> INSERT is only mentioning 4. The first column i.e. id is missed and was not >>> given any default value either. For me even the first attempt to run the >>> insert fails, let alone the duplicate key error, which would be expected on >>> second attempt. >>> >>> 2. I am supposing that you were able to create moodle database in XC and >>> when you ran moodle GUI and tried to create something it failed, Is this >>> true? If yes what configuration of cluster are you using? If not from where >>> is this insert coming from? >>> >>> 3. I tried a modified test case similar to what you mentioned on current >>> master and found it works fine. 
>>> >>> CREATE TABLE mdl_context (id bigint NOT NULL,contextlevel bigint DEFAULT >>> 0 NOT NULL,instanceid bigint DEFAULT 0 NOT NULL,path character >>> varying(255),depth smallint DEFAULT 0 NOT NULL); >>> >>> ALTER TABLE public.mdl_context OWNER TO edb; >>> ALTER TABLE ONLY mdl_context ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY >>> (id); >>> >>> CREATE SEQUENCE mdl_context_id_seq START WITH 1 INCREMENT BY 1 NO >>> MINVALUE NO MAXVALUE CACHE 1; >>> >>> ALTER TABLE public.mdl_context_id_seq OWNER TO edb; >>> ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; >>> >>> INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), >>> 50,'1',0,NULL); >>> INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), >>> 50,'1',0,NULL); >>> INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), >>> 50,'1',0,NULL); >>> INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), >>> 50,'1',0,NULL); >>> select * from mdl_context; >>> id | contextlevel | instanceid | path | depth >>> ----+--------------+------------+------+------- >>> 3 | 50 | 1 | | 0 >>> 4 | 50 | 1 | | 0 >>> 1 | 50 | 1 | | 0 >>> 2 | 50 | 1 | | 0 >>> (4 rows) >>> >>> If you happen to run moodle successfully , please do share your >>> experiences. >>> >>> Regards >>> >>> On Wed, Jun 12, 2013 at 3:46 PM, Afonso Bione <aag...@gm...>wrote: >>> >>>> Please, i'm very new in postgres-xc and I need help with >>>> instlal moodle 2.5 with postgres-xc >>>> >>>> here is the error message >>>> >>>> Debug info: ERROR: duplicate key value violates unique constraint >>>> "mdl_cont_id_pk" >>>> DETAIL: Key (id)=(1) already exists. >>>> INSERT INTO mdl_context (contextlevel,instanceid,depth,path) >>>> VALUES($1,$2,$3,$4) RETURNING id >>>> [array ( >>>> 'contextlevel' => 50, >>>> 'instanceid' => '1', >>>> 'depth' => 0, >>>> 'path' => NULL, >>>> )] >>>> >>>> --------------------------------------------- >>>> and this are the tables that the system shows >>>> >>>> >>>> -- >>>> -- Name: mdl_context; Type: TABLE; Schema: public; Owner: postgres; >>>> Tablespace: >>>> -- >>>> >>>> CREATE TABLE mdl_context ( >>>> id bigint NOT NULL, >>>> contextlevel bigint DEFAULT 0 NOT NULL, >>>> instanceid bigint DEFAULT 0 NOT NULL, >>>> path character varying(255), >>>> depth smallint DEFAULT 0 NOT NULL >>>> ); >>>> >>>> >>>> ALTER TABLE public.mdl_context OWNER TO postgres; >>>> >>>> -- >>>> -- Name: mdl_cont_id_pk; Type: CONSTRAINT; Schema: public; Owner: >>>> postgres; Tablespace: >>>> -- >>>> >>>> ALTER TABLE ONLY mdl_context >>>> ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY (id); >>>> >>>> >>>> >>>> -- >>>> -- Name: TABLE mdl_context; Type: COMMENT; Schema: public; Owner: >>>> postgres >>>> -- >>>> >>>> COMMENT ON TABLE mdl_context IS 'one of these must be set'; >>>> - >>>> -- Name: mdl_context_id_seq; Type: SEQUENCE; Schema: public; Owner: >>>> postgres >>>> -- >>>> >>>> CREATE SEQUENCE mdl_context_id_seq >>>> START WITH 1 >>>> INCREMENT BY 1 >>>> NO MINVALUE >>>> NO MAXVALUE >>>> CACHE 1; >>>> >>>> >>>> ALTER TABLE public.mdl_context_id_seq OWNER TO postgres; >>>> >>>> -- >>>> -- Name: mdl_context_id_seq; Type: SEQUENCE OWNED BY; Schema: public; >>>> Owner: postgres >>>> -- >>>> >>>> ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; >>>> >>>> >>>> -- >>>> -- Name: mdl_context_temp; Type: TABLE; Schema: public; Owner: >>>> postgres; Tablespace: >>>> -- >>>> >>>> CREATE TABLE mdl_context_temp ( >>>> id bigint NOT NULL, >>>> path character varying(255) DEFAULT ''::character varying NOT NULL, >>>> depth smallint NOT NULL >>>> ); >>>> >>>> >>>> 
ALTER TABLE public.mdl_context_temp OWNER TO postgres; >>>> >>>> -- >>>> -- Name: TABLE mdl_context_temp; Type: COMMENT; Schema: public; Owner: >>>> postgres >>>> -- >>>> COMMENT ON TABLE mdl_context_temp IS 'Used by build_context_path() in >>>> upgrade and cron to keep context depths and paths in sync.'; >>>> >>>> >>>> Best Regards >>>> Afonso Bione >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> This SF.net email is sponsored by Windows: >>>> >>>> Build for Windows Store. >>>> >>>> http://p.sf.net/sfu/windows-dev2dev >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>>> >>> >>> >>> -- >>> -- >>> *Abbas* >>> Architect >>> >>> Ph: 92.334.5100153 >>> Skype ID: gabbasb >>> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >>> * >>> Follow us on Twitter* >>> @EnterpriseDB >>> >>> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >>> >> >> > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
From: Afonso B. <aag...@gm...> - 2013-06-13 11:39:57
Hi, I miss to send this! psql (PGXC 1.0.3, based on PG 9.1.9) Type "help" for help. moodle=# INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), 50,'1',0,1); INSERT 0 1 moodle=# INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), 50,'1',0,1); INSERT 0 1 moodle=# INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), 50,'1',0,1); INSERT 0 1 moodle=# select * from mdl_context; id | contextlevel | instanceid | path | depth ----+--------------+------------+------+------- 1 | 50 | 1 | 0 | 1 1 | 50 | 1 | 0 | 1 1 | 50 | 1 | 0 | 1 thanks On Thu, Jun 13, 2013 at 7:37 AM, Afonso Bione <aag...@gm...> wrote: > Hi, Abbas, > > First I'd the thank you for your prompt answer, > 1 - I tried to install direct from moodle.org (moodle25+) > the first column id is missing because its use a sequence to get > the next value. > I installed the same version (moodle25+) with postgres91 and > everything goes well. > > 2 - when I tried to install in the same way, in postgres-xc I get the > error message tha t i send to you > I will try your suggestions > and try it again, > > When I finished I will share our experience > Best Regards > Afonso Bione > > > On Thu, Jun 13, 2013 at 3:54 AM, Abbas Butt <abb...@en...>wrote: > >> >> Hi, >> There are a few things you need to look at. >> >> 1. The definition of the table mdl_context has 5 columns, whereas the >> INSERT is only mentioning 4. The first column i.e. id is missed and was not >> given any default value either. For me even the first attempt to run the >> insert fails, let alone the duplicate key error, which would be expected on >> second attempt. >> >> 2. I am supposing that you were able to create moodle database in XC and >> when you ran moodle GUI and tried to create something it failed, Is this >> true? If yes what configuration of cluster are you using? If not from where >> is this insert coming from? >> >> 3. I tried a modified test case similar to what you mentioned on current >> master and found it works fine. >> >> CREATE TABLE mdl_context (id bigint NOT NULL,contextlevel bigint DEFAULT >> 0 NOT NULL,instanceid bigint DEFAULT 0 NOT NULL,path character >> varying(255),depth smallint DEFAULT 0 NOT NULL); >> >> ALTER TABLE public.mdl_context OWNER TO edb; >> ALTER TABLE ONLY mdl_context ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY >> (id); >> >> CREATE SEQUENCE mdl_context_id_seq START WITH 1 INCREMENT BY 1 NO >> MINVALUE NO MAXVALUE CACHE 1; >> >> ALTER TABLE public.mdl_context_id_seq OWNER TO edb; >> ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; >> >> INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), >> 50,'1',0,NULL); >> INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), >> 50,'1',0,NULL); >> INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), >> 50,'1',0,NULL); >> INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), >> 50,'1',0,NULL); >> select * from mdl_context; >> id | contextlevel | instanceid | path | depth >> ----+--------------+------------+------+------- >> 3 | 50 | 1 | | 0 >> 4 | 50 | 1 | | 0 >> 1 | 50 | 1 | | 0 >> 2 | 50 | 1 | | 0 >> (4 rows) >> >> If you happen to run moodle successfully , please do share your >> experiences. 
>> >> Regards >> >> On Wed, Jun 12, 2013 at 3:46 PM, Afonso Bione <aag...@gm...> wrote: >> >>> Please, i'm very new in postgres-xc and I need help with >>> instlal moodle 2.5 with postgres-xc >>> >>> here is the error message >>> >>> Debug info: ERROR: duplicate key value violates unique constraint >>> "mdl_cont_id_pk" >>> DETAIL: Key (id)=(1) already exists. >>> INSERT INTO mdl_context (contextlevel,instanceid,depth,path) >>> VALUES($1,$2,$3,$4) RETURNING id >>> [array ( >>> 'contextlevel' => 50, >>> 'instanceid' => '1', >>> 'depth' => 0, >>> 'path' => NULL, >>> )] >>> >>> --------------------------------------------- >>> and this are the tables that the system shows >>> >>> >>> -- >>> -- Name: mdl_context; Type: TABLE; Schema: public; Owner: postgres; >>> Tablespace: >>> -- >>> >>> CREATE TABLE mdl_context ( >>> id bigint NOT NULL, >>> contextlevel bigint DEFAULT 0 NOT NULL, >>> instanceid bigint DEFAULT 0 NOT NULL, >>> path character varying(255), >>> depth smallint DEFAULT 0 NOT NULL >>> ); >>> >>> >>> ALTER TABLE public.mdl_context OWNER TO postgres; >>> >>> -- >>> -- Name: mdl_cont_id_pk; Type: CONSTRAINT; Schema: public; Owner: >>> postgres; Tablespace: >>> -- >>> >>> ALTER TABLE ONLY mdl_context >>> ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY (id); >>> >>> >>> >>> -- >>> -- Name: TABLE mdl_context; Type: COMMENT; Schema: public; Owner: >>> postgres >>> -- >>> >>> COMMENT ON TABLE mdl_context IS 'one of these must be set'; >>> - >>> -- Name: mdl_context_id_seq; Type: SEQUENCE; Schema: public; Owner: >>> postgres >>> -- >>> >>> CREATE SEQUENCE mdl_context_id_seq >>> START WITH 1 >>> INCREMENT BY 1 >>> NO MINVALUE >>> NO MAXVALUE >>> CACHE 1; >>> >>> >>> ALTER TABLE public.mdl_context_id_seq OWNER TO postgres; >>> >>> -- >>> -- Name: mdl_context_id_seq; Type: SEQUENCE OWNED BY; Schema: public; >>> Owner: postgres >>> -- >>> >>> ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; >>> >>> >>> -- >>> -- Name: mdl_context_temp; Type: TABLE; Schema: public; Owner: postgres; >>> Tablespace: >>> -- >>> >>> CREATE TABLE mdl_context_temp ( >>> id bigint NOT NULL, >>> path character varying(255) DEFAULT ''::character varying NOT NULL, >>> depth smallint NOT NULL >>> ); >>> >>> >>> ALTER TABLE public.mdl_context_temp OWNER TO postgres; >>> >>> -- >>> -- Name: TABLE mdl_context_temp; Type: COMMENT; Schema: public; Owner: >>> postgres >>> -- >>> COMMENT ON TABLE mdl_context_temp IS 'Used by build_context_path() in >>> upgrade and cron to keep context depths and paths in sync.'; >>> >>> >>> Best Regards >>> Afonso Bione >>> >>> >>> ------------------------------------------------------------------------------ >>> This SF.net email is sponsored by Windows: >>> >>> Build for Windows Store. >>> >>> http://p.sf.net/sfu/windows-dev2dev >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >> >> >> -- >> -- >> *Abbas* >> Architect >> >> Ph: 92.334.5100153 >> Skype ID: gabbasb >> www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> >> * >> Follow us on Twitter* >> @EnterpriseDB >> >> Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> >> > > |
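A minimal diagnostic, not from the thread itself: calling nextval() directly on the same coordinator separates a misbehaving sequence (a GTM-side issue) from a problem in the INSERT path. The sequence name is the one defined earlier in the thread.

-- Each call should return a new, distinct value; if these repeat as well,
-- the sequence handling itself is at fault rather than the INSERT.
SELECT nextval('mdl_context_id_seq');
SELECT nextval('mdl_context_id_seq');
SELECT nextval('mdl_context_id_seq');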
From: Afonso B. <aag...@gm...> - 2013-06-13 10:37:32
Hi, Abbas, First I'd the thank you for your prompt answer, 1 - I tried to install direct from moodle.org (moodle25+) the first column id is missing because its use a sequence to get the next value. I installed the same version (moodle25+) with postgres91 and everything goes well. 2 - when I tried to install in the same way, in postgres-xc I get the error message tha t i send to you I will try your suggestions and try it again, When I finished I will share our experience Best Regards Afonso Bione On Thu, Jun 13, 2013 at 3:54 AM, Abbas Butt <abb...@en...>wrote: > > Hi, > There are a few things you need to look at. > > 1. The definition of the table mdl_context has 5 columns, whereas the > INSERT is only mentioning 4. The first column i.e. id is missed and was not > given any default value either. For me even the first attempt to run the > insert fails, let alone the duplicate key error, which would be expected on > second attempt. > > 2. I am supposing that you were able to create moodle database in XC and > when you ran moodle GUI and tried to create something it failed, Is this > true? If yes what configuration of cluster are you using? If not from where > is this insert coming from? > > 3. I tried a modified test case similar to what you mentioned on current > master and found it works fine. > > CREATE TABLE mdl_context (id bigint NOT NULL,contextlevel bigint DEFAULT 0 > NOT NULL,instanceid bigint DEFAULT 0 NOT NULL,path character > varying(255),depth smallint DEFAULT 0 NOT NULL); > > ALTER TABLE public.mdl_context OWNER TO edb; > ALTER TABLE ONLY mdl_context ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY > (id); > > CREATE SEQUENCE mdl_context_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE > NO MAXVALUE CACHE 1; > > ALTER TABLE public.mdl_context_id_seq OWNER TO edb; > ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; > > INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), > 50,'1',0,NULL); > INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), > 50,'1',0,NULL); > INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), > 50,'1',0,NULL); > INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), > 50,'1',0,NULL); > select * from mdl_context; > id | contextlevel | instanceid | path | depth > ----+--------------+------------+------+------- > 3 | 50 | 1 | | 0 > 4 | 50 | 1 | | 0 > 1 | 50 | 1 | | 0 > 2 | 50 | 1 | | 0 > (4 rows) > > If you happen to run moodle successfully , please do share your > experiences. > > Regards > > On Wed, Jun 12, 2013 at 3:46 PM, Afonso Bione <aag...@gm...> wrote: > >> Please, i'm very new in postgres-xc and I need help with >> instlal moodle 2.5 with postgres-xc >> >> here is the error message >> >> Debug info: ERROR: duplicate key value violates unique constraint >> "mdl_cont_id_pk" >> DETAIL: Key (id)=(1) already exists. 
>> INSERT INTO mdl_context (contextlevel,instanceid,depth,path) >> VALUES($1,$2,$3,$4) RETURNING id >> [array ( >> 'contextlevel' => 50, >> 'instanceid' => '1', >> 'depth' => 0, >> 'path' => NULL, >> )] >> >> --------------------------------------------- >> and this are the tables that the system shows >> >> >> -- >> -- Name: mdl_context; Type: TABLE; Schema: public; Owner: postgres; >> Tablespace: >> -- >> >> CREATE TABLE mdl_context ( >> id bigint NOT NULL, >> contextlevel bigint DEFAULT 0 NOT NULL, >> instanceid bigint DEFAULT 0 NOT NULL, >> path character varying(255), >> depth smallint DEFAULT 0 NOT NULL >> ); >> >> >> ALTER TABLE public.mdl_context OWNER TO postgres; >> >> -- >> -- Name: mdl_cont_id_pk; Type: CONSTRAINT; Schema: public; Owner: >> postgres; Tablespace: >> -- >> >> ALTER TABLE ONLY mdl_context >> ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY (id); >> >> >> >> -- >> -- Name: TABLE mdl_context; Type: COMMENT; Schema: public; Owner: postgres >> -- >> >> COMMENT ON TABLE mdl_context IS 'one of these must be set'; >> - >> -- Name: mdl_context_id_seq; Type: SEQUENCE; Schema: public; Owner: >> postgres >> -- >> >> CREATE SEQUENCE mdl_context_id_seq >> START WITH 1 >> INCREMENT BY 1 >> NO MINVALUE >> NO MAXVALUE >> CACHE 1; >> >> >> ALTER TABLE public.mdl_context_id_seq OWNER TO postgres; >> >> -- >> -- Name: mdl_context_id_seq; Type: SEQUENCE OWNED BY; Schema: public; >> Owner: postgres >> -- >> >> ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; >> >> >> -- >> -- Name: mdl_context_temp; Type: TABLE; Schema: public; Owner: postgres; >> Tablespace: >> -- >> >> CREATE TABLE mdl_context_temp ( >> id bigint NOT NULL, >> path character varying(255) DEFAULT ''::character varying NOT NULL, >> depth smallint NOT NULL >> ); >> >> >> ALTER TABLE public.mdl_context_temp OWNER TO postgres; >> >> -- >> -- Name: TABLE mdl_context_temp; Type: COMMENT; Schema: public; Owner: >> postgres >> -- >> COMMENT ON TABLE mdl_context_temp IS 'Used by build_context_path() in >> upgrade and cron to keep context depths and paths in sync.'; >> >> >> Best Regards >> Afonso Bione >> >> >> ------------------------------------------------------------------------------ >> This SF.net email is sponsored by Windows: >> >> Build for Windows Store. >> >> http://p.sf.net/sfu/windows-dev2dev >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > -- > *Abbas* > Architect > > Ph: 92.334.5100153 > Skype ID: gabbasb > www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> > * > Follow us on Twitter* > @EnterpriseDB > > Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> > |
From: Abbas B. <abb...@en...> - 2013-06-13 06:54:30
Hi, There are a few things you need to look at. 1. The definition of the table mdl_context has 5 columns, whereas the INSERT is only mentioning 4. The first column i.e. id is missed and was not given any default value either. For me even the first attempt to run the insert fails, let alone the duplicate key error, which would be expected on second attempt. 2. I am supposing that you were able to create moodle database in XC and when you ran moodle GUI and tried to create something it failed, Is this true? If yes what configuration of cluster are you using? If not from where is this insert coming from? 3. I tried a modified test case similar to what you mentioned on current master and found it works fine. CREATE TABLE mdl_context (id bigint NOT NULL,contextlevel bigint DEFAULT 0 NOT NULL,instanceid bigint DEFAULT 0 NOT NULL,path character varying(255),depth smallint DEFAULT 0 NOT NULL); ALTER TABLE public.mdl_context OWNER TO edb; ALTER TABLE ONLY mdl_context ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY (id); CREATE SEQUENCE mdl_context_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1; ALTER TABLE public.mdl_context_id_seq OWNER TO edb; ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), 50,'1',0,NULL); INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), 50,'1',0,NULL); INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), 50,'1',0,NULL); INSERT INTO mdl_context VALUES(nextval('mdl_context_id_seq'), 50,'1',0,NULL); select * from mdl_context; id | contextlevel | instanceid | path | depth ----+--------------+------------+------+------- 3 | 50 | 1 | | 0 4 | 50 | 1 | | 0 1 | 50 | 1 | | 0 2 | 50 | 1 | | 0 (4 rows) If you happen to run moodle successfully , please do share your experiences. Regards On Wed, Jun 12, 2013 at 3:46 PM, Afonso Bione <aag...@gm...> wrote: > Please, i'm very new in postgres-xc and I need help with > instlal moodle 2.5 with postgres-xc > > here is the error message > > Debug info: ERROR: duplicate key value violates unique constraint > "mdl_cont_id_pk" > DETAIL: Key (id)=(1) already exists. 
> INSERT INTO mdl_context (contextlevel,instanceid,depth,path) > VALUES($1,$2,$3,$4) RETURNING id > [array ( > 'contextlevel' => 50, > 'instanceid' => '1', > 'depth' => 0, > 'path' => NULL, > )] > > --------------------------------------------- > and this are the tables that the system shows > > > -- > -- Name: mdl_context; Type: TABLE; Schema: public; Owner: postgres; > Tablespace: > -- > > CREATE TABLE mdl_context ( > id bigint NOT NULL, > contextlevel bigint DEFAULT 0 NOT NULL, > instanceid bigint DEFAULT 0 NOT NULL, > path character varying(255), > depth smallint DEFAULT 0 NOT NULL > ); > > > ALTER TABLE public.mdl_context OWNER TO postgres; > > -- > -- Name: mdl_cont_id_pk; Type: CONSTRAINT; Schema: public; Owner: > postgres; Tablespace: > -- > > ALTER TABLE ONLY mdl_context > ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY (id); > > > > -- > -- Name: TABLE mdl_context; Type: COMMENT; Schema: public; Owner: postgres > -- > > COMMENT ON TABLE mdl_context IS 'one of these must be set'; > - > -- Name: mdl_context_id_seq; Type: SEQUENCE; Schema: public; Owner: > postgres > -- > > CREATE SEQUENCE mdl_context_id_seq > START WITH 1 > INCREMENT BY 1 > NO MINVALUE > NO MAXVALUE > CACHE 1; > > > ALTER TABLE public.mdl_context_id_seq OWNER TO postgres; > > -- > -- Name: mdl_context_id_seq; Type: SEQUENCE OWNED BY; Schema: public; > Owner: postgres > -- > > ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; > > > -- > -- Name: mdl_context_temp; Type: TABLE; Schema: public; Owner: postgres; > Tablespace: > -- > > CREATE TABLE mdl_context_temp ( > id bigint NOT NULL, > path character varying(255) DEFAULT ''::character varying NOT NULL, > depth smallint NOT NULL > ); > > > ALTER TABLE public.mdl_context_temp OWNER TO postgres; > > -- > -- Name: TABLE mdl_context_temp; Type: COMMENT; Schema: public; Owner: > postgres > -- > COMMENT ON TABLE mdl_context_temp IS 'Used by build_context_path() in > upgrade and cron to keep context depths and paths in sync.'; > > > Best Regards > Afonso Bione > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Windows: > > Build for Windows Store. > > http://p.sf.net/sfu/windows-dev2dev > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- -- *Abbas* Architect Ph: 92.334.5100153 Skype ID: gabbasb www.enterprisedb.co <http://www.enterprisedb.com/>m<http://www.enterprisedb.com/> * Follow us on Twitter* @EnterpriseDB Visit EnterpriseDB for tutorials, webinars, whitepapers<http://www.enterprisedb.com/resources-community>and more<http://www.enterprisedb.com/resources-community> |
From: Afonso B. <aag...@gm...> - 2013-06-12 10:46:40
Please, i'm very new in postgres-xc and I need help with instlal moodle 2.5 with postgres-xc here is the error message Debug info: ERROR: duplicate key value violates unique constraint "mdl_cont_id_pk" DETAIL: Key (id)=(1) already exists. INSERT INTO mdl_context (contextlevel,instanceid,depth,path) VALUES($1,$2,$3,$4) RETURNING id [array ( 'contextlevel' => 50, 'instanceid' => '1', 'depth' => 0, 'path' => NULL, )] --------------------------------------------- and this are the tables that the system shows -- -- Name: mdl_context; Type: TABLE; Schema: public; Owner: postgres; Tablespace: -- CREATE TABLE mdl_context ( id bigint NOT NULL, contextlevel bigint DEFAULT 0 NOT NULL, instanceid bigint DEFAULT 0 NOT NULL, path character varying(255), depth smallint DEFAULT 0 NOT NULL ); ALTER TABLE public.mdl_context OWNER TO postgres; -- -- Name: mdl_cont_id_pk; Type: CONSTRAINT; Schema: public; Owner: postgres; Tablespace: -- ALTER TABLE ONLY mdl_context ADD CONSTRAINT mdl_cont_id_pk PRIMARY KEY (id); -- -- Name: TABLE mdl_context; Type: COMMENT; Schema: public; Owner: postgres -- COMMENT ON TABLE mdl_context IS 'one of these must be set'; - -- Name: mdl_context_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres -- CREATE SEQUENCE mdl_context_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1; ALTER TABLE public.mdl_context_id_seq OWNER TO postgres; -- -- Name: mdl_context_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres -- ALTER SEQUENCE mdl_context_id_seq OWNED BY mdl_context.id; -- -- Name: mdl_context_temp; Type: TABLE; Schema: public; Owner: postgres; Tablespace: -- CREATE TABLE mdl_context_temp ( id bigint NOT NULL, path character varying(255) DEFAULT ''::character varying NOT NULL, depth smallint NOT NULL ); ALTER TABLE public.mdl_context_temp OWNER TO postgres; -- -- Name: TABLE mdl_context_temp; Type: COMMENT; Schema: public; Owner: postgres -- COMMENT ON TABLE mdl_context_temp IS 'Used by build_context_path() in upgrade and cron to keep context depths and paths in sync.'; Best Regards Afonso Bione |
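As Abbas notes in his reply above, the dumped schema gives the id column no default, while Moodle's generated INSERT omits id and relies on RETURNING id. A hedged option, an assumption rather than something proposed in the thread, is to attach the existing sequence as the column default; if the default is actually present and nextval() itself repeats values, as Afonso's follow-up above suggests, this will not help.

-- Assumption-based sketch: let omitted id values be drawn from the sequence.
ALTER TABLE mdl_context
    ALTER COLUMN id SET DEFAULT nextval('mdl_context_id_seq');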
From: Matt W. <MW...@XI...> - 2013-06-11 23:52:54
For comparison, it only takes about 15 minutes to export just the reduced number of columns for the 56M rows to a file and then another 15 minutes to import that into a new, distributed table. Matt From: Matt Warner Sent: Tuesday, June 11, 2013 4:12 PM To: Pos...@li... Subject: Problem With Memory Handling When Creating Table Subset I'm using the pre-release version of XC and noticed that with very large tables the coordinator is consuming a huge amount of memory. Specifically: * I have a table distributed by hash across 8 nodes. The initial load had no issues (using 'copy table from file' syntax). * The table has 56M total records in it. What's failing is the statement below, which selects a subset of the columns in the original table and creates a new distributed table. Note that I'm using the same hash key that was used to originally distribute the data. That is, the data isn't really being redistributed, but (conceptually) just copied within the node to a different table with fewer columns: create table accn_proc_reduced distribute by hash(fk_accn_id) to node (node1, node2, node3, node4, node5, node6, node7,node8) as select pk_accn_proc_seq_id,fk_proc_id,fk_fac_id,fk_accn_id,fk_subm_file_seq_id,fk_sta_id,exp_prc,bil_prc,gross_prc,due_amt,due_amt_with_bulk,exp_due_amt,exp_due_amt_without_bulk,fk_bill_fac_id from accn_proc; Version of XC: psql (PGXC 1.1devel, based on PG 9.2beta2) When I say that the coordinator is consuming a huge amount of RAM, I mean that a system running these 8 nodes, the GTM, and the coordinator has 64GB RAM, and that the coordinator's consumption keeps growing until the system runs out of RAM+swap space and finally generates an "ERROR: out of memory" message, and then dumps a core file. The process of running the system out of RAM while trying to create this new table takes about 6 hours, much longer than the initial load of the data. The only changes to the default postgresql.conf file are to set shared_buffers=1GB and effective_cache_size=1GB (8 nodes, 64GB RAM + swap in the system). Assuming this is not expected behavior, what debug data can I provide to troubleshoot? Matt |
From: Matt W. <MW...@XI...> - 2013-06-11 23:12:25
I'm using the pre-release version of XC and noticed that with very large tables the coordinator is consuming a huge amount of memory. Specifically: * I have a table distributed by hash across 8 nodes. The initial load had no issues (using 'copy table from file' syntax). * The table has 56M total records in it. What's failing is the statement below, which selects a subset of the columns in the original table and creates a new distributed table. Note that I'm using the same hash key that was used to originally distribute the data. That is, the data isn't really being redistributed, but (conceptually) just copied within the node to a different table with fewer columns: create table accn_proc_reduced distribute by hash(fk_accn_id) to node (node1, node2, node3, node4, node5, node6, node7,node8) as select pk_accn_proc_seq_id,fk_proc_id,fk_fac_id,fk_accn_id,fk_subm_file_seq_id,fk_sta_id,exp_prc,bil_prc,gross_prc,due_amt,due_amt_with_bulk,exp_due_amt,exp_due_amt_without_bulk,fk_bill_fac_id from accn_proc; Version of XC: psql (PGXC 1.1devel, based on PG 9.2beta2) When I say that the coordinator is consuming a huge amount of RAM, I mean that a system running these 8 nodes, the GTM, and the coordinator has 64GB RAM, and that the coordinator's consumption keeps growing until the system runs out of RAM+swap space and finally generates an "ERROR: out of memory" message, and then dumps a core file. The process of running the system out of RAM while trying to create this new table takes about 6 hours, much longer than the initial load of the data. The only changes to the default postgresql.conf file are to set shared_buffers=1GB and effective_cache_size=1GB (8 nodes, 64GB RAM + swap in the system). Assuming this is not expected behavior, what debug data can I provide to troubleshoot? Matt |
From: Matt W. <MW...@XI...> - 2013-06-11 16:21:36
The distribution looks even: postgres=# execute direct on (node1) 'select count(*) from accn'; count --------- 2063261 (1 row) postgres=# execute direct on (node2) 'select count(*) from accn'; count --------- 2062455 (1 row) postgres=# execute direct on (node3) 'select count(*) from accn'; count --------- 2064740 (1 row) postgres=# execute direct on (node4) 'select count(*) from accn'; count --------- 2061484 (1 row) postgres=# execute direct on (node5) 'select count(*) from accn'; count --------- 2062569 (1 row) postgres=# execute direct on (node6) 'select count(*) from accn'; count --------- 2062357 (1 row) postgres=# execute direct on (node7) 'select count(*) from accn'; count --------- 2061114 (1 row) postgres=# execute direct on (node8) 'select count(*) from accn'; count --------- 2065580 (1 row) From: Ashutosh Bapat [mailto:ash...@en...] Sent: Tuesday, June 11, 2013 12:30 AM To: Matt Warner Cc: Pos...@li...<mailto:Pos...@li...> Subject: Re: [Postgres-xc-general] XC Performance with Subquery Hi Matt, If you have 8 nodes node1 to node8, the query is being fired to all those nodes. It might happen that the data distribution across the datanodes is skewed, which might cause different loads on different nodes. You may check that by looking at count(*) output from each node using EXECUTE DIRECT. |
From: Ashutosh B. <ash...@en...> - 2013-06-11 07:30:01
Hi Matt, If you have 8 nodes node1 to node8, the query is being fired to all those nodes. It might happen that the data distribution across the datanodes is skewed, which might cause different loads on different nodes. You may check that by looking at count(*) output from each node using EXECUTE DIRECT. On Mon, Jun 10, 2013 at 10:22 PM, Matt Warner <MW...@xi...> wrote: > My apologies for the delay. Here’s the verbose output:**** > > ** ** > > ** ** > > psql (PGXC 1.1devel, based on PG 9.2beta2)**** > > Type "help" for help.**** > > ** ** > > postgres=# explain verbose select count(*) from accn a1 where exists > (select null from accn_proc a2 where a2.fk_accn_id=a1.pk_accn_id and > a2.fk_sta_id=52);**** > > QUERY > PLAN **** > > > ------------------------------------------------------------------------------------------------------------ > **** > > Aggregate (cost=0.05..0.06 rows=1 width=0)**** > > Output: count(*)**** > > -> Hash Semi Join (cost=0.01..0.05 rows=1 width=0)**** > > Hash Cond: ((a1.pk_accn_id)::text = (a2.fk_accn_id)::text)**** > > -> Data Node Scan on accn "_REMOTE_TABLE_QUERY_" > (cost=0.00..0.00 rows=1000 width=98)**** > > Output: a1.pk_accn_id**** > > Node/s: node1, node2, node3, node4, node5, node6, node7, > node8**** > > Remote query: SELECT pk_accn_id FROM ONLY accn a1 WHERE true > **** > > -> Hash (cost=0.00..0.00 rows=1000 width=98)**** > > Output: a2.fk_accn_id**** > > -> Data Node Scan on accn_proc "_REMOTE_TABLE_QUERY_" > (cost=0.00..0.00 rows=1000 width=98)**** > > Output: a2.fk_accn_id**** > > Node/s: node1, node2, node3, node4, node5, node6, > node7, node8**** > > Remote query: SELECT fk_accn_id FROM ONLY accn_proc > a2 WHERE (fk_sta_id = 52)**** > > (14 rows)**** > > ** ** > > ** ** > > *From:* Ashutosh Bapat [mailto:ash...@en...] > *Sent:* Thursday, June 06, 2013 9:29 PM > > *To:* Matt Warner > *Cc:* Pos...@li... > *Subject:* Re: [Postgres-xc-general] XC Performance with Subquery**** > > ** ** > > It might help, if you can send us the EXPLAIN VERBOSE output of this query. > **** > > ** ** > > On Thu, Jun 6, 2013 at 2:38 AM, Matt Warner <MW...@xi...> wrote:**** > > Just to follow up on this, the pre-1.2 version works much better—the query > actually completes and I do see more CPU resources being used, but > curiously, not all of them.**** > > **** > > Still trying to figure out why that would be, so if anyone has any > suggestions, please let me know.**** > > **** > > Matt**** > > **** > > **** > > *From:* Matt Warner **** > > *Sent:* Wednesday, June 05, 2013 8:52 AM > *To:* 'Ashutosh Bapat' > *Cc:* Pos...@li...**** > > *Subject:* RE: [Postgres-xc-general] XC Performance with Subquery**** > > **** > > I’m using 1.0.3, but will try out pre-1.2.**** > > **** > > BTW, I had to make some minor changes to get 1.0.3 to compile correctly on > Solaris. Is anyone interested in receiving these changes? 
They’re things > such as illegal return statement from void functions (which doesn’t > actually make sense, as far as I know) that the Solaris compiler flags as > errors.**** > > **** > > -bash-4.1$ psql**** > > psql (PGXC 1.0.3, based on PG 9.1.9)**** > > Type "help" for help.**** > > **** > > postgres=# explain select count(*) from accn a1 where exists (select null > from accn_proc a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52);* > *** > > QUERY > PLAN **** > > > ------------------------------------------------------------------------------ > **** > > Aggregate (cost=0.02..0.03 rows=1 width=0)**** > > -> Nested Loop Semi Join (cost=0.00..0.01 rows=1 width=0)**** > > Join Filter: ((a1.pk_accn_id)::text = (a2.fk_accn_id)::text)**** > > -> Data Node Scan on a1 (cost=0.00..0.00 rows=1000 width=98)*** > * > > Node/s: node1, node2, node3, node4, node5, node6, node7, > node8**** > > -> Data Node Scan on a2 (cost=0.00..0.00 rows=1000 width=98)*** > * > > Node/s: node1, node2, node3, node4, node5, node6, node7, > node8**** > > (7 rows)**** > > **** > > create table accn(pk_accn_id character varying(40),**** > > <lots of other column definitions deleted for brevity>)**** > > distribute by hash(pk_accn_id)**** > > to node node1, node2, node3, node4, node5, node6, node7,node8;**** > > **** > > create table accn(pk_accn_id character varying(40),**** > > <lots of other column definitions deleted for brevity>)**** > > distribute by hash(fk_accn_id)**** > > to node node1, node2, node3, node4, node5, node6, node7,node8;**** > > **** > > **** > > *From:* Ashutosh Bapat [mailto:ash...@en...<ash...@en...>] > > *Sent:* Tuesday, June 04, 2013 9:15 PM > *To:* Matt Warner > *Cc:* Pos...@li... > *Subject:* Re: [Postgres-xc-general] XC Performance with Subquery**** > > **** > > Hi Matt,**** > > Which version of XC are you using? There has been a lot of change in the > planner since last release. You may try the latest master HEAD (to be > released as 1.2 in about a month).**** > > It will help if you can provide all the table definitions and EXPLAIN > outputs.**** > > **** > > On Wed, Jun 5, 2013 at 5:40 AM, Matt Warner <MW...@xi...> wrote:**** > > I need to correct item 3, below. The coordinator and only one of the data > nodes goes to work. One by one, each of the data nodes appears to spin up > to process the request and then go back to sleep.**** > > **** > > ?**** > > **** > > *From:* Matt Warner > *Sent:* Tuesday, June 04, 2013 5:00 PM > *To:* 'Pos...@li...' > *Subject:* XC Performance with Subquery**** > > **** > > I’ve been experimenting with XC and see interesting results. I’m hoping > someone can help explain something I’m seeing.**** > > **** > > 1. I created two distributed tables, one with a primary key, one > with a foreign key, and hashed both tables by that key. I’m expecting this > to mean that the data for a given key is localized to a single node.**** > > 2. When I perform a simple “select count(*) from table1” I see all > 8 data nodes consuming CPU (plus the coordinator), which I take to be a > good sign—all nodes are working in parallel.**** > > 3. When I perform a join on the distribution key, I see only the > coordinator go to work instead of all 8 data nodes.**** > > 4. I notice that the explain plan appears similar to page 55 of > this document ( > http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf > ).**** > > 5. 
I have indexes on the distribution keys, but that does not seem > to make any difference.**** > > **** > > How do I get XC to perform the join on the data nodes? To be verbose, I am > expecting to see more CPU resources consumed in this query:**** > > **** > > select count(*) from tablea a1 where exists (select null from tableb a2 > where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52);**** > > **** > > Rewriting this as a simple join does not seem to work any better.**** > > **** > > What am I missing?**** > > **** > > TIA,**** > > **** > > Matt**** > > > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general**** > > > > > -- **** > > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company**** > > > > > -- **** > > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company**** > -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Matt W. <MW...@XI...> - 2013-06-10 16:53:07
My apologies for the delay. Here's the verbose output: psql (PGXC 1.1devel, based on PG 9.2beta2) Type "help" for help. postgres=# explain verbose select count(*) from accn a1 where exists (select null from accn_proc a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); QUERY PLAN ------------------------------------------------------------------------------------------------------------ Aggregate (cost=0.05..0.06 rows=1 width=0) Output: count(*) -> Hash Semi Join (cost=0.01..0.05 rows=1 width=0) Hash Cond: ((a1.pk_accn_id)::text = (a2.fk_accn_id)::text) -> Data Node Scan on accn "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=98) Output: a1.pk_accn_id Node/s: node1, node2, node3, node4, node5, node6, node7, node8 Remote query: SELECT pk_accn_id FROM ONLY accn a1 WHERE true -> Hash (cost=0.00..0.00 rows=1000 width=98) Output: a2.fk_accn_id -> Data Node Scan on accn_proc "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=98) Output: a2.fk_accn_id Node/s: node1, node2, node3, node4, node5, node6, node7, node8 Remote query: SELECT fk_accn_id FROM ONLY accn_proc a2 WHERE (fk_sta_id = 52) (14 rows) From: Ashutosh Bapat [mailto:ash...@en...] Sent: Thursday, June 06, 2013 9:29 PM To: Matt Warner Cc: Pos...@li... Subject: Re: [Postgres-xc-general] XC Performance with Subquery It might help, if you can send us the EXPLAIN VERBOSE output of this query. On Thu, Jun 6, 2013 at 2:38 AM, Matt Warner <MW...@xi...<mailto:MW...@xi...>> wrote: Just to follow up on this, the pre-1.2 version works much better-the query actually completes and I do see more CPU resources being used, but curiously, not all of them. Still trying to figure out why that would be, so if anyone has any suggestions, please let me know. Matt From: Matt Warner Sent: Wednesday, June 05, 2013 8:52 AM To: 'Ashutosh Bapat' Cc: Pos...@li...<mailto:Pos...@li...> Subject: RE: [Postgres-xc-general] XC Performance with Subquery I'm using 1.0.3, but will try out pre-1.2. BTW, I had to make some minor changes to get 1.0.3 to compile correctly on Solaris. Is anyone interested in receiving these changes? They're things such as illegal return statement from void functions (which doesn't actually make sense, as far as I know) that the Solaris compiler flags as errors. -bash-4.1$ psql psql (PGXC 1.0.3, based on PG 9.1.9) Type "help" for help. postgres=# explain select count(*) from accn a1 where exists (select null from accn_proc a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); QUERY PLAN ------------------------------------------------------------------------------ Aggregate (cost=0.02..0.03 rows=1 width=0) -> Nested Loop Semi Join (cost=0.00..0.01 rows=1 width=0) Join Filter: ((a1.pk_accn_id)::text = (a2.fk_accn_id)::text) -> Data Node Scan on a1 (cost=0.00..0.00 rows=1000 width=98) Node/s: node1, node2, node3, node4, node5, node6, node7, node8 -> Data Node Scan on a2 (cost=0.00..0.00 rows=1000 width=98) Node/s: node1, node2, node3, node4, node5, node6, node7, node8 (7 rows) create table accn(pk_accn_id character varying(40), <lots of other column definitions deleted for brevity>) distribute by hash(pk_accn_id) to node node1, node2, node3, node4, node5, node6, node7,node8; create table accn(pk_accn_id character varying(40), <lots of other column definitions deleted for brevity>) distribute by hash(fk_accn_id) to node node1, node2, node3, node4, node5, node6, node7,node8; From: Ashutosh Bapat [mailto:ash...@en...] 
Sent: Tuesday, June 04, 2013 9:15 PM To: Matt Warner Cc: Pos...@li...<mailto:Pos...@li...> Subject: Re: [Postgres-xc-general] XC Performance with Subquery Hi Matt, Which version of XC are you using? There has been a lot of change in the planner since last release. You may try the latest master HEAD (to be released as 1.2 in about a month). It will help if you can provide all the table definitions and EXPLAIN outputs. On Wed, Jun 5, 2013 at 5:40 AM, Matt Warner <MW...@xi...<mailto:MW...@xi...>> wrote: I need to correct item 3, below. The coordinator and only one of the data nodes goes to work. One by one, each of the data nodes appears to spin up to process the request and then go back to sleep. ? From: Matt Warner Sent: Tuesday, June 04, 2013 5:00 PM To: 'Pos...@li...<mailto:Pos...@li...>' Subject: XC Performance with Subquery I've been experimenting with XC and see interesting results. I'm hoping someone can help explain something I'm seeing. 1. I created two distributed tables, one with a primary key, one with a foreign key, and hashed both tables by that key. I'm expecting this to mean that the data for a given key is localized to a single node. 2. When I perform a simple "select count(*) from table1" I see all 8 data nodes consuming CPU (plus the coordinator), which I take to be a good sign-all nodes are working in parallel. 3. When I perform a join on the distribution key, I see only the coordinator go to work instead of all 8 data nodes. 4. I notice that the explain plan appears similar to page 55 of this document (http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf). 5. I have indexes on the distribution keys, but that does not seem to make any difference. How do I get XC to perform the join on the data nodes? To be verbose, I am expecting to see more CPU resources consumed in this query: select count(*) from tablea a1 where exists (select null from tableb a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); Rewriting this as a simple join does not seem to work any better. What am I missing? TIA, Matt ------------------------------------------------------------------------------ How ServiceNow helps IT people transform IT departments: 1. A cloud service to automate IT design, transition and operations 2. Dashboards that offer high-level views of enterprise services 3. A single system of record for all IT processes http://p.sf.net/sfu/servicenow-d2d-j _______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
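The verbose plan above shows both tables being pulled back to the coordinator (Remote query: SELECT pk_accn_id FROM ONLY accn a1 WHERE true) and the semi-join evaluated there. Because accn and accn_proc are hashed on the same accession id and the same node list, matching rows sit on the same datanode, so as a hedged manual cross-check, independent of what the planner does, the count can be computed per node with EXECUTE DIRECT (the same construct used above to check the distribution) and the eight results summed by hand.

-- Per-node check, not a planner fix; repeat for node2 .. node8 and add the
-- eight counts. Assumes EXECUTE DIRECT accepts this SELECT unchanged.
EXECUTE DIRECT ON (node1)
    'SELECT count(*) FROM accn a1
     WHERE EXISTS (SELECT 1 FROM accn_proc a2
                   WHERE a2.fk_accn_id = a1.pk_accn_id
                     AND a2.fk_sta_id = 52)';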
From: Mason S. <ma...@st...> - 2013-06-07 22:51:36
|
Hi Ray, I would not think of Postgres-XC as an HA solution. Think of it as a write scalability solution (and to some extent, simple reads). If you are just looking for HA, you could consider using streaming replication with pgpool-II. In the architecture you suggested, it might be OK for a read-heavy workload, but you will still have to manage HA externally. If, however, you do need the scalability, then please do consider XC and distribute your tables. For HA, manage that externally to Postgres-XC using something like Corosync/Pacemaker. Regards, On Fri, Jun 7, 2013 at 3:35 PM, Ray Stell <st...@vt...> wrote: > I've never touched XC, but I'm considering an HA/PostgreSQL solution. XC > docs indicate it provides the benefit of multi-master writes, which is very > attractive. I would want to have the data replicated to all the datanodes > to guard services against node failure. Is this considered a production-safe > design, or are adopters generally using a partitioning configuration? > I scanned http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf and I'm gathering resources for a trial run. Can this be demonstrated > well enough in three VM nodes: 1. GTM; 2. GTM proxy, coordinator, > datanode; 3. GTM proxy, coordinator, datanode? I get the impression that > many more nodes are really ideal for a complete HA solution. What would be > the suggested base starting point? Maybe there is a cookbook I've not > seen. Thanks, Ray > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- Mason Sharp StormDB - http://www.stormdb.com The Database Cloud Postgres-XC Support and Services
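For readers comparing the two approaches Mason describes, here is a sketch of both table layouts using the 1.0-style DISTRIBUTE BY syntax that appears elsewhere in this archive; the table, column, and node names are placeholders and are not taken from Ray's setup.

-- Replicated: every listed datanode holds a full copy of the table.
-- Reads can be served locally, but every write must reach all nodes.
CREATE TABLE lookup_codes (code int, label text)
    DISTRIBUTE BY REPLICATION
    TO NODE node1, node2, node3;

-- Hash-distributed: rows are spread across the datanodes by the key,
-- which is what gives the write scalability mentioned above.
CREATE TABLE events (event_id bigint, payload text)
    DISTRIBUTE BY HASH (event_id)
    TO NODE node1, node2, node3;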
From: Ray S. <st...@vt...> - 2013-06-07 19:35:37
|
I've never touched XC, but I'm considering an HA/PostgreSQL solution. XC docs indicate it provides the benefit of multi-master writes, which is very attractive. I would want to have the data replicated to all the datanodes to guard services against node failure. Is this considered a production-safe design, or are adopters generally using a partitioning configuration? I scanned http://wiki.postgresql.org/images/4/44/Pgxc_HA_20121024.pdf and I'm gathering resources for a trial run. Can this be demonstrated well enough in three VM nodes: 1. GTM; 2. GTM proxy, coordinator, datanode; 3. GTM proxy, coordinator, datanode? I get the impression that many more nodes are really ideal for a complete HA solution. What would be the suggested base starting point? Maybe there is a cookbook I've not seen. Thanks, Ray
From: Ashutosh B. <ash...@en...> - 2013-06-07 04:28:47
|
It might help, if you can send us the EXPLAIN VERBOSE output of this query. On Thu, Jun 6, 2013 at 2:38 AM, Matt Warner <MW...@xi...> wrote: > Just to follow up on this, the pre-1.2 version works much better—the query > actually completes and I do see more CPU resources being used, but > curiously, not all of them. > > Still trying to figure out why that would be, so if anyone has any > suggestions, please let me know. > > Matt > > From: Matt Warner > Sent: Wednesday, June 05, 2013 8:52 AM > To: 'Ashutosh Bapat' > Cc: Pos...@li... > Subject: RE: [Postgres-xc-general] XC Performance with Subquery > > I'm using 1.0.3, but will try out pre-1.2. > > BTW, I had to make some minor changes to get 1.0.3 to compile correctly on > Solaris. Is anyone interested in receiving these changes? They're things > such as illegal return statement from void functions (which doesn't > actually make sense, as far as I know) that the Solaris compiler flags as > errors. > > -bash-4.1$ psql > > psql (PGXC 1.0.3, based on PG 9.1.9) > > Type "help" for help. > > postgres=# explain select count(*) from accn a1 where exists (select null > from accn_proc a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); > > QUERY PLAN > > ------------------------------------------------------------------------------ > > Aggregate (cost=0.02..0.03 rows=1 width=0) > > -> Nested Loop Semi Join (cost=0.00..0.01 rows=1 width=0) > > Join Filter: ((a1.pk_accn_id)::text = (a2.fk_accn_id)::text) > > -> Data Node Scan on a1 (cost=0.00..0.00 rows=1000 width=98) > > Node/s: node1, node2, node3, node4, node5, node6, node7, > node8 > > -> Data Node Scan on a2 (cost=0.00..0.00 rows=1000 width=98) > > Node/s: node1, node2, node3, node4, node5, node6, node7, > node8 > > (7 rows) > > create table accn(pk_accn_id character varying(40), > > <lots of other column definitions deleted for brevity>) > > distribute by hash(pk_accn_id) > > to node node1, node2, node3, node4, node5, node6, node7,node8; > > create table accn(pk_accn_id character varying(40), > > <lots of other column definitions deleted for brevity>) > > distribute by hash(fk_accn_id) > > to node node1, node2, node3, node4, node5, node6, node7,node8; > > From: Ashutosh Bapat [mailto:ash...@en...] > Sent: Tuesday, June 04, 2013 9:15 PM > To: Matt Warner > Cc: Pos...@li... > Subject: Re: [Postgres-xc-general] XC Performance with Subquery > > Hi Matt, > Which version of XC are you using? There has been a lot of change in the > planner since last release. You may try the latest master HEAD (to be > released as 1.2 in about a month). > It will help if you can provide all the table definitions and EXPLAIN > outputs. > > On Wed, Jun 5, 2013 at 5:40 AM, Matt Warner <MW...@xi...> wrote: > > I need to correct item 3, below. The coordinator and only one of the data > nodes goes to work. One by one, each of the data nodes appears to spin up > to process the request and then go back to sleep. > > ? > > From: Matt Warner > Sent: Tuesday, June 04, 2013 5:00 PM > To: 'Pos...@li...' > Subject: XC Performance with Subquery > > I've been experimenting with XC and see interesting results. I'm hoping > someone can help explain something I'm seeing. > > 1. I created two distributed tables, one with a primary key, one > with a foreign key, and hashed both tables by that key. I'm expecting this > to mean that the data for a given key is localized to a single node. > > 2. When I perform a simple "select count(*) from table1" I see all > 8 data nodes consuming CPU (plus the coordinator), which I take to be a > good sign—all nodes are working in parallel. > > 3. When I perform a join on the distribution key, I see only the > coordinator go to work instead of all 8 data nodes. > > 4. I notice that the explain plan appears similar to page 55 of > this document ( > http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf > ). > > 5. I have indexes on the distribution keys, but that does not seem > to make any difference. > > How do I get XC to perform the join on the data nodes? To be verbose, I am > expecting to see more CPU resources consumed in this query: > > select count(*) from tablea a1 where exists (select null from tableb a2 > where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); > > Rewriting this as a simple join does not seem to work any better. > > What am I missing? > > TIA, > > Matt > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- > Best Wishes, > Ashutosh Bapat > EntepriseDB Corporation > The Postgres Database Company -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company
From: Matt W. <MW...@XI...> - 2013-06-05 21:08:52
|
Just to follow up on this, the pre-1.2 version works much better-the query actually completes and I do see more CPU resources being used, but curiously, not all of them. Still trying to figure out why that would be, so if anyone has any suggestions, please let me know. Matt From: Matt Warner Sent: Wednesday, June 05, 2013 8:52 AM To: 'Ashutosh Bapat' Cc: Pos...@li... Subject: RE: [Postgres-xc-general] XC Performance with Subquery I'm using 1.0.3, but will try out pre-1.2. BTW, I had to make some minor changes to get 1.0.3 to compile correctly on Solaris. Is anyone interested in receiving these changes? They're things such as illegal return statement from void functions (which doesn't actually make sense, as far as I know) that the Solaris compiler flags as errors. -bash-4.1$ psql psql (PGXC 1.0.3, based on PG 9.1.9) Type "help" for help. postgres=# explain select count(*) from accn a1 where exists (select null from accn_proc a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); QUERY PLAN ------------------------------------------------------------------------------ Aggregate (cost=0.02..0.03 rows=1 width=0) -> Nested Loop Semi Join (cost=0.00..0.01 rows=1 width=0) Join Filter: ((a1.pk_accn_id)::text = (a2.fk_accn_id)::text) -> Data Node Scan on a1 (cost=0.00..0.00 rows=1000 width=98) Node/s: node1, node2, node3, node4, node5, node6, node7, node8 -> Data Node Scan on a2 (cost=0.00..0.00 rows=1000 width=98) Node/s: node1, node2, node3, node4, node5, node6, node7, node8 (7 rows) create table accn(pk_accn_id character varying(40), <lots of other column definitions deleted for brevity>) distribute by hash(pk_accn_id) to node node1, node2, node3, node4, node5, node6, node7,node8; create table accn(pk_accn_id character varying(40), <lots of other column definitions deleted for brevity>) distribute by hash(fk_accn_id) to node node1, node2, node3, node4, node5, node6, node7,node8; From: Ashutosh Bapat [mailto:ash...@en...] Sent: Tuesday, June 04, 2013 9:15 PM To: Matt Warner Cc: Pos...@li...<mailto:Pos...@li...> Subject: Re: [Postgres-xc-general] XC Performance with Subquery Hi Matt, Which version of XC are you using? There has been a lot of change in the planner since last release. You may try the latest master HEAD (to be released as 1.2 in about a month). It will help if you can provide all the table definitions and EXPLAIN outputs. On Wed, Jun 5, 2013 at 5:40 AM, Matt Warner <MW...@xi...<mailto:MW...@xi...>> wrote: I need to correct item 3, below. The coordinator and only one of the data nodes goes to work. One by one, each of the data nodes appears to spin up to process the request and then go back to sleep. ? From: Matt Warner Sent: Tuesday, June 04, 2013 5:00 PM To: 'Pos...@li...<mailto:Pos...@li...>' Subject: XC Performance with Subquery I've been experimenting with XC and see interesting results. I'm hoping someone can help explain something I'm seeing. 1. I created two distributed tables, one with a primary key, one with a foreign key, and hashed both tables by that key. I'm expecting this to mean that the data for a given key is localized to a single node. 2. When I perform a simple "select count(*) from table1" I see all 8 data nodes consuming CPU (plus the coordinator), which I take to be a good sign-all nodes are working in parallel. 3. When I perform a join on the distribution key, I see only the coordinator go to work instead of all 8 data nodes. 4. 
I notice that the explain plan appears similar to page 55 of this document (http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf). 5. I have indexes on the distribution keys, but that does not seem to make any difference. How do I get XC to perform the join on the data nodes? To be verbose, I am expecting to see more CPU resources consumed in this query: select count(*) from tablea a1 where exists (select null from tableb a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); Rewriting this as a simple join does not seem to work any better. What am I missing? TIA, Matt ------------------------------------------------------------------------------ How ServiceNow helps IT people transform IT departments: 1. A cloud service to automate IT design, transition and operations 2. Dashboards that offer high-level views of enterprise services 3. A single system of record for all IT processes http://p.sf.net/sfu/servicenow-d2d-j _______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Matt W. <MW...@XI...> - 2013-06-05 19:48:05
|
From: Michael Paquier [mailto:mic...@gm...] Sent: Wednesday, June 05, 2013 11:36 AM To: Matt Warner Cc: Ashutosh Bapat; Pos...@li... Subject: Re: [Postgres-xc-general] XC Performance with Subquery On Thu, Jun 6, 2013 at 3:25 AM, Matt Warner <MW...@xi...<mailto:MW...@xi...>> wrote: I've obtained the latest version via git, and compiled and installed it. It looks like the syntax to create a distributed table has maybe changed? Yes, you need to put some brackets when specifying a node list in CREATE TABLE: - 1.0 grammar: [ TO NODE nodename [, ... ] ] - 1.1 grammar: [ TO (nodename [, ... ]) ] This has been done to solve bison shift/reduce conflicts because ALTER TABLE using now similar grammar as CREATE TABLE for node list definition. [Matt Warner] That worked. Thanks! |
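To make the change concrete, here is the same hash-distributed table declared under both grammars. This is only a sketch pieced together from the exchange above: the table and node names are placeholders, and the exact placement of the NODE keyword in the 1.1 form should be verified against the 1.1 CREATE TABLE synopsis once it is published.

-- Postgres-XC 1.0.x form: bare node list after TO NODE.
CREATE TABLE accn_demo (pk_accn_id character varying(40))
    DISTRIBUTE BY HASH (pk_accn_id)
    TO NODE node1, node2, node3, node4;

-- Postgres-XC 1.1 form: the node list is wrapped in brackets,
-- which is the change Michael describes ("That worked" above
-- confirms the bracketed list is accepted).
CREATE TABLE accn_demo (pk_accn_id character varying(40))
    DISTRIBUTE BY HASH (pk_accn_id)
    TO NODE (node1, node2, node3, node4);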
From: seikath <se...@gm...> - 2013-06-05 18:41:20
|
I think I got it. As the import uses COPY FROM, this should be the same bug that was already filed: COPY FROM does not copy data to PRIMARY datanode - ID: 3611989 http://sourceforge.net/tracker/?func=detail&aid=3611989&group_id=311227&atid=1310232 It seems to be the same issue, as I executed the import from the second node and the primary node did not get the data. Cheers On 06/05/2013 04:05 PM, Andrei Martsinchyk wrote: > Ivan, > > It would not be a higly available setup, because if any of the nodes fails, all database updates will be failing. > Postgres master + Hot Standby's configuration would work better. The updates will stop only if master fails, but you can promote one slave instead, and you > can easily scale out handle more read-only queries. > > So my initial guess was incorrect. Try to set up logging more verbose and try to load into a test table. Probably you find a clue in the logs. >
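Given the tracker item referenced above (COPY FROM reportedly skipping the PRIMARY datanode for replicated tables), one quick check is to see which datanode each coordinator considers primary and preferred; the pgxc_node columns used here match the catalog output shown later in this thread. Per-node row counts can then be compared, for instance with EXECUTE DIRECT, whose node-list syntax differs between the 1.0 and 1.1 releases.

-- Run on each coordinator: show the datanode flags relevant to the
-- referenced COPY/PRIMARY bug. node_type 'D' restricts the list to datanodes.
SELECT node_name, node_host, node_port, nodeis_primary, nodeis_preferred
FROM pgxc_node
WHERE node_type = 'D'
ORDER BY node_name;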
From: Michael P. <mic...@gm...> - 2013-06-05 18:36:21
|
On Thu, Jun 6, 2013 at 3:25 AM, Matt Warner <MW...@xi...> wrote: > I've obtained the latest version via git, and compiled and installed it. > It looks like the syntax to create a distributed table has maybe changed? > Yes, you need to put some brackets when specifying a node list in CREATE TABLE: - 1.0 grammar: [ TO NODE nodename [, ... ] ] - 1.1 grammar: [ TO (nodename [, ... ]) ] This has been done to solve bison shift/reduce conflicts because ALTER TABLE using now similar grammar as CREATE TABLE for node list definition. > > Where do I find the latest syntax for a distributed table? > There is no documentation based on 1.1 builds anywhere yet, neither HTML nor PDF. It would be good to have something when 1.1 beta is released though. -- Michael
From: Matt W. <MW...@XI...> - 2013-06-05 18:26:03
|
I've obtained the latest version via git, and compiled and installed it. It looks like the syntax to create a distributed table has maybe changed? Where do I find the latest syntax for a distributed table? From: Matt Warner Sent: Wednesday, June 05, 2013 8:52 AM To: 'Ashutosh Bapat' Cc: Pos...@li... Subject: RE: [Postgres-xc-general] XC Performance with Subquery I'm using 1.0.3, but will try out pre-1.2. BTW, I had to make some minor changes to get 1.0.3 to compile correctly on Solaris. Is anyone interested in receiving these changes? They're things such as illegal return statement from void functions (which doesn't actually make sense, as far as I know) that the Solaris compiler flags as errors. -bash-4.1$ psql psql (PGXC 1.0.3, based on PG 9.1.9) Type "help" for help. postgres=# explain select count(*) from accn a1 where exists (select null from accn_proc a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); QUERY PLAN ------------------------------------------------------------------------------ Aggregate (cost=0.02..0.03 rows=1 width=0) -> Nested Loop Semi Join (cost=0.00..0.01 rows=1 width=0) Join Filter: ((a1.pk_accn_id)::text = (a2.fk_accn_id)::text) -> Data Node Scan on a1 (cost=0.00..0.00 rows=1000 width=98) Node/s: node1, node2, node3, node4, node5, node6, node7, node8 -> Data Node Scan on a2 (cost=0.00..0.00 rows=1000 width=98) Node/s: node1, node2, node3, node4, node5, node6, node7, node8 (7 rows) create table accn(pk_accn_id character varying(40), <lots of other column definitions deleted for brevity>) distribute by hash(pk_accn_id) to node node1, node2, node3, node4, node5, node6, node7,node8; create table accn(pk_accn_id character varying(40), <lots of other column definitions deleted for brevity>) distribute by hash(fk_accn_id) to node node1, node2, node3, node4, node5, node6, node7,node8; From: Ashutosh Bapat [mailto:ash...@en...] Sent: Tuesday, June 04, 2013 9:15 PM To: Matt Warner Cc: Pos...@li...<mailto:Pos...@li...> Subject: Re: [Postgres-xc-general] XC Performance with Subquery Hi Matt, Which version of XC are you using? There has been a lot of change in the planner since last release. You may try the latest master HEAD (to be released as 1.2 in about a month). It will help if you can provide all the table definitions and EXPLAIN outputs. On Wed, Jun 5, 2013 at 5:40 AM, Matt Warner <MW...@xi...<mailto:MW...@xi...>> wrote: I need to correct item 3, below. The coordinator and only one of the data nodes goes to work. One by one, each of the data nodes appears to spin up to process the request and then go back to sleep. ? From: Matt Warner Sent: Tuesday, June 04, 2013 5:00 PM To: 'Pos...@li...<mailto:Pos...@li...>' Subject: XC Performance with Subquery I've been experimenting with XC and see interesting results. I'm hoping someone can help explain something I'm seeing. 1. I created two distributed tables, one with a primary key, one with a foreign key, and hashed both tables by that key. I'm expecting this to mean that the data for a given key is localized to a single node. 2. When I perform a simple "select count(*) from table1" I see all 8 data nodes consuming CPU (plus the coordinator), which I take to be a good sign-all nodes are working in parallel. 3. When I perform a join on the distribution key, I see only the coordinator go to work instead of all 8 data nodes. 4. I notice that the explain plan appears similar to page 55 of this document (http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf). 5. 
I have indexes on the distribution keys, but that does not seem to make any difference. How do I get XC to perform the join on the data nodes? To be verbose, I am expecting to see more CPU resources consumed in this query: select count(*) from tablea a1 where exists (select null from tableb a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); Rewriting this as a simple join does not seem to work any better. What am I missing? TIA, Matt ------------------------------------------------------------------------------ How ServiceNow helps IT people transform IT departments: 1. A cloud service to automate IT design, transition and operations 2. Dashboards that offer high-level views of enterprise services 3. A single system of record for all IT processes http://p.sf.net/sfu/servicenow-d2d-j _______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: Matt W. <MW...@XI...> - 2013-06-05 15:52:37
|
I'm using 1.0.3, but will try out pre-1.2. BTW, I had to make some minor changes to get 1.0.3 to compile correctly on Solaris. Is anyone interested in receiving these changes? They're things such as illegal return statement from void functions (which doesn't actually make sense, as far as I know) that the Solaris compiler flags as errors. -bash-4.1$ psql psql (PGXC 1.0.3, based on PG 9.1.9) Type "help" for help. postgres=# explain select count(*) from accn a1 where exists (select null from accn_proc a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); QUERY PLAN ------------------------------------------------------------------------------ Aggregate (cost=0.02..0.03 rows=1 width=0) -> Nested Loop Semi Join (cost=0.00..0.01 rows=1 width=0) Join Filter: ((a1.pk_accn_id)::text = (a2.fk_accn_id)::text) -> Data Node Scan on a1 (cost=0.00..0.00 rows=1000 width=98) Node/s: node1, node2, node3, node4, node5, node6, node7, node8 -> Data Node Scan on a2 (cost=0.00..0.00 rows=1000 width=98) Node/s: node1, node2, node3, node4, node5, node6, node7, node8 (7 rows) create table accn(pk_accn_id character varying(40), <lots of other column definitions deleted for brevity>) distribute by hash(pk_accn_id) to node node1, node2, node3, node4, node5, node6, node7,node8; create table accn(pk_accn_id character varying(40), <lots of other column definitions deleted for brevity>) distribute by hash(fk_accn_id) to node node1, node2, node3, node4, node5, node6, node7,node8; From: Ashutosh Bapat [mailto:ash...@en...] Sent: Tuesday, June 04, 2013 9:15 PM To: Matt Warner Cc: Pos...@li... Subject: Re: [Postgres-xc-general] XC Performance with Subquery Hi Matt, Which version of XC are you using? There has been a lot of change in the planner since last release. You may try the latest master HEAD (to be released as 1.2 in about a month). It will help if you can provide all the table definitions and EXPLAIN outputs. On Wed, Jun 5, 2013 at 5:40 AM, Matt Warner <MW...@xi...<mailto:MW...@xi...>> wrote: I need to correct item 3, below. The coordinator and only one of the data nodes goes to work. One by one, each of the data nodes appears to spin up to process the request and then go back to sleep. ? From: Matt Warner Sent: Tuesday, June 04, 2013 5:00 PM To: 'Pos...@li...<mailto:Pos...@li...>' Subject: XC Performance with Subquery I've been experimenting with XC and see interesting results. I'm hoping someone can help explain something I'm seeing. 1. I created two distributed tables, one with a primary key, one with a foreign key, and hashed both tables by that key. I'm expecting this to mean that the data for a given key is localized to a single node. 2. When I perform a simple "select count(*) from table1" I see all 8 data nodes consuming CPU (plus the coordinator), which I take to be a good sign-all nodes are working in parallel. 3. When I perform a join on the distribution key, I see only the coordinator go to work instead of all 8 data nodes. 4. I notice that the explain plan appears similar to page 55 of this document (http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf). 5. I have indexes on the distribution keys, but that does not seem to make any difference. How do I get XC to perform the join on the data nodes? 
To be verbose, I am expecting to see more CPU resources consumed in this query: select count(*) from tablea a1 where exists (select null from tableb a2 where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52); Rewriting this as a simple join does not seem to work any better. What am I missing? TIA, Matt ------------------------------------------------------------------------------ How ServiceNow helps IT people transform IT departments: 1. A cloud service to automate IT design, transition and operations 2. Dashboards that offer high-level views of enterprise services 3. A single system of record for all IT processes http://p.sf.net/sfu/servicenow-d2d-j _______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- Best Wishes, Ashutosh Bapat EntepriseDB Corporation The Postgres Database Company |
From: seikath <se...@gm...> - 2013-06-05 15:08:47
|
Thank you Andrei, OK, I will go for your recipe and will do some test over OpenStack with full verbosity. Thank you all. On 06/05/2013 04:05 PM, Andrei Martsinchyk wrote: > Ivan, > > It would not be a higly available setup, because if any of the nodes fails, all database updates will be failing. > Postgres master + Hot Standby's configuration would work better. The updates will stop only if master fails, but you can promote one slave instead, and you > can easily scale out handle more read-only queries. > > So my initial guess was incorrect. Try to set up logging more verbose and try to load into a test table. Probably you find a clue in the logs. > > > 2013/6/5 seikath <se...@gm... <mailto:se...@gm...>> > > Hello all, > > this is in short the idea of the "HA" implementation at AWS: > > We use Amazon VPC, several frontends, they use TCP Amazon Loadbalancer to balance the dblink to the 4 nodes XC. > Each XC node has nginx which is used by the AWS LB to perform various checks on XC node, > if one of them fails, nginx responce is 404 and the AWS LoadBalancer excludes the XC node from the round robin. > Every XC node has pgboucer in front of the coordinators. > So in short : > internet --port 443-->[AWS LB SSL certificate offloading ] --port 80--> [apps]x4 --port 5432 - > [AWS Internal LB TCP ] --pgbouncer-port 5434 -- > [ > [pgboucer] --port 5432-->[coordinator] --port 6543-->[datanode] ]x4 > > The database is small, and we do expect massive concurrent load. > Later on bigger db size we will separate the load on two XC clusters, write only and read only, I plan to use streaming/other type of replication from the > write group > to the massive read only group. > > But this is just the idea, atm we just test the Rails Framework with XC. > In a view to the expected huge load, we want to use XC in production. > > Regariding the select nodeoids from pgxc_class result, I see 4 columns on all the 4 nodes: > > boxes@vpc-xc-coord-01:5432::boxes_production=[Wed Jun 5 13:34:27 UTC 2013]> select nodeoids from pgxc_class limit 4; > nodeoids > ------------------------- > 16384 16386 16388 16390 > 16384 16386 16388 16390 > 16384 16386 16388 16390 > 16384 16386 16388 16390 > (4 rows) > > Time: 4.444 ms > postgres@vpc-xc-coord-02:5432::boxes_production=[Wed Jun 5 13:34:31 UTC 2013]# select nodeoids from pgxc_class limit 4; > nodeoids > ------------------------- > 16386 16384 16388 16390 > 16386 16384 16388 16390 > 16386 16384 16388 16390 > 16386 16384 16388 16390 > (4 rows) > > postgres@vpc-xc-coord-03:5432::boxes_production=[Wed Jun 5 13:34:34 UTC 2013]# select nodeoids from pgxc_class limit 4; > nodeoids > ------------------------- > 16386 16388 16384 16390 > 16386 16388 16384 16390 > 16386 16388 16384 16390 > 16386 16388 16384 16390 > (4 rows) > > postgres@vpc-xc-coord-04:5432::boxes_production=[Wed Jun 5 13:34:37 UTC 2013]# select nodeoids from pgxc_class limit 4; > nodeoids > ------------------------- > 16386 16388 16390 16384 > 16386 16388 16390 16384 > 16386 16388 16390 16384 > 16386 16388 16390 16384 > (4 rows) > > > Kind regards, and again, thank you all. > > Cheers > > Ivan > > > On 06/05/2013 02:29 PM, Mason Sharp wrote: >> >> >> >> On Wed, Jun 5, 2013 at 7:37 AM, Andrei Martsinchyk <and...@gm... <mailto:and...@gm...>> wrote: >> >> Hi Iván, >> >> First of all, you should not replicate all your tables. No such case when it might be reasonable, except maybe some ultimate test case. Single >> Postgres server would perform better then your four-node cluster in any application. 
So think again about the distribution planning. >> >> >> It might be ok if the data set is relatively small (fits in memory on all nodes) and there is very high read-only concurrency with multiple coordinators. >> Anyway, I agree, in general distributing tables is the thing to do. >> >> >> Regarding your problem, I guess node definitions was different at the moment when you created your tables. >> To verify, please run following query on all your coordinators: >> >> select nodeoids from pgxc_class; >> >> The result should look like this: >> >> nodeoids >> --------------------------- >> 16386 16387 16388 16389 >> 16386 16387 16388 16389 >> ... >> (N rows) >> >> If you see less then four node OIDs in some or all rows that is the cause. >> >> >> >> >> 2013/6/5 seikath <se...@gm... <mailto:se...@gm...>> >> >> Hello guys, >> >> I am facing one problem with XC I would like to know if its common or not. >> >> 4 AWS based XC nodes installed with datanode and coordinator on each one, then I have separated one gtm-proxy and one gtm node. >> version used: >> psql (Postgres-XC) 1.0.3 >> (based on PostgreSQL) 9.1.9 >> >> once installed, it operates OK as db/roles creation , database import etc. >> >> The issue is, on db import, I get only three of the nodes replicated , as all the tables definitions are with *distribute by replication* : >> >> xzcat prod-db01-new.2013-06-04.12.31.34.sql.xz | sed 'h;/^CREATE TABLE/,/^);/s/;/ DISTRIBUTE BY REPLICATION;/' | psql -U postgres dbname >> I see no errors at the time of the dbimport , >> the tables , indexes , pkeys are replicated, just the data is only at three of the 4 active nodes. >> >> Details : >> >> # coordinatods config: >> >> boxes@vpc-xc-coord-01:5432::boxes_production=[Wed Jun 5 08:41:26 UTC 2013]> select * from pgxc_node; >> node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id >> ------------+-----------+-----------+-----------+----------------+------------------+------------- >> coord01 | C | 5432 | localhost | f | f | -951114102 >> datanode01 | D | 6543 | localhost | t | t | -561864558 >> coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700 >> datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207 >> coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819 >> datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519 >> coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041 >> datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720 >> (8 rows) >> >> postgres@vpc-xc-coord-02:5432::boxes_production=[Wed Jun 5 08:42:24 UTC 2013]# select * from pgxc_node; >> node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id >> ------------+-----------+-----------+-----------+----------------+------------------+------------- >> coord02 | C | 5432 | localhost | f | f | -1523582700 >> datanode02 | D | 6543 | localhost | f | t | 670480207 >> coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102 >> datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558 >> coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819 >> datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519 >> coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041 >> datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720 >> (8 rows) >> postgres@vpc-xc-coord-03:5432::boxes_production=[Wed Jun 5 08:42:57 UTC 2013]# select * from pgxc_node; >> node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id >> ------------+-----------+-----------+-----------+----------------+------------------+------------- >> coord03 | C | 5432 
| localhost | f | f | 1641506819 >> datanode03 | D | 6543 | localhost | f | t | -1804036519 >> coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102 >> datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558 >> coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700 >> datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207 >> coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041 >> datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720 >> >> postgres@vpc-xc-coord-04:5432::boxes_production=[Wed Jun 5 08:20:35 UTC 2013]# select * from pgxc_node; >> node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id >> ------------+-----------+-----------+-----------+----------------+------------------+------------- >> coord04 | C | 5432 | localhost | f | f | -1385444041 >> datanode04 | D | 6543 | localhost | f | t | 1005050720 >> coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102 >> datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558 >> coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700 >> datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207 >> coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819 >> datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519 >> >> fist node coordinator and datanodes config: >> ==================================================================== >> >> postgres@vpc-xc-coord-01:[Wed Jun 05 08:40:00][/usr/local/pgsql]$ cat datanode.postgresql.conf >> listen_addresses = '*' # what IP address(es) to listen on; >> port = 6543 # (change requires restart) >> max_connections = 100 # (change requires restart) >> shared_buffers = 320MB # min 128kB >> max_prepared_transactions = 100 # zero disables the feature >> datestyle = 'iso, mdy' >> lc_messages = 'en_US.UTF-8' # locale for system error message >> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting >> lc_numeric = 'en_US.UTF-8' # locale for number formatting >> lc_time = 'en_US.UTF-8' # locale for time formatting >> default_text_search_config = 'pg_catalog.english' >> include '/usr/local/pgsql/gtm.include.conf' >> include '/usr/local/pgsql/datanode_node_name.conf' >> #pgxc_node_name = 'datanode04' # Coordinator or Datanode name >> enforce_two_phase_commit = on # Enforce the usage of two-phase commit on transactions >> enable_fast_query_shipping = on >> enable_remotejoin = on >> enable_remotegroup = on >> >> >> postgres@vpc-xc-coord-01:[Wed Jun 05 08:43:29][/usr/local/pgsql]$ cat coordinator.postgresql.conf >> listen_addresses = '*' # what IP address(es) to listen on; >> port = 5432 # (change requires restart) >> max_connections = 100 # (change requires restart) >> shared_buffers = 120MB # min 128kB >> max_prepared_transactions = 100 # zero disables the feature >> datestyle = 'iso, mdy' >> lc_messages = 'en_US.UTF-8' # locale for system error message >> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting >> lc_numeric = 'en_US.UTF-8' # locale for number formatting >> lc_time = 'en_US.UTF-8' # locale for time formatting >> default_text_search_config = 'pg_catalog.english' >> pooler_port = 6667 # Pool Manager TCP port >> min_pool_size = 1 # Initial pool size >> max_pool_size = 100 # Maximum pool size >> max_coordinators = 16 # Maximum number of Coordinators >> max_datanodes = 16 # Maximum number of Datanodes >> include '/usr/local/pgsql/gtm.include.conf' >> include '/usr/local/pgsql/coordinator_node_name.conf' >> enforce_two_phase_commit = on # Enforce the usage of two-phase commit on transactions >> enable_fast_query_shipping = on >> enable_remotejoin = on >> enable_remotegroup = on 
>> >> >> postgres@vpc-xc-coord-01:[Wed Jun 05 08:43:38][/usr/local/pgsql]$ cat /usr/local/pgsql/gtm.include.conf >> gtm_host = '10.0.1.16' # Host name or address of GTM Proxy, if not - direct link to GTM >> gtm_port = 6543 >> >> #gtm_host = '127.0.0.1' >> #gtm_port = 5434 >> >> boxes@vpc-xc-coord-01:5432::boxes_production=[Wed Jun 5 08:42:04 UTC 2013]> select count(*) from friends; >> count >> ------- >> 1 >> (1 row) >> >> Time: 4.698 ms >> >> postgres@vpc-xc-coord-02:5432::boxes_production=[Wed Jun 5 08:42:25 UTC 2013]# select count(*) from friends; >> count >> ------- >> 41416 >> (1 row) >> >> postgres@vpc-xc-coord-03:5432::boxes_production=[Wed Jun 5 08:43:01 UTC 2013]# select count(*) from friends; >> count >> ------- >> 41416 >> (1 row) >> >> postgres@vpc-xc-coord-04:5432::boxes_production=[Wed Jun 5 08:42:53 UTC 2013]# select count(*) from friends; >> count >> ------- >> 41416 >> (1 row) >> >> # identical configs : >> postgres@vpc-xc-coord-01:[Wed Jun 05 08:53:58][/usr/local/pgsql]$ md5sum coordinator.postgresql.conf >> b7f61b5d8baeec83cd82d9f1ee744728 coordinator.postgresql.conf >> postgres@vpc-xc-coord-02:[Wed Jun 05 08:53:53][~]$ md5sum coordinator.postgresql.conf >> b7f61b5d8baeec83cd82d9f1ee744728 coordinator.postgresql.conf >> >> postgres@vpc-xc-coord-01:[Wed Jun 05 08:54:58][/usr/local/pgsql]$ md5sum datanode.postgresql.conf >> 00d6a5736b6401dc6cc3d820fb412082 datanode.postgresql.conf >> postgres@vpc-xc-coord-02:[Wed Jun 05 08:54:37][~]$ md5sum datanode.postgresql.conf >> 00d6a5736b6401dc6cc3d820fb412082 datanode.postgresql.conf >> >> postgres@vpc-xc-coord-01:[Wed Jun 05 08:55:24][/usr/local/pgsql]$ md5sum /usr/local/pgsql/gtm.include.conf >> a6e7c3a21958a23bfb5054dc645d9576 /usr/local/pgsql/gtm.include.conf >> postgres@vpc-xc-coord-02:[Wed Jun 05 08:55:01][~]$ md5sum /usr/local/pgsql/gtm.include.conf >> a6e7c3a21958a23bfb5054dc645d9576 /usr/local/pgsql/gtm.include.conf >> >> >> ==================================================================== >> I >> In general I suspect wrong configuration, but atm can not find any. >> >> I did a test importing the same db at the second XC node, and its the same issue: the fixt XC gets eveyrthing replicated but the actual data. >> >> My plan is to launch new clone instance of the vpc-xc-coord-02 as a replacement of the vpc-xc-coord-01, but this is the desperate plan C >> I want to know what happens .. :) >> >> >> Kind regards, >> >> Iván >> >> >> >> >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... <mailto:Pos...@li...> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> >> >> >> -- >> Andrei Martsinchyk >> >> StormDB - http://www.stormdb.com <http://www.stormdb.com/> >> The Database Cloud >> >> >> ------------------------------------------------------------------------------ >> How ServiceNow helps IT people transform IT departments: >> 1. A cloud service to automate IT design, transition and operations >> 2. Dashboards that offer high-level views of enterprise services >> 3. 
A single system of record for all IT processes >> http://p.sf.net/sfu/servicenow-d2d-j >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... <mailto:Pos...@li...> >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> >> >> >> -- >> Mason Sharp >> >> StormDB - http://www.stormdb.com >> The Database Cloud >> Postgres-XC Support and Services > > ------------------------------------------------------------------------------ > How ServiceNow helps IT people transform IT departments: > 1. A cloud service to automate IT design, transition and operations > 2. Dashboards that offer high-level views of enterprise services > 3. A single system of record for all IT processes > http://p.sf.net/sfu/servicenow-d2d-j > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... <mailto:Pos...@li...> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > > -- > Andrei Martsinchyk > > StormDB - http://www.stormdb.com <http://www.stormdb.com/> > The Database Cloud > |
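As a follow-up to the "select nodeoids from pgxc_class" check suggested in this thread, the stored OIDs can be resolved to node names, which makes the per-table node lists easier to compare across coordinators (the OIDs and their ordering can legitimately differ from coordinator to coordinator). This is only a sketch against the XC catalogs as they appear in this thread; pcrelid is assumed to be the pgxc_class column holding the distributed table's OID.

-- Run on each coordinator: list, per table, the datanodes it is
-- distributed or replicated to, by node name instead of by OID.
SELECT c.pcrelid::regclass AS table_name,
       n.node_name
FROM pgxc_class c
JOIN pgxc_node n ON n.oid = ANY (c.nodeoids)
ORDER BY 1, 2;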