From: Tatsuo I. <is...@po...> - 2012-11-06 10:47:01
|
Thanks. Your view works great! BTW, Should the view be executed on coordinator node? If so, I'm a little bit confused. Because you said: > Because pgxc_class and pg_class are both local to each node, oid is > also local. To query correctly, they should be issued using EXECUTE > DIRECT statement. Each EXECUTE DIRECT will return the result local > to each node. > 2012/11/6 Tatsuo Ishii <is...@po...>: >> I sent a query to coordinator node and got following result. Doesn't >> this means table "pgbench_accounts" distributed? >> >> test=# select * from pgxc_class as x join pg_class as c on x.pcrelid = c.oid where c.relname = 'pgbench_accounts'; > pgxc_class has distribution definition of each table. I'm using the > following views as a handy tool: > > ---- > [koichi@linker:scripts]$ cat distr_view.sql > DROP VIEW IF EXISTS xc_table_distribution; > > > CREATE VIEW xc_table_distribution AS > > SELECT pg_class.relname relation, > CASE > WHEN pclocatortype = 'H' THEN 'Hash' > WHEN pclocatortype = 'M' THEN 'Modulo' > WHEN pclocatortype = 'N' THEN 'Round Robin' > WHEN pclocatortype = 'R' THEN 'Replicate' > ELSE 'Unknown' > END AS distribution, > pg_attribute.attname attname > FROM pg_class, pgxc_class, pg_attribute > WHERE pg_class.oid = pgxc_class.pcrelid > and pg_class.oid = pg_attribute.attrelid > and pgxc_class.pcattnum = pg_attribute.attnum > UNION > > SELECT pg_class.relname relation, > CASE > WHEN pclocatortype = 'H' THEN 'Hash' > WHEN pclocatortype = 'M' THEN 'Modulo' > WHEN pclocatortype = 'N' THEN 'Round Robin' > WHEN pclocatortype = 'R' THEN 'Replicate' > ELSE 'Unknown' > END AS distribution, > '- none -' attname > FROM pg_class, pgxc_class, pg_attribute > WHERE pg_class.oid = pgxc_class.pcrelid > and pg_class.oid = pg_attribute.attrelid > and pgxc_class.pcattnum = 0 > ; > > COMMENT ON VIEW xc_table_distribution IS > 'View to show distribution/replication attribute of the given table.'; > > [koichi@linker:scripts]$ > ------ > > Info for pgbench tables are as follows: > ----------- > koichi=# select * from xc_table_distribution > koichi-# ; > relation | distribution | attname > ------------------+--------------+---------- > t | Hash | a > pgbench_branches | Hash | bid > y | Modulo | a > pgbench_tellers | Hash | bid > pgbench_accounts | Hash | bid > x | Round Robin | - none - > pgbench_history | Hash | bid > s | Replicate | - none - > (8 rows) > > koichi=# > ------ > > Hope it helps. > > Regards; > ---------- > Koichi Suzuki > > > 2012/11/6 Tatsuo Ishii <is...@po...>: >>> One suggestion. Because set bid value is relatively small, it may be >>> better to distribute tables using MODULO, not HASH. >> >> What is the distribution method when pgbench -k is used? >> -- >> Tatsuo Ishii >> SRA OSS, Inc. Japan >> English: http://www.sraoss.co.jp/index_en.php >> Japanese: http://www.sraoss.co.jp |
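As a concrete illustration of the EXECUTE DIRECT advice discussed above, the view can be queried node by node from a coordinator. This is only a sketch: the node name datanode1 is the one used later in this thread, and it assumes the xc_table_distribution view has also been created on that node.

----
-- Ask one named node for the contents of its local catalog:
EXECUTE DIRECT ON datanode1 'SELECT * FROM xc_table_distribution';

-- The same query run without EXECUTE DIRECT only reflects the
-- coordinator's own local pgxc_class/pg_class entries:
SELECT * FROM xc_table_distribution;
----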
From: Koichi S. <koi...@gm...> - 2012-11-06 10:03:58
|
The result has several noise. Tables t, y, x, and s are all for my own test. Sorry for inconvenience. ---------- Koichi Suzuki 2012/11/6 Koichi Suzuki <koi...@gm...>: > pgxc_class has distribution definition of each table. I'm using the > following views as a handy tool: > > ---- > [koichi@linker:scripts]$ cat distr_view.sql > DROP VIEW IF EXISTS xc_table_distribution; > > > CREATE VIEW xc_table_distribution AS > > SELECT pg_class.relname relation, > CASE > WHEN pclocatortype = 'H' THEN 'Hash' > WHEN pclocatortype = 'M' THEN 'Modulo' > WHEN pclocatortype = 'N' THEN 'Round Robin' > WHEN pclocatortype = 'R' THEN 'Replicate' > ELSE 'Unknown' > END AS distribution, > pg_attribute.attname attname > FROM pg_class, pgxc_class, pg_attribute > WHERE pg_class.oid = pgxc_class.pcrelid > and pg_class.oid = pg_attribute.attrelid > and pgxc_class.pcattnum = pg_attribute.attnum > UNION > > SELECT pg_class.relname relation, > CASE > WHEN pclocatortype = 'H' THEN 'Hash' > WHEN pclocatortype = 'M' THEN 'Modulo' > WHEN pclocatortype = 'N' THEN 'Round Robin' > WHEN pclocatortype = 'R' THEN 'Replicate' > ELSE 'Unknown' > END AS distribution, > '- none -' attname > FROM pg_class, pgxc_class, pg_attribute > WHERE pg_class.oid = pgxc_class.pcrelid > and pg_class.oid = pg_attribute.attrelid > and pgxc_class.pcattnum = 0 > ; > > COMMENT ON VIEW xc_table_distribution IS > 'View to show distribution/replication attribute of the given table.'; > > [koichi@linker:scripts]$ > ------ > > Info for pgbench tables are as follows: > ----------- > koichi=# select * from xc_table_distribution > koichi-# ; > relation | distribution | attname > ------------------+--------------+---------- > t | Hash | a > pgbench_branches | Hash | bid > y | Modulo | a > pgbench_tellers | Hash | bid > pgbench_accounts | Hash | bid > x | Round Robin | - none - > pgbench_history | Hash | bid > s | Replicate | - none - > (8 rows) > > koichi=# > ------ > > Hope it helps. > > Regards; > ---------- > Koichi Suzuki > > > 2012/11/6 Tatsuo Ishii <is...@po...>: >>> One suggestion. Because set bid value is relatively small, it may be >>> better to distribute tables using MODULO, not HASH. >> >> What is the distribution method when pgbench -k is used? >> -- >> Tatsuo Ishii >> SRA OSS, Inc. Japan >> English: http://www.sraoss.co.jp/index_en.php >> Japanese: http://www.sraoss.co.jp |
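Since the extra tables are just leftovers from local testing, a reader reproducing this can simply filter the view output. A minimal example, assuming the view shown in this thread has been created:

----
SELECT * FROM xc_table_distribution
 WHERE relation LIKE 'pgbench%'
 ORDER BY relation;
----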
From: Koichi S. <koi...@gm...> - 2012-11-06 10:02:55
|
pgxc_class has distribution definition of each table. I'm using the
following views as a handy tool:

----
[koichi@linker:scripts]$ cat distr_view.sql
DROP VIEW IF EXISTS xc_table_distribution;

CREATE VIEW xc_table_distribution AS
SELECT pg_class.relname relation,
       CASE
           WHEN pclocatortype = 'H' THEN 'Hash'
           WHEN pclocatortype = 'M' THEN 'Modulo'
           WHEN pclocatortype = 'N' THEN 'Round Robin'
           WHEN pclocatortype = 'R' THEN 'Replicate'
           ELSE 'Unknown'
       END AS distribution,
       pg_attribute.attname attname
  FROM pg_class, pgxc_class, pg_attribute
 WHERE pg_class.oid = pgxc_class.pcrelid
   and pg_class.oid = pg_attribute.attrelid
   and pgxc_class.pcattnum = pg_attribute.attnum
UNION
SELECT pg_class.relname relation,
       CASE
           WHEN pclocatortype = 'H' THEN 'Hash'
           WHEN pclocatortype = 'M' THEN 'Modulo'
           WHEN pclocatortype = 'N' THEN 'Round Robin'
           WHEN pclocatortype = 'R' THEN 'Replicate'
           ELSE 'Unknown'
       END AS distribution,
       '- none -' attname
  FROM pg_class, pgxc_class, pg_attribute
 WHERE pg_class.oid = pgxc_class.pcrelid
   and pg_class.oid = pg_attribute.attrelid
   and pgxc_class.pcattnum = 0
;

COMMENT ON VIEW xc_table_distribution IS
'View to show distribution/replication attribute of the given table.';

[koichi@linker:scripts]$
------

Info for pgbench tables are as follows:
-----------
koichi=# select * from xc_table_distribution
koichi-# ;
     relation     | distribution | attname
------------------+--------------+----------
 t                | Hash         | a
 pgbench_branches | Hash         | bid
 y                | Modulo       | a
 pgbench_tellers  | Hash         | bid
 pgbench_accounts | Hash         | bid
 x                | Round Robin  | - none -
 pgbench_history  | Hash         | bid
 s                | Replicate    | - none -
(8 rows)

koichi=#
------

Hope it helps.

Regards;
----------
Koichi Suzuki

2012/11/6 Tatsuo Ishii <is...@po...>:
>> One suggestion. Because set bid value is relatively small, it may be
>> better to distribute tables using MODULO, not HASH.
>
> What is the distribution method when pgbench -k is used?
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
From: Tatsuo I. <is...@po...> - 2012-11-06 08:19:19
|
Oh, I thought coordinator node knows everything about data node. I did following: execute direct on datanode1 'select * from pgxc_class as x join pg_class as c on x.pcrelid = c.oid where c.relname = ''pgbench_accounts'''; on coordinator node and got 0 rows. Is that means pgbench_accounts table is not distributed? I did pgbench -k against coordinator node and I thought coordinator automatically distributes data on each data node. Was this wrong assumption? -- Tatsuo Ishii SRA OSS, Inc. Japan English: http://www.sraoss.co.jp/index_en.php Japanese: http://www.sraoss.co.jp > Because pgxc_class and pg_class are both local to each node, oid is > also local. To query correctly, they should be issued using EXECUTE > DIRECT statement. Each EXECUTE DIRECT will return the result local > to each node. > > Regards; > ---------- > Koichi Suzuki > > > 2012/11/6 Tatsuo Ishii <is...@po...>: >> I sent a query to coordinator node and got following result. Doesn't >> this means table "pgbench_accounts" distributed? >> >> test=# select * from pgxc_class as x join pg_class as c on x.pcrelid = c.oid where c.relname = 'pgbench_accounts'; >> -[ RECORD 1 ]---+----------------- >> pcrelid | 16413 >> pclocatortype | H >> pcattnum | 1 >> pchashalgorithm | 1 >> pchashbuckets | 4096 >> nodeoids | 16384 16385 >> relname | pgbench_accounts >> relnamespace | 2200 >> reltype | 16415 >> reloftype | 0 >> relowner | 10 >> relam | 0 >> relfilenode | 16419 >> reltablespace | 0 >> relpages | 0 >> reltuples | 0 >> reltoastrelid | 0 >> reltoastidxid | 0 >> relhasindex | t >> relisshared | f >> relpersistence | p >> relkind | r >> relnatts | 4 >> relchecks | 0 >> relhasoids | f >> relhaspkey | t >> relhasrules | f >> relhastriggers | f >> relhassubclass | f >> relfrozenxid | 78651 >> relacl | >> reloptions | {fillfactor=100} >> >>> I tested with the current head (sorry) and found pg_statio_user_tables is available. >>> >>> Because this is not distributed tables (not distributed or replicated), query will return only local information of these tables. Please use execute direct to query each node's statistics. >>> >>> Regards; >>> --- >>> Koichi Suzuki >>> >>> On Tue, 06 Nov 2012 15:02:37 +0900 (JST) >>> Tatsuo Ishii <is...@po...> wrote: >>> >>>> Hi, >>>> >>>> It seems I cannot read pg_statio_user_tables from either coordinator or >>>> data node. Is this a known limitation? I just would like to see how I >>>> can confirm UPDATEs distributed among data nodes. >>>> -- >>>> Tatsuo Ishii >>>> SRA OSS, Inc. Japan >>>> English: http://www.sraoss.co.jp/index_en.php >>>> Japanese: http://www.sraoss.co.jp >>>> >>>> ------------------------------------------------------------------------------ >>>> LogMeIn Central: Instant, anywhere, Remote PC access and management. >>>> Stay in control, update software, and manage PCs from one command center >>>> Diagnose problems and improve visibility into emerging IT issues >>>> Automate, monitor and manage. Do more in less time with Central >>>> http://p.sf.net/sfu/logmein12331_d2d >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >> >> ------------------------------------------------------------------------------ >> LogMeIn Central: Instant, anywhere, Remote PC access and management. 
>> Stay in control, update software, and manage PCs from one command center >> Diagnose problems and improve visibility into emerging IT issues >> Automate, monitor and manage. Do more in less time with Central >> http://p.sf.net/sfu/logmein12331_d2d >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Koichi S. <koi...@gm...> - 2012-11-06 08:09:30
|
Because pgxc_class and pg_class are both local to each node, oid is also local. To query correctly, they should be issued using EXECUTE DIRECT statement. Each EXECUTE DIRECT will return the result local to each node. Regards; ---------- Koichi Suzuki 2012/11/6 Tatsuo Ishii <is...@po...>: > I sent a query to coordinator node and got following result. Doesn't > this means table "pgbench_accounts" distributed? > > test=# select * from pgxc_class as x join pg_class as c on x.pcrelid = c.oid where c.relname = 'pgbench_accounts'; > -[ RECORD 1 ]---+----------------- > pcrelid | 16413 > pclocatortype | H > pcattnum | 1 > pchashalgorithm | 1 > pchashbuckets | 4096 > nodeoids | 16384 16385 > relname | pgbench_accounts > relnamespace | 2200 > reltype | 16415 > reloftype | 0 > relowner | 10 > relam | 0 > relfilenode | 16419 > reltablespace | 0 > relpages | 0 > reltuples | 0 > reltoastrelid | 0 > reltoastidxid | 0 > relhasindex | t > relisshared | f > relpersistence | p > relkind | r > relnatts | 4 > relchecks | 0 > relhasoids | f > relhaspkey | t > relhasrules | f > relhastriggers | f > relhassubclass | f > relfrozenxid | 78651 > relacl | > reloptions | {fillfactor=100} > >> I tested with the current head (sorry) and found pg_statio_user_tables is available. >> >> Because this is not distributed tables (not distributed or replicated), query will return only local information of these tables. Please use execute direct to query each node's statistics. >> >> Regards; >> --- >> Koichi Suzuki >> >> On Tue, 06 Nov 2012 15:02:37 +0900 (JST) >> Tatsuo Ishii <is...@po...> wrote: >> >>> Hi, >>> >>> It seems I cannot read pg_statio_user_tables from either coordinator or >>> data node. Is this a known limitation? I just would like to see how I >>> can confirm UPDATEs distributed among data nodes. >>> -- >>> Tatsuo Ishii >>> SRA OSS, Inc. Japan >>> English: http://www.sraoss.co.jp/index_en.php >>> Japanese: http://www.sraoss.co.jp >>> >>> ------------------------------------------------------------------------------ >>> LogMeIn Central: Instant, anywhere, Remote PC access and management. >>> Stay in control, update software, and manage PCs from one command center >>> Diagnose problems and improve visibility into emerging IT issues >>> Automate, monitor and manage. Do more in less time with Central >>> http://p.sf.net/sfu/logmein12331_d2d >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> > > ------------------------------------------------------------------------------ > LogMeIn Central: Instant, anywhere, Remote PC access and management. > Stay in control, update software, and manage PCs from one command center > Diagnose problems and improve visibility into emerging IT issues > Automate, monitor and manage. Do more in less time with Central > http://p.sf.net/sfu/logmein12331_d2d > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Tatsuo I. <is...@po...> - 2012-11-06 07:35:42
|
I sent a query to the coordinator node and got the following result. Doesn't
this mean the table "pgbench_accounts" is distributed?

test=# select * from pgxc_class as x join pg_class as c on x.pcrelid = c.oid where c.relname = 'pgbench_accounts';
-[ RECORD 1 ]---+-----------------
pcrelid         | 16413
pclocatortype   | H
pcattnum        | 1
pchashalgorithm | 1
pchashbuckets   | 4096
nodeoids        | 16384 16385
relname         | pgbench_accounts
relnamespace    | 2200
reltype         | 16415
reloftype       | 0
relowner        | 10
relam           | 0
relfilenode     | 16419
reltablespace   | 0
relpages        | 0
reltuples       | 0
reltoastrelid   | 0
reltoastidxid   | 0
relhasindex     | t
relisshared     | f
relpersistence  | p
relkind         | r
relnatts        | 4
relchecks       | 0
relhasoids      | f
relhaspkey      | t
relhasrules     | f
relhastriggers  | f
relhassubclass  | f
relfrozenxid    | 78651
relacl          |
reloptions      | {fillfactor=100}

> I tested with the current head (sorry) and found pg_statio_user_tables is available.
>
> Because this is not distributed tables (not distributed or replicated), query will return only local information of these tables. Please use execute direct to query each node's statistics.
>
> Regards;
> ---
> Koichi Suzuki
>
> On Tue, 06 Nov 2012 15:02:37 +0900 (JST)
> Tatsuo Ishii <is...@po...> wrote:
>
>> Hi,
>>
>> It seems I cannot read pg_statio_user_tables from either coordinator or
>> data node. Is this a known limitation? I just would like to see how I
>> can confirm UPDATEs distributed among data nodes.
>> --
>> Tatsuo Ishii
>> SRA OSS, Inc. Japan
>> English: http://www.sraoss.co.jp/index_en.php
>> Japanese: http://www.sraoss.co.jp
From: Koichi S. <ko...@in...> - 2012-11-06 07:26:41
|
I tested with the current head (sorry) and found pg_statio_user_tables is
available.

Because these are not distributed tables (neither distributed nor
replicated), a query will return only the information local to the node it
runs on. Please use EXECUTE DIRECT to query each node's statistics.

Regards;
---
Koichi Suzuki

On Tue, 06 Nov 2012 15:02:37 +0900 (JST)
Tatsuo Ishii <is...@po...> wrote:

> Hi,
>
> It seems I cannot read pg_statio_user_tables from either coordinator or
> data node. Is this a known limitation? I just would like to see how I
> can confirm UPDATEs distributed among data nodes.
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
From: Tatsuo I. <is...@po...> - 2012-11-06 06:03:08
|
Hi,

It seems I cannot read pg_statio_user_tables from either the coordinator or
a data node. Is this a known limitation? I just would like to see how I can
confirm that UPDATEs are distributed among data nodes.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
From: Tatsuo I. <is...@po...> - 2012-11-06 05:53:11
|
> One suggestion. Because set bid value is relatively small, it may be
> better to distribute tables using MODULO, not HASH.

What is the distribution method when pgbench -k is used?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
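For context on the MODULO suggestion quoted above: in Postgres-XC the distribution strategy is chosen in the table DDL, so the alternative would look roughly like the sketch below. The column list is the standard pgbench_accounts layout; the exact DDL that pgbench -k emits is not reproduced here.

----
-- Hash distribution on bid (what the view output earlier in the thread shows):
CREATE TABLE pgbench_accounts
    (aid int not null, bid int, abalance int, filler char(84))
    DISTRIBUTE BY HASH (bid);

-- The suggested alternative for a small set of bid values:
-- CREATE TABLE pgbench_accounts (...) DISTRIBUTE BY MODULO (bid);
----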
From: Michael P. <mic...@gm...> - 2012-11-06 02:04:56
|
> My fault. Sorry. (note the space between "--prefix=" and
> "/home/t-ishii...".
>
> BTW, I got "may be used uninitialized in this function" warnings while
> compiling. Are they safe?

Yes they are. The only reason why those things are not fixed yet is because
nobody took the time to seriously fix them all.
--
Michael Paquier
http://michael.otacoo.com
From: Tatsuo I. <is...@po...> - 2012-11-06 02:02:07
|
> On Tue, Nov 6, 2012 at 10:32 AM, Tatsuo Ishii <is...@po...> wrote: > >> Hi, >> >> I tried this with Postgres-XC 1.0.1. >> >> [t-ishii@localhost pgxc]$ ./configure --prefix= >> /home/t-ishii/work/Postgres-XC >> configure: WARNING: you should use --build, --host, --target >> configure: WARNING: invalid host type: /home/t-ishii/work/Postgres-XC >> checking build system type... Invalid configuration >> `/home/t-ishii/work/Postgres-XC': machine `/home/t-ishii/work/Postgres' not >> recognized >> configure: error: /bin/sh config/config.sub /home/t-ishii/work/Postgres-XC >> failed >> >> It seems Postgres-XC's configure has a problem with prefix dir >> including "-". Note that PostgreSQL 9.2.1. is fine with this kind of >> directory name. >> > That's funny I can't reproduce it... My fault. Sorry. (note the space between "--prefix=" and "/home/t-ishii...". BTW, I got "may be used uninitialized in this function" warnings while compiling. Are they safe? -- Tatsuo Ishii SRA OSS, Inc. Japan English: http://www.sraoss.co.jp/index_en.php Japanese: http://www.sraoss.co.jp |
From: Tatsuo I. <is...@po...> - 2012-11-06 01:58:17
|
Oops. It appeared my fault. Sorry for noise. -- Tatsuo Ishii SRA OSS, Inc. Japan English: http://www.sraoss.co.jp/index_en.php Japanese: http://www.sraoss.co.jp > Hi, > > I tried this with Postgres-XC 1.0.1. > > [t-ishii@localhost pgxc]$ ./configure --prefix= /home/t-ishii/work/Postgres-XC > configure: WARNING: you should use --build, --host, --target > configure: WARNING: invalid host type: /home/t-ishii/work/Postgres-XC > checking build system type... Invalid configuration `/home/t-ishii/work/Postgres-XC': machine `/home/t-ishii/work/Postgres' not recognized > configure: error: /bin/sh config/config.sub /home/t-ishii/work/Postgres-XC failed > > It seems Postgres-XC's configure has a problem with prefix dir > including "-". Note that PostgreSQL 9.2.1. is fine with this kind of > directory name. > > This is Linux 3.0.50 x86_64. > -- > Tatsuo Ishii > SRA OSS, Inc. Japan > English: http://www.sraoss.co.jp/index_en.php > Japanese: http://www.sraoss.co.jp > > ------------------------------------------------------------------------------ > LogMeIn Central: Instant, anywhere, Remote PC access and management. > Stay in control, update software, and manage PCs from one command center > Diagnose problems and improve visibility into emerging IT issues > Automate, monitor and manage. Do more in less time with Central > http://p.sf.net/sfu/logmein12331_d2d > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Michael P. <mic...@gm...> - 2012-11-06 01:57:42
|
On Tue, Nov 6, 2012 at 10:32 AM, Tatsuo Ishii <is...@po...> wrote: > Hi, > > I tried this with Postgres-XC 1.0.1. > > [t-ishii@localhost pgxc]$ ./configure --prefix= > /home/t-ishii/work/Postgres-XC > configure: WARNING: you should use --build, --host, --target > configure: WARNING: invalid host type: /home/t-ishii/work/Postgres-XC > checking build system type... Invalid configuration > `/home/t-ishii/work/Postgres-XC': machine `/home/t-ishii/work/Postgres' not > recognized > configure: error: /bin/sh config/config.sub /home/t-ishii/work/Postgres-XC > failed > > It seems Postgres-XC's configure has a problem with prefix dir > including "-". Note that PostgreSQL 9.2.1. is fine with this kind of > directory name. > That's funny I can't reproduce it... -- Michael Paquier http://michael.otacoo.com |
From: Koichi S. <koi...@gm...> - 2012-11-06 01:57:29
|
I found a space after --prefix=. Removing this space runs configure
without problem.

Regards;
----------
Koichi Suzuki

2012/11/6 Tatsuo Ishii <is...@po...>:
> Hi,
>
> I tried this with Postgres-XC 1.0.1.
>
> [t-ishii@localhost pgxc]$ ./configure --prefix= /home/t-ishii/work/Postgres-XC
> configure: WARNING: you should use --build, --host, --target
> configure: WARNING: invalid host type: /home/t-ishii/work/Postgres-XC
> checking build system type... Invalid configuration `/home/t-ishii/work/Postgres-XC': machine `/home/t-ishii/work/Postgres' not recognized
> configure: error: /bin/sh config/config.sub /home/t-ishii/work/Postgres-XC failed
>
> It seems Postgres-XC's configure has a problem with prefix dir
> including "-". Note that PostgreSQL 9.2.1 is fine with this kind of
> directory name.
>
> This is Linux 3.0.50 x86_64.
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp
From: Koichi S. <koi...@gm...> - 2012-11-06 01:40:34
|
One suggestion. Because set bid value is relatively small, it may be better to distribute tables using MODULO, not HASH. Regards; ---------- Koichi Suzuki 2012/11/6 Michael Paquier <mic...@gm...>: > > > On Tue, Nov 6, 2012 at 10:05 AM, Tatsuo Ishii <is...@po...> wrote: >> >> Hi, >> >> PostgreSQL Enterprise Consortium is planning to do a benchmark against >> Postges-XC. If we would use standard pgbench workload(pgbench default, >> -N, -S), what is a recommended portioning plan for pgbench_accounts? > > If you want to show up the scalability, I recommend that you use pgbench > with option -k for initialization and launching, which is an option that has > been added in the pgbench version of XC available in its source code. This > allows to to a benchmark test by using bid as a distribution key so this > minimizes the amount of 2PC done when write operations involve several nodes > in a transaction. > $ pgbench --help > Initialization options: > -k distribute by primary key branch id - bid > Benchmarking options: > -k query with default key and additional key branch id (bid) > > Depending on your cluster structure, I would also recommend you also to use > PREFERRED node with ALTER NODE (ALTER NODE nodename WITH (PREFERRED)) for > example with the Datanode that is on the same server as a Coordinator if you > use a structure of 1 Coordinator and 1 Datanode per server. This also > reduces the network load by having replicated table read being done on the > preferred node in priority. This is especially better if the node is local > of course. > > >> >> >> Any suggestions will be appreciated. >> -- >> Tatsuo Ishii >> SRA OSS, Inc. Japan >> English: http://www.sraoss.co.jp/index_en.php >> Japanese: http://www.sraoss.co.jp >> >> >> ------------------------------------------------------------------------------ >> LogMeIn Central: Instant, anywhere, Remote PC access and management. >> Stay in control, update software, and manage PCs from one command center >> Diagnose problems and improve visibility into emerging IT issues >> Automate, monitor and manage. Do more in less time with Central >> http://p.sf.net/sfu/logmein12331_d2d >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > > > -- > Michael Paquier > http://michael.otacoo.com > > ------------------------------------------------------------------------------ > LogMeIn Central: Instant, anywhere, Remote PC access and management. > Stay in control, update software, and manage PCs from one command center > Diagnose problems and improve visibility into emerging IT issues > Automate, monitor and manage. Do more in less time with Central > http://p.sf.net/sfu/logmein12331_d2d > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
From: Tatsuo I. <is...@po...> - 2012-11-06 01:33:27
|
Hi,

I tried this with Postgres-XC 1.0.1.

[t-ishii@localhost pgxc]$ ./configure --prefix= /home/t-ishii/work/Postgres-XC
configure: WARNING: you should use --build, --host, --target
configure: WARNING: invalid host type: /home/t-ishii/work/Postgres-XC
checking build system type... Invalid configuration `/home/t-ishii/work/Postgres-XC': machine `/home/t-ishii/work/Postgres' not recognized
configure: error: /bin/sh config/config.sub /home/t-ishii/work/Postgres-XC failed

It seems Postgres-XC's configure has a problem with a prefix dir
including "-". Note that PostgreSQL 9.2.1 is fine with this kind of
directory name.

This is Linux 3.0.50 x86_64.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
From: Michael P. <mic...@gm...> - 2012-11-06 01:20:37
|
On Tue, Nov 6, 2012 at 10:05 AM, Tatsuo Ishii <is...@po...> wrote:

> Hi,
>
> PostgreSQL Enterprise Consortium is planning to do a benchmark against
> Postgres-XC. If we would use the standard pgbench workload (pgbench
> default, -N, -S), what is a recommended partitioning plan for
> pgbench_accounts?

If you want to show the scalability, I recommend that you use pgbench with
option -k for initialization and launching, which is an option that has
been added in the pgbench version of XC available in its source code. This
allows you to do a benchmark test using bid as a distribution key, which
minimizes the amount of 2PC done when write operations involve several
nodes in a transaction.

$ pgbench --help
Initialization options:
  -k    distribute by primary key branch id - bid
Benchmarking options:
  -k    query with default key and additional key branch id (bid)

Depending on your cluster structure, I would also recommend using a
PREFERRED node with ALTER NODE (ALTER NODE nodename WITH (PREFERRED)), for
example with the Datanode that is on the same server as a Coordinator if
you use a structure of 1 Coordinator and 1 Datanode per server. This also
reduces the network load by having reads on replicated tables done on the
preferred node in priority. This is especially better if the node is local,
of course.

> Any suggestions will be appreciated.
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese: http://www.sraoss.co.jp

--
Michael Paquier
http://michael.otacoo.com
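A minimal sketch of the PREFERRED setting described above; dn1 is a placeholder for a datanode name defined with CREATE NODE, and the pgxc_node column names are quoted from memory, so treat them as an assumption.

----
-- Mark the datanode co-located with this coordinator as preferred,
-- so reads on replicated tables go to it first.
ALTER NODE dn1 WITH (PREFERRED);

-- Check the node catalog afterwards (column names assumed):
SELECT node_name, node_type, nodeis_preferred FROM pgxc_node;
----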
From: Tatsuo I. <is...@po...> - 2012-11-06 01:06:03
|
Hi,

PostgreSQL Enterprise Consortium is planning to do a benchmark against
Postgres-XC. If we use the standard pgbench workload (pgbench default,
-N, -S), what is a recommended partitioning plan for pgbench_accounts?

Any suggestions will be appreciated.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
From: Koichi S. <koi...@gm...> - 2012-11-06 00:55:59
|
Does anybody have performance data for DRBD with PostgreSQL? The following
reports that the throughput will drop to almost half:

http://archives.postgresql.org/pgsql-performance/2007-09/msg00118.php

The article was posted about five years ago, though.

Regards;
----------
Koichi Suzuki

2012/11/6 Vladimir Stavrinov <vst...@gm...>:
> On Mon, Nov 05, 2012 at 07:22:54PM +0300, Vladimir Stavrinov wrote:
>
>> solution is alliance of drbd + pacemaker + corosync. It is very
>
> Forgot to add in this team ipvs.
>
> *****************************
> ### Vladimir Stavrinov
> ### vst...@gm...
> *****************************
From: Vladimir S. <vst...@gm...> - 2012-11-05 16:26:48
|
On Mon, Nov 05, 2012 at 07:22:54PM +0300, Vladimir Stavrinov wrote:

> solution is alliance of drbd + pacemaker + corosync. It is very

Forgot to add ipvs to this team.

*****************************
### Vladimir Stavrinov
### vst...@gm...
*****************************
From: Vladimir S. <vst...@gm...> - 2012-11-05 16:23:02
|
On Tue, Oct 30, 2012 at 10:02 PM, Roger Mayes <rog...@gm...> wrote: > Odd that their list publishes our email addresses, and if I hit "reply" (in > gmail), it goes to the person who made the post rather than to the list. Click arrow on the right of "Reply" button, then click "Reply all". Or copy/past list address into CC: field. > gmail is more-or-less commercial software, isn't it? Lol. No matter commercial or not. More important is open source criteria. > It seems like our conversation has gone a bit afield of the list's topic, > anyway. We continue discussion about HA for XC. > So have you found any way to do get write scalability and high availability? I've already wrote here about this. I think at this time the best solution is alliance of drbd + pacemaker + corosync. It is very effective, reliable and rather simple setup for duplicating every node. I have such architecture running lot of vz boxes. But it is no matter what services are there running. The difference is only resource agent, i.e. start/stop script. BTW among others (most of them are web, application and database servers) there are postgresql running inside those vz boxes without any problems. |
From: Roger M. <rog...@gm...> - 2012-11-05 15:31:27
|
On Tue, Oct 30, 2012 at 4:58 AM, Vladimir Stavrinov <vst...@gm...>wrote: > On Mon, Oct 29, 2012 at 01:59:47PM -0700, Roger Mayes wrote: > > > Restoring a virt from an image is one way of restoring from a > > backup. It's a bit quicker and more thorough, unless you have > > Normally You are restoring database from sql dump. If You want to do > this with data files, then You should synchronize all of them over the > cluster. > > > our hosting environment is limited. They're inexpensive enough for > > us because they use commodity hardware, but using commodity > > But cloud infrastructure with all management tools and service itself > costs money too. One of my providers offered me cloud instead of hardware > rent. But calculation showed it's twice more expensive for the same > capacity. Though it may be not a common case due to specific > requirements, but nevertheless I think cloud is not suitable for > cluster. Though I see the convenience for customers may overcome other > factors. > > > hardware means they can only give us so much cpu, ram, io, and > > network bandwidth on a single host. Hence the need for clustering. > > First, You loose performance with additional level for virtual machine > (though not so much). And second, You can't upgrade kernel running on > hardware host, leaving it on providers own. But this impacts not only > performance, but reliability too. Though I see it is not interesting for > You as You are getting that capacity what You are paying for. > > > Security became the victim of "speed" meaning system performance, > > or "speed" meaning expediency as far as getting it setup and > > First is right. > > > running goes? Nobody should ever run database processes as the > > root user. And they should never open direct database access ports > > to the outside world. > > You are absolutely right here. > > > The users themselves don't expect their posts to stay around forever. > > First, they prefer delete unneeded data them self, than loose what they > need. > Second, they can tolerate to loose old data, but just this data You can > restore > from backup. But they don't want to loose recent data, that they certainly > lose > at system crash. With lost data You can lose Your users, not only as > records in > database, that was dropped on crash, but existent users as persons, who > don't > want to use Your service any more, as well as new potential users, who will > never uses Your service. > > > As long as the downtime is not within the first few hours after > > Taylor makes her post, it's not a huge deal. > > But You can not plan the time of Your crash. More over under peak load > chance > to fail increased. And low probability of crash doesn't means it never > occur. > It happens at the "best" time when You don't expect. The Chernobyl disaster > occured as result of overlapping of five events, every of which was low > probability. > > > I guess we have HA in the sense that we can continue to operate if > > one of our load balanced front end web servers goes down, as long > > as it doesn't happen right when we're at peak load. But our > > There are no problems with HA for web servers as such. There are number of > different solutions. But we are talking here about database and it is a > quite different problem. > > > memcached clusters and database clusters have never yet been really > > set up to continue running if we were to lose a node. Although it > > Are You sleeping well? I was already scared. 
> > > would help us a lot of we could, because then we could handle less > > risk-tolerant, higher dollar ventures without having to get into > > dealing with Oracle (which creates a lot of risk by itself, because > > of the high costs involved). > > As I mentioned early, even RAC may crash. Besides, it have no write > scalability > (Your lovely commercial software). In my practice in the past I did some > fault > tolerant setup based on Oracle Data Guard technology, but I was satisfied > with > it. > > OK, all Your arguments make some sense. I agree, in Your example there may > be > some tolerance for data lost and down time, in some sense and to some > degree. > But this is Your reasoning only. Can You imagine crash of Your system in > reality? I would say, if You need scalability means You are running a big > system. With big system You most likely will suffer big losses in case of > disaster. > > P.S. Your last message did not arrived to mailing list. If it is not > mistake, I > will leave it untouched. But If You want, I can bounce both Your message > and my > response to mailing list as is, without modification. It is what my open > source > software can, but Your lovely commercial can not to do. That's pretty funny. That's a good point that there are some things open source software can do that no currently available well-known commercial software can. The lack of write scalability in Oracle is a big one. Odd that their list publishes our email addresses, and if I hit "reply" (in gmail), it goes to the person who made the post rather than to the list. That's not the way most other email lists work. I haven't used any commercial software for several years now, unless you count gmail. I guess gmail is more-or-less commercial software, isn't it? Lol. It seems like our conversation has gone a bit afield of the list's topic, anyway. With virtual cloud servers, we can rent a bunch of hardware for the space of a few hours, while most of the time renting such a small amount of hardware that the cost is almost nothing. No I don't sleep well, and I'd like to get out of my situation as soon as possible, but at least I have a little food on the table for the moment. So have you found any way to do get write scalability and high availability? I've not yet thoroughly investigated all of the NoSQL systems yet. > ----- > -- > > *************************** > ## Vladimir Stavrinov > ## vst...@gm... > *************************** > > |
From: Vladimir S. <vst...@gm...> - 2012-11-05 15:31:24
|
On Mon, Oct 29, 2012 at 01:59:47PM -0700, Roger Mayes wrote: > Restoring a virt from an image is one way of restoring from a > backup. It's a bit quicker and more thorough, unless you have Normally You are restoring database from sql dump. If You want to do this with data files, then You should synchronize all of them over the cluster. > our hosting environment is limited. They're inexpensive enough for > us because they use commodity hardware, but using commodity But cloud infrastructure with all management tools and service itself costs money too. One of my providers offered me cloud instead of hardware rent. But calculation showed it's twice more expensive for the same capacity. Though it may be not a common case due to specific requirements, but nevertheless I think cloud is not suitable for cluster. Though I see the convenience for customers may overcome other factors. > hardware means they can only give us so much cpu, ram, io, and > network bandwidth on a single host. Hence the need for clustering. First, You loose performance with additional level for virtual machine (though not so much). And second, You can't upgrade kernel running on hardware host, leaving it on providers own. But this impacts not only performance, but reliability too. Though I see it is not interesting for You as You are getting that capacity what You are paying for. > Security became the victim of "speed" meaning system performance, > or "speed" meaning expediency as far as getting it setup and First is right. > running goes? Nobody should ever run database processes as the > root user. And they should never open direct database access ports > to the outside world. You are absolutely right here. > The users themselves don't expect their posts to stay around forever. First, they prefer delete unneeded data them self, than loose what they need. Second, they can tolerate to loose old data, but just this data You can restore from backup. But they don't want to loose recent data, that they certainly lose at system crash. With lost data You can lose Your users, not only as records in database, that was dropped on crash, but existent users as persons, who don't want to use Your service any more, as well as new potential users, who will never uses Your service. > As long as the downtime is not within the first few hours after > Taylor makes her post, it's not a huge deal. But You can not plan the time of Your crash. More over under peak load chance to fail increased. And low probability of crash doesn't means it never occur. It happens at the "best" time when You don't expect. The Chernobyl disaster occured as result of overlapping of five events, every of which was low probability. > I guess we have HA in the sense that we can continue to operate if > one of our load balanced front end web servers goes down, as long > as it doesn't happen right when we're at peak load. But our There are no problems with HA for web servers as such. There are number of different solutions. But we are talking here about database and it is a quite different problem. > memcached clusters and database clusters have never yet been really > set up to continue running if we were to lose a node. Although it Are You sleeping well? I was already scared. > would help us a lot of we could, because then we could handle less > risk-tolerant, higher dollar ventures without having to get into > dealing with Oracle (which creates a lot of risk by itself, because > of the high costs involved). As I mentioned early, even RAC may crash. 
Besides, it have no write scalability (Your lovely commercial software). In my practice in the past I did some fault tolerant setup based on Oracle Data Guard technology, but I was satisfied with it. OK, all Your arguments make some sense. I agree, in Your example there may be some tolerance for data lost and down time, in some sense and to some degree. But this is Your reasoning only. Can You imagine crash of Your system in reality? I would say, if You need scalability means You are running a big system. With big system You most likely will suffer big losses in case of disaster. P.S. Your last message did not arrived to mailing list. If it is not mistake, I will leave it untouched. But If You want, I can bounce both Your message and my response to mailing list as is, without modification. It is what my open source software can, but Your lovely commercial can not to do. -- *************************** ## Vladimir Stavrinov ## vst...@gm... *************************** |
From: Roger M. <rog...@gm...> - 2012-11-05 15:31:22
|
On Mon, Oct 29, 2012 at 4:56 AM, Vladimir Stavrinov <vst...@gm...>wrote: > On Sun, Oct 28, 2012 at 5:35 AM, Shavais Zarathustra <sh...@gm...> > wrote: > > > Well, the point would be to get a replacement server going, for the > server > > that died, with all the software installed and the configuration set up, > > after which my hope has been that we'd be able to reinitialize the > database > > on that host and perform some kind of recovery process to get it back up > and > > working within the cluster. But maybe that requires some of the HA > features > > that you're talking about that XC doesn't have working yet? > > With HA there will no down time, so You will have enough time for > recovering failed node. Without HA You should recreate cluster from > scratch from backup. In both cases virtual machine helps not so much. > > Restoring a virt from an image is one way of restoring from a backup. It's a bit quicker and more thorough, unless you have something like Legato. > clustering stuff, together with Oracle's database clustering, which was > all > > I heard a story where whole bank was crashed on RAC. Even HA did not help. > > > of a brave new/old world for me, with all this poor man's Open Source > stuff, > > "poor man's" ? Great! > > There are a lot of sort of dirty gems scattered about the muddy, sea weed and jelly fish-litered beach of Open Source software, which with a bit of manual buffing, are quite beautiful in their particular ways - but that landscape is not to be compared with the pristine, opulent castles and treasure rooms of commercial software. But then, neither is the price tag. > > Well, the hardware they have at these pseudo-cloud datacenters is all > > What You are describing here and below is cloud infrastructure that > itself has scalability and HA, what cluster must have too. So what for > do You want one inside other? You loose efficiency and money. > > As I explained before, the scalability offered by a single host in our hosting environment is limited. They're inexpensive enough for us because they use commodity hardware, but using commodity hardware means they can only give us so much cpu, ram, io, and network bandwidth on a single host. Hence the need for clustering. > >> logs should be handled on every node, it is not so simple. > > > > Yeah, I was thinking this was probably the case. So what I'm not sure > of is > > what you do after your datanode has been recovered as far as you can get > it > > recovered using the usual single database recovery techniques - how do > you > > Without HA at this point down time started again. And if You succeed > in recovering at some point in time where this node will consistent > with cluster, then You will be happy, otherwise You will recreate Your > cluster from scratch from backup again. > > > Unix Admin "is only as good as their backups". That's certainly the > truth. > > No doubt, definitely! Backup always and everywhere. But with backup > You can recover Your system at some point in past. So you have both > joys: down time and data lost in this case too. Backup is not > alternative for HA and vice verse: we need them both. > > > But I'm not concerned about the security of my DBA role, in fact I've > been > > One developer boasted me how he can do database user becomes unix user > root and shuts down the system. The answer on my horror was something > similar what we are reading here: the security there becomes the > victim of speed. And it was very serious and responsible institution > where this database was running. 
Security became the victim of "speed" meaning system performance, or "speed" meaning expediency as far as getting it setup and running goes? Nobody should ever run database processes as the root user. And they should never open direct database access ports to the outside world. > > need a throat to cut before I can cut it. The risk of a crash is small > and > > tolerable, but if I'm not convinced I'll be able to handle the load - > that's > > a show stopper. > > If You > need cluster means You are doing something that require HA. What data You are processing that requires scalability? As I mentioned before - one example has been forum posts from Taylor Swift fans in reaction to Taylor Swift making a Facebook post or a Twitter Tweet. If we lose that data at some point in the future, it's not anywhere near as important as being able to handle a whole lot of people making and reading those posts all at once. In fact, we eventually delete the data our selves. That's just one example. > Is it garbage You willing to loose? The users themselves don't expect their posts to stay around forever. > What are those business processes that make Your > heavy load? Taylor Swift makes a Facebook post. 250 thousand Taylor Swift fans from all over the country immediately jump onto our system and start messaging each other, and posting videos, pictures, etc., and over the course of the next several hours, several million other people eventually find their way to the site. We collect analytics on all that traffic and make (general trend) reports to various commercial interests. > Are they nonsense that can tolerate down time? As long as the downtime is not within the first few hours after Taylor makes her post, it's not a huge deal. > Please > tell me, do You have cluster that running without HA? Or do you know > such? > > We have HA in the sense that apart from the backplanes, there's no single point of failure in our hardware setup. And, I guess we have HA in the sense that we can continue to operate if one of our load balanced front end web servers goes down, as long as it doesn't happen right when we're at peak load. But our memcached clusters and database clusters have never yet been really set up to continue running if we were to lose a node. Although it would help us a lot of we could, because then we could handle less risk-tolerant, higher dollar ventures without having to get into dealing with Oracle (which creates a lot of risk by itself, because of the high costs involved). |
From: Tatsuo I. <is...@po...> - 2012-11-02 02:27:26
|
> I am sure you already know that a Postgres-XC cluster needs a unique GTM
> (global transaction manager) to feed global transaction ID values and
> global snapshots to each node of the cluster; the same GTM is used for
> the feed of sequence values in a global way.
>
> As a start, if you want to have a look at how it is managed in XC, for
> the postgres backend side have a look at nextval_internal in
> src/backend/commands/sequence.c; we put in the PG code a hook called
> GetNextValGTM to get the next global sequence value from GTM. There are
> similar hooks for CREATE/ALTER/DROP SEQUENCE. All the backend-side client
> APIs related to GTM are located in src/backend/access/transam/gtm.c.
>
> On the GTM side, I would recommend you to have a look at src/gtm/main,
> mainly gtm_seq.c, which contains all the sequence-related APIs.
> Also, the message types exchanged between GTM and backend nodes are
> listed in src/include/gtm/gtm_msg.h. MSG_SEQUENCE_GET_NEXT is, for
> example, the message type used to get the next global value of a
> sequence. Look at how messages of this type are managed on the backend
> and GTM side and you will easily understand the process flow.

Thanks for a quick response. I will take a look at the source code.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp
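At the SQL level, the global behaviour described above can be illustrated with an ordinary sequence; coord1 and coord2 below are placeholder names for two coordinators of the same cluster, and the exact values returned depend on sequence cache settings.

----
-- Session on coordinator coord1:
CREATE SEQUENCE global_seq;
SELECT nextval('global_seq');   -- e.g. 1

-- Session on coordinator coord2, same cluster:
SELECT nextval('global_seq');   -- a later value, never 1 again,
                                -- because both sessions are fed by the GTM
----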