From: Nikhil S. <ni...@st...> - 2013-01-31 12:09:23
Talk about barking up the wrong tree ;) I was working with the PGXC 1.0 tree, so the patch sent out earlier applies against it. PFA, another patch which applies against MASTER.

Regards,
Nikhils

On Thu, Jan 31, 2013 at 5:16 PM, Nikhil Sontakke <ni...@st...> wrote:
> Additionally, I also fixed a bug in invoking gtm_standby via gtm_ctl.
> It would blindly append gtm_standby as the name of the binary to
> invoke.
>
> Regards,
> Nikhils
>
> On Thu, Jan 31, 2013 at 5:11 PM, Nikhil Sontakke <ni...@st...> wrote:
>> Hi,
>>
>> PFA, a patch which fixes this issue in the master git repo.
>>
>> The behavior is now similar to pg_ctl. Additionally, version
>> checking was missing in gtm_ctl, gtm and gtm_proxy; I added it
>> as well. The binaries will now also be checked for version
>> compatibility, which is always a good thing.
>>
>> Regards,
>> Nikhils
>>
>> On Thu, Jan 31, 2013 at 12:40 PM, Nikhil Sontakke <ni...@st...> wrote:
>>>>
>>>> +1. It will be better if we can make it consistent with pg_ctl. We should
>>>> be able to search for the gtm/proxy binary in the same place where
>>>> gtm_ctl resides, no?
>>>>
>>>
>>> Yes. The pg_ctl code will provide enough inspiration for a generic
>>> binary search based on relative paths as well.
>>>
>>> Regards,
>>> Nikhils
>>> --
>>> StormDB - http://www.stormdb.com
>>> The Database Cloud
>>> Postgres-XC Support and Service
>>
>>
>> --
>> StormDB - http://www.stormdb.com
>> The Database Cloud
>> Postgres-XC Support and Service
>
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Nikhil S. <ni...@st...> - 2013-01-31 11:46:29
Additionally, I also fixed a bug in invoking gtm_standby via gtm_ctl. It would blindly append gtm_standby as the name of the binary to invoke.

Regards,
Nikhils

On Thu, Jan 31, 2013 at 5:11 PM, Nikhil Sontakke <ni...@st...> wrote:
> Hi,
>
> PFA, a patch which fixes this issue in the master git repo.
>
> The behavior is now similar to pg_ctl. Additionally, version
> checking was missing in gtm_ctl, gtm and gtm_proxy; I added it
> as well. The binaries will now also be checked for version
> compatibility, which is always a good thing.
>
> Regards,
> Nikhils
>
> On Thu, Jan 31, 2013 at 12:40 PM, Nikhil Sontakke <ni...@st...> wrote:
>>>
>>> +1. It will be better if we can make it consistent with pg_ctl. We should
>>> be able to search for the gtm/proxy binary in the same place where
>>> gtm_ctl resides, no?
>>>
>>
>> Yes. The pg_ctl code will provide enough inspiration for a generic
>> binary search based on relative paths as well.
>>
>> Regards,
>> Nikhils
>> --
>> StormDB - http://www.stormdb.com
>> The Database Cloud
>> Postgres-XC Support and Service
>
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Nikhil S. <ni...@st...> - 2013-01-31 11:42:06
Hi,

PFA, a patch which fixes this issue in the master git repo.

The behavior is now similar to pg_ctl. Additionally, version checking was missing in gtm_ctl, gtm and gtm_proxy; I added it as well. The binaries will now also be checked for version compatibility, which is always a good thing.

Regards,
Nikhils

On Thu, Jan 31, 2013 at 12:40 PM, Nikhil Sontakke <ni...@st...> wrote:
>>
>> +1. It will be better if we can make it consistent with pg_ctl. We should
>> be able to search for the gtm/proxy binary in the same place where
>> gtm_ctl resides, no?
>>
>
> Yes. The pg_ctl code will provide enough inspiration for a generic
> binary search based on relative paths as well.
>
> Regards,
> Nikhils
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Christophe Le R. <chr...@fr...> - 2013-01-31 09:44:41
Ok, thank you for your answers.

Kriss_fr

From: Michael Paquier [mailto:mic...@gm...]
Sent: Thursday, January 31, 2013 10:33
To: Christophe Le Roux
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Table distribution/replication and other things

On Thu, Jan 31, 2013 at 6:10 PM, Christophe Le Roux <chr...@fr...> wrote:

Nice trick! But too complex for my brain :) (I don't understand why tab2 is distributed by replication.)

What do you think about this (a possible new feature?):
- The possibility to create a group, like:
CREATE GROUPSERVER group1 ('serverA','serverB');
CREATE GROUPSERVER group2 ('serverC','serverD');

This already exists and is called a NODE GROUP; have a look here:
http://postgres-xc.github.com/1_0/sql-createnodegroup.html

- The possibility to have a table both distributed and replicated, like:
CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY HASH(col1) ON GROUPSERVER 'group1' AND REPLICATE ON GROUPSERVER 'group2';
If one of these group servers does not have the same number of servers, we would get a warning like: "WARNING: GROUPSERVER groupX does not have the same number of servers as GROUPSERVER groupY, so we can't distribute by hash, modulo or RR, but the replication on GROUPSERVER groupY is done."

This is like RAID 10 for HDDs: the table is split between serverA and serverB and mirrored on serverC and serverD. If serverA, serverB, or both fail, the table is still available on serverC and serverD.

Is it possible?

Not now. At least the trick I gave before works the same with vanilla Postgres...
This feature is a little bit too premature for the existing XC infrastructure. Such complex things would become possible by first reworking the partitioning of XC to make it more integrated with Postgres, I think.
--
Michael Paquier
http://michael.otacoo.com
From: Pavan D. <pav...@gm...> - 2013-01-31 09:42:33
On Thu, Jan 31, 2013 at 3:03 PM, Michael Paquier <mic...@gm...> wrote:
>
> On Thu, Jan 31, 2013 at 6:10 PM, Christophe Le Roux wrote:
>>
>> This is like RAID 10 for HDDs: the table is split between serverA and
>> serverB and mirrored on serverC and serverD. If serverA, serverB, or
>> both fail, the table is still available with serverC and serverD.
>>
>> Is it possible?
>
> Not now. At least the trick I gave before works the same with vanilla
> Postgres...
> This feature is a little bit too premature for the existing XC
> infrastructure. Such complex things would become possible by first
> reworking the partitioning of XC to make it more integrated with
> Postgres, I think.

Why so? I understand it's a difficult problem, but I don't see how integration with Postgres will make it any easier, compared to any other distribution mechanism we have implemented so far. I know we once talked about supporting hybrid distribution, but I don't quite remember if we ever concluded those discussions. And this has been asked multiple times by now, so we should see if it can be done.

Thanks,
Pavan

--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee
From: Michael P. <mic...@gm...> - 2013-01-31 09:33:09
On Thu, Jan 31, 2013 at 6:10 PM, Christophe Le Roux <chr...@fr...> wrote:
> Nice trick! But too complex for my brain :) (I don't understand why tab2
> is distributed by replication.)
>
> What do you think about this (a possible new feature?):
> - The possibility to create a group, like:
> CREATE GROUPSERVER group1 ('serverA','serverB');
> CREATE GROUPSERVER group2 ('serverC','serverD');

This already exists and is called a NODE GROUP; have a look here:
http://postgres-xc.github.com/1_0/sql-createnodegroup.html

> - The possibility to have a table both distributed and replicated, like:
> CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY HASH(col1) ON
> GROUPSERVER 'group1' AND REPLICATE ON GROUPSERVER 'group2';
> If one of these group servers does not have the same number of servers,
> we would get a warning like: "WARNING: GROUPSERVER groupX does not have
> the same number of servers as GROUPSERVER groupY, so we can't distribute
> by hash, modulo or RR, but the replication on GROUPSERVER groupY is done."
>
> This is like RAID 10 for HDDs: the table is split between serverA and
> serverB and mirrored on serverC and serverD. If serverA, serverB, or both
> fail, the table is still available on serverC and serverD.
>
> Is it possible?

Not now. At least the trick I gave before works the same with vanilla Postgres...
This feature is a little bit too premature for the existing XC infrastructure. Such complex things would become possible by first reworking the partitioning of XC to make it more integrated with Postgres, I think.
--
Michael Paquier
http://michael.otacoo.com
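To make the node-group pointer above concrete, here is a minimal sketch of what the linked page describes. The node names, group name, hosts, and ports are made up for illustration, and the exact TO GROUP spelling should be checked against the CREATE TABLE page of the XC version in use:

-- Register two datanodes, then bundle them into a named node group
-- (dn1/dn2/group1 are hypothetical names):
CREATE NODE dn1 WITH (TYPE = 'datanode', HOST = 'serverA', PORT = 15432);
CREATE NODE dn2 WITH (TYPE = 'datanode', HOST = 'serverB', PORT = 15432);
CREATE NODE GROUP group1 WITH dn1, dn2;

-- A table can then be placed on that subset of nodes:
CREATE TABLE tab (col1 int, col2 int)
    DISTRIBUTE BY HASH(col1) TO GROUP group1;

This covers the "create a group" half of the proposal; distributing on one group while replicating on another is the half XC does not support, as noted above.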
From: Christophe Le R. <chr...@fr...> - 2013-01-31 09:10:12
Nice trick! But too complex for my brain :) (I don't understand why tab2 is distributed by replication.)

What do you think about this (a possible new feature?):
- The possibility to create a group, like:
CREATE GROUPSERVER group1 ('serverA','serverB');
CREATE GROUPSERVER group2 ('serverC','serverD');
- The possibility to have a table both distributed and replicated, like:
CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY HASH(col1) ON GROUPSERVER 'group1' AND REPLICATE ON GROUPSERVER 'group2';
If one of these group servers does not have the same number of servers, we would get a warning like: "WARNING: GROUPSERVER groupX does not have the same number of servers as GROUPSERVER groupY, so we can't distribute by hash, modulo or RR, but the replication on GROUPSERVER groupY is done."

This is like RAID 10 for HDDs: the table is split between serverA and serverB and mirrored on serverC and serverD. If serverA, serverB, or both fail, the table is still available on serverC and serverD.

Is it possible?

Kriss_fr

From: Michael Paquier [mailto:mic...@gm...]
Sent: Wednesday, January 30, 2013 19:23
To: Christophe Le Roux
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Table distribution/replication and other things

On Thu, Jan 31, 2013 at 3:12 AM, Michael Paquier <mic...@gm...> wrote:

On Wed, Jan 30, 2013 at 8:01 PM, Christophe Le Roux <chr...@fr...> wrote:

Can we distribute and replicate a table? (A table distributed on 3 datanodes (for performance) and replicated on 3 other datanodes (for availability without downtime) - I'm not talking about the hot standby or streaming replication included in standard PostgreSQL.)

This is not supported directly, but you could try something like the following; perhaps it will work, it's just an idea. It uses the internal Postgres partitioning mechanism as a base:
CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY REPLICATION;
CREATE TABLE tab1 (CHECK col1 >= 100 AND CHECK col1 < 200) INHERITS (tab) DISTRIBUTE BY HASH(col1);
CREATE TABLE tab2 (CHECK col1 >= 200 AND CHECK col1 < 300) INHERITS (tab) DISTRIBUTE BY REPLICATION;
Putting an index on column col1 would be good also. The secret here is to have the parent table replicated.

I forgot something here: you will also need to create a rule or a trigger to redirect INSERT/UPDATE/DELETE to the correct subtable. Have a look at the postgres documentation here:
http://www.postgresql.org/docs/9.2/static/ddl-partitioning.html
--
Michael Paquier
http://michael.otacoo.com
From: Koichi S. <koi...@gm...> - 2013-01-31 07:26:37
2013/1/30 Filip Rembiałkowski <fil...@gm...>:
>
> On Mon, Jan 28, 2013 at 7:23 PM, Koichi Suzuki <koi...@gm...> wrote:
>>
>> I have not looked into it in much detail, so this is just a guess.
>>
>> XC's datanode cluster contains three more system catalogues,
>> pgxc_class, pgxc_node and pgxc_nodegroup, and also contains many more
>> built-in functions which have to be hard-coded.
>
> Only adding new objects is fine. I think it does not prevent this kind of
> migration.
>
> I can think of creating the new objects in pg_catalog just before
> switching to Postgres-XC.
>
> Does postgres-xc modify any existing postgres catalog tables? (adding
> columns, changing types, any structure changes)?

Yes, pg_aggregate has an additional aggcollectfn column.

> Does postgres-xc modify any existing postgres catalog functions? (function
> signatures and/or internal logic)

Yes, many. grep PGXC src/include/catalog/*.h will show you the additions. An important function is pgxc_is_committed, which is used by pgxc_clean to clean up outstanding 2PC transactions after a node crash.

> Do you think it's possible to map/translate existing pg_class to
> pgxc_class, existing TXIDs to GXIDs, etc?

You can translate TXIDs to GXIDs. GTM should start with the corresponding GXID value.

> I understand that pointing "empty" GTMs and Coordinators to this new
> special datanode would simply not work?
> We'd have to initialize them in a special way?

I have not done such an experiment yet, but I don't think it works, because pg_aggregate now has the additional aggcollectfn column, and datanode (old PG database) aggregates should return intermediate results according to this new catalog. Fortunately, no OIDs are used in coordinator/datanode communication. It is libpq with GXID and global snapshot supply.

Regards;
---
Koichi

>
>> Even though the other parts of the database cluster have no serious
>> difference, the above could be very serious.
>
> Answers to the above questions would help me understand _why_ it is
> serious.
>
>> It looks a good idea to allow XC to start with only a one-node
>> configuration, and then add nodes as needed.
>
> Yes, that's my idea also - I see there are ALTER TABLE commands which
> allow one to replicate/redistribute.
>
>> Other than that, you can use pg_dump and pg_restore to move all your
>> data from PostgreSQL to Postgres-XC.
>
> I know it's the standard procedure - but in my case this would take tens
> of hours, and I'm looking for other ways.
>
> Thank you,
> Filip
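For anyone comparing catalogs along these lines, the XC-specific metadata Koichi mentions can be inspected from a coordinator with plain SQL. A minimal sketch, assuming the XC 1.0 catalog column names from memory (they may differ between versions, so check \d pgxc_node and \d pgxc_class first):

-- List the configured nodes:
SELECT node_name, node_type, node_host, node_port FROM pgxc_node;

-- Show each distributed table with its locator type
-- ('R' for replicated, 'H' for hash, and so on):
SELECT c.relname, x.pclocatortype
FROM pgxc_class x
JOIN pg_class c ON c.oid = x.pcrelid;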
From: Nikhil S. <ni...@st...> - 2013-01-31 07:10:31
> +1. It will be better if we can make it consistent with pg_ctl. We should
> be able to search for the gtm/proxy binary in the same place where
> gtm_ctl resides, no?

Yes. The pg_ctl code will provide enough inspiration for a generic binary search based on relative paths as well.

Regards,
Nikhils
--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Pavan D. <pav...@gm...> - 2013-01-31 06:27:08
On 31-Jan-2013, at 11:43 AM, Filip Rembiałkowski <fil...@gm...> wrote:

> On Thu, Jan 31, 2013 at 12:11 AM, Nikhil Sontakke <ni...@st...> wrote:
>> Forgot to add. Yes, I also find it irritating to have to explicitly
>> specify "-p". Will submit a patch for this.
>
> Thank you.
> It would be consistent to follow the pg_ctl approach.

+1. It will be better if we can make it consistent with pg_ctl. We should be able to search for the gtm/proxy binary in the same place where gtm_ctl resides, no?

Thanks,
Pavan
From: Filip R. <fil...@gm...> - 2013-01-31 06:13:43
On Thu, Jan 31, 2013 at 12:11 AM, Nikhil Sontakke <ni...@st...> wrote:

> Forgot to add. Yes, I also find it irritating to have to explicitly
> specify "-p". Will submit a patch for this.

Thank you.
It would be consistent to follow the pg_ctl approach.

regards, Filip
From: Michael P. <mic...@gm...> - 2013-01-31 06:13:34
On Thu, Jan 31, 2013 at 3:11 PM, Nikhil Sontakke <ni...@st...> wrote:

> Forgot to add. Yes, I also find it irritating to have to explicitly
> specify "-p". Will submit a patch for this.

+1. Simplicity is always a good solution.
--
Michael Paquier
http://michael.otacoo.com
From: Nikhil S. <ni...@st...> - 2013-01-31 06:12:11
Forgot to add. Yes, I also find it irritating to have to explicitly specify "-p". Will submit a patch for this.

Regards,
Nikhils

On Thu, Jan 31, 2013 at 11:29 AM, Nikhil Sontakke <ni...@st...> wrote:
> Hi Filip,
>
>> filip@srv:~$ pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 start
>> server starting
>> filip@srv:~$ sh: 1: gtm: not found
>
> You can either include the directory containing gtm in your search
> path or provide it explicitly to gtm_ctl by using -p:
>
> pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 -p `pwd`/pgxc/bin start
>
> HTH,
> Nikhils
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Nikhil S. <ni...@st...> - 2013-01-31 05:59:50
Hi Filip,

> filip@srv:~$ pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 start
> server starting
> filip@srv:~$ sh: 1: gtm: not found

You can either include the directory containing gtm in your search path or provide it explicitly to gtm_ctl by using -p:

pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 -p `pwd`/pgxc/bin start

HTH,
Nikhils
--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Filip R. <fil...@gm...> - 2013-01-30 18:38:40
After configuring and installing pg-xc from git trunk with

./configure --prefix=$HOME/pgxc && make && make install

I try to run the GTM:

filip@srv:~$ pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 start
server starting
filip@srv:~$ sh: 1: gtm: not found

pg_ctl does not have this problem. It knows that it should call $HOME/pgxc/bin/postgres.

PS. Please advise - is it appropriate to file a bug report on SourceForge for such a minor issue?
From: Michael P. <mic...@gm...> - 2013-01-30 18:23:13
On Thu, Jan 31, 2013 at 3:12 AM, Michael Paquier <mic...@gm...> wrote:

> On Wed, Jan 30, 2013 at 8:01 PM, Christophe Le Roux <chr...@fr...> wrote:
>
>> Can we distribute and replicate a table? (A table distributed on 3
>> datanodes (for performance) and replicated on 3 other datanodes (for
>> availability without downtime) - I'm not talking about the hot standby
>> or streaming replication included in standard PostgreSQL.)
>
> This is not supported directly, but you could try something like the
> following; perhaps it will work, it's just an idea. It uses the internal
> Postgres partitioning mechanism as a base:
> CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY REPLICATION;
> CREATE TABLE tab1 (CHECK col1 >= 100 AND CHECK col1 < 200) INHERITS (tab)
> DISTRIBUTE BY HASH(col1);
> CREATE TABLE tab2 (CHECK col1 >= 200 AND CHECK col1 < 300) INHERITS (tab)
> DISTRIBUTE BY REPLICATION;
> Putting an index on column col1 would be good also. The secret here is to
> have the parent table replicated.

I forgot something here: you will also need to create a rule or a trigger to redirect INSERT/UPDATE/DELETE to the correct subtable. Have a look at the postgres documentation here:
http://www.postgresql.org/docs/9.2/static/ddl-partitioning.html
--
Michael Paquier
http://michael.otacoo.com
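To spell out the rule-or-trigger step, here is the kind of INSERT-redirect trigger the linked partitioning page describes, adapted to the tab/tab1/tab2 example above. A sketch only: the function and trigger names are made up, and trigger support on the XC of that era was limited, so test it (or use a rule instead) before relying on it:

CREATE OR REPLACE FUNCTION tab_insert_redirect() RETURNS trigger AS $$
BEGIN
    -- Route each row to the child whose CHECK constraint it satisfies.
    IF NEW.col1 >= 100 AND NEW.col1 < 200 THEN
        INSERT INTO tab1 VALUES (NEW.*);
    ELSIF NEW.col1 >= 200 AND NEW.col1 < 300 THEN
        INSERT INTO tab2 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'col1 value % is out of range', NEW.col1;
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tab_insert_trigger
    BEFORE INSERT ON tab
    FOR EACH ROW EXECUTE PROCEDURE tab_insert_redirect();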
From: Michael P. <mic...@gm...> - 2013-01-30 18:12:18
On Wed, Jan 30, 2013 at 8:01 PM, Christophe Le Roux <chr...@fr...> wrote:

> Can we distribute and replicate a table? (A table distributed on 3
> datanodes (for performance) and replicated on 3 other datanodes (for
> availability without downtime) - I'm not talking about the hot standby
> or streaming replication included in standard PostgreSQL.)

This is not supported directly, but you could try something like the following; perhaps it will work, it's just an idea. It uses the internal Postgres partitioning mechanism as a base:
CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY REPLICATION;
CREATE TABLE tab1 (CHECK col1 >= 100 AND CHECK col1 < 200) INHERITS (tab) DISTRIBUTE BY HASH(col1);
CREATE TABLE tab2 (CHECK col1 >= 200 AND CHECK col1 < 300) INHERITS (tab) DISTRIBUTE BY REPLICATION;
Putting an index on column col1 would be good also. The secret here is to have the parent table replicated.
--
Michael Paquier
http://michael.otacoo.com
From: Filip R. <fil...@gm...> - 2013-01-30 14:43:56
Big up for the questions that Kriss_fr asked. At our company we are considering a very similar setup.

One of the main advantages that draws us into even considering Postgres-XC is the possibility of having 2 datacenters in an "active-active" layout.

Thanks,
Filip

On Wed, Jan 30, 2013 at 7:13 AM, Christophe Le Roux <chr...@fr...> wrote:

> Hi Pavan,
>
> The network between the 2 DCs is 1Gb/s with low latency (like a LAN), so
> this is not an issue.
>
> Ashutosh says that if B or C fails (all tables of all databases are
> "DISTRIBUTE BY REPLICATION" - B and C should have the same data),
> Postgres-XC fails (for this release). Is it on the roadmap? Or already on
> the master branch of github?
>
> Thanks.
>
> Kriss_fr
>
> -----Original Message-----
> From: Pavan Deolasee [mailto:pav...@gm...]
> Sent: Wednesday, January 30, 2013 13:56
> To: Christophe Le Roux
> Cc: Ashutosh Bapat; pos...@li...
> Subject: Re: [Postgres-xc-general] Table distribution/replication and
> other things
>
> On Wed, Jan 30, 2013 at 5:46 PM, Christophe Le Roux
> <chr...@fr...> wrote:
>> Thank you for your answers.
>>
>> About the third question, I don't want to use standby servers for now
>> (except for the GTM standby).
>>
>> The cluster would be 4 servers:
>>
>> A - GTM server
>> B - GTM proxy, coordinator and datanode server
>> C - GTM proxy, coordinator and datanode server
>> D - GTM standby server
>>
>> All tables of all databases are "DISTRIBUTE BY REPLICATION" (a
>> Postgres-XC capability). So we have the same data at the same time on
>> datanodes B and C.
>>
>> So if server B fails, server C will do the job.
>> If server C fails, server B will do the job.
>> B and C talk to server A (for GXIDs, sequences, ...) but if server A
>> fails, we can promote server D to be the master GTM.
>>
>> Now, suppose that servers A and B are located in DC1, and servers C and
>> D are located in DC2.
>> B and C will do the jobs and talk to server A (for GXIDs, sequences, ...).
>
> If I read correctly, B and C will be in different DCs, which typically
> means that they will be connected over a slow WAN link. Since, for
> replication, the coordinator needs to send a lot of information to/for
> the datanode, I am not sure if this would work well. But then it depends
> on how much workload you are really putting on the server. It will also
> depend on what kind of queries you would be running. I don't think anyone
> has tried (yet) to keep the datanodes in different DCs, but give it a try.
>
> Thanks,
> Pavan
> --
> Pavan Deolasee
> http://www.linkedin.com/in/pavandeolasee
From: Christophe Le R. <chr...@fr...> - 2013-01-30 13:14:01
Hi Pavan,

The network between the 2 DCs is 1Gb/s with low latency (like a LAN), so this is not an issue.

Ashutosh says that if B or C fails (all tables of all databases are "DISTRIBUTE BY REPLICATION" - B and C should have the same data), Postgres-XC fails (for this release). Is it on the roadmap? Or already on the master branch of github?

Thanks.

Kriss_fr

-----Original Message-----
From: Pavan Deolasee [mailto:pav...@gm...]
Sent: Wednesday, January 30, 2013 13:56
To: Christophe Le Roux
Cc: Ashutosh Bapat; pos...@li...
Subject: Re: [Postgres-xc-general] Table distribution/replication and other things

On Wed, Jan 30, 2013 at 5:46 PM, Christophe Le Roux <chr...@fr...> wrote:
> Thank you for your answers.
>
> About the third question, I don't want to use standby servers for now
> (except for the GTM standby).
>
> The cluster would be 4 servers:
>
> A - GTM server
> B - GTM proxy, coordinator and datanode server
> C - GTM proxy, coordinator and datanode server
> D - GTM standby server
>
> All tables of all databases are "DISTRIBUTE BY REPLICATION" (a
> Postgres-XC capability). So we have the same data at the same time on
> datanodes B and C.
>
> So if server B fails, server C will do the job.
> If server C fails, server B will do the job.
> B and C talk to server A (for GXIDs, sequences, ...) but if server A
> fails, we can promote server D to be the master GTM.
>
> Now, suppose that servers A and B are located in DC1, and servers C and D
> are located in DC2.
> B and C will do the jobs and talk to server A (for GXIDs, sequences, ...).

If I read correctly, B and C will be in different DCs, which typically means that they will be connected over a slow WAN link. Since, for replication, the coordinator needs to send a lot of information to/for the datanode, I am not sure if this would work well. But then it depends on how much workload you are really putting on the server. It will also depend on what kind of queries you would be running. I don't think anyone has tried (yet) to keep the datanodes in different DCs, but give it a try.

Thanks,
Pavan
--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee
From: Filip R. <fil...@gm...> - 2013-01-30 13:07:27
On Mon, Jan 28, 2013 at 7:23 PM, Koichi Suzuki <koi...@gm...> wrote:

> I have not looked into it in much detail, so this is just a guess.
>
> XC's datanode cluster contains three more system catalogues,
> pgxc_class, pgxc_node and pgxc_nodegroup, and also contains many more
> built-in functions which have to be hard-coded.

Only adding new objects is fine. I think it does not prevent this kind of migration.

I can think of creating the new objects in pg_catalog just before switching to Postgres-XC.

Does postgres-xc modify any existing postgres catalog tables? (adding columns, changing types, any structure changes)?

Does postgres-xc modify any existing postgres catalog functions? (function signatures and/or internal logic)

Do you think it's possible to map/translate existing pg_class to pgxc_class, existing TXIDs to GXIDs, etc.?

I understand that pointing "empty" GTMs and Coordinators to this new special datanode would simply not work? We'd have to initialize them in a special way?

> Even though the other parts of the database cluster have no serious
> difference, the above could be very serious.

Answers to the above questions would help me understand _why_ it is serious.

> It looks a good idea to allow XC to start with only a one-node
> configuration, and then add nodes as needed.

Yes, that's my idea also - I see there are ALTER TABLE commands which allow one to replicate/redistribute.

> Other than that, you can use pg_dump and pg_restore to move all your
> data from PostgreSQL to Postgres-XC.

I know it's the standard procedure - but in my case this would take tens of hours, and I'm looking for other ways.

Thank you,
Filip
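On the ALTER TABLE point, a hedged sketch of the redistribution commands being referred to. This syntax is from the XC development tree of roughly that time and may not match a given release, so treat the exact spellings (including ADD NODE and its parentheses) as assumptions to verify:

-- Start on one node with a replicated table, then change its
-- distribution as the cluster grows (names are illustrative).
CREATE TABLE t (id int, payload text) DISTRIBUTE BY REPLICATION;

ALTER TABLE t DISTRIBUTE BY HASH(id);   -- switch to hash distribution

ALTER TABLE t ADD NODE (dn3);           -- extend onto a new datanode

Note that redistribution rewrites the table's data across nodes, so on a large table it can itself take hours; worth measuring before choosing it over pg_dump/pg_restore.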
From: Pavan D. <pav...@gm...> - 2013-01-30 12:56:48
On Wed, Jan 30, 2013 at 5:46 PM, Christophe Le Roux <chr...@fr...> wrote:
> Thank you for your answers.
>
> About the third question, I don't want to use standby servers for now
> (except for the GTM standby).
>
> The cluster would be 4 servers:
>
> A - GTM server
> B - GTM proxy, coordinator and datanode server
> C - GTM proxy, coordinator and datanode server
> D - GTM standby server
>
> All tables of all databases are "DISTRIBUTE BY REPLICATION" (a
> Postgres-XC capability). So we have the same data at the same time on
> datanodes B and C.
>
> So if server B fails, server C will do the job.
> If server C fails, server B will do the job.
> B and C talk to server A (for GXIDs, sequences, ...) but if server A
> fails, we can promote server D to be the master GTM.
>
> Now, suppose that servers A and B are located in DC1, and servers C and D
> are located in DC2.
> B and C will do the jobs and talk to server A (for GXIDs, sequences, ...).

If I read correctly, B and C will be in different DCs, which typically means that they will be connected over a slow WAN link. Since, for replication, the coordinator needs to send a lot of information to/for the datanode, I am not sure if this would work well. But then it depends on how much workload you are really putting on the server. It will also depend on what kind of queries you would be running. I don't think anyone has tried (yet) to keep the datanodes in different DCs, but give it a try.

Thanks,
Pavan
--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee
From: Ashutosh B. <ash...@en...> - 2013-01-30 12:40:02
On Wed, Jan 30, 2013 at 5:46 PM, Christophe Le Roux <chr...@fr...> wrote:

> Thank you for your answers.
>
> About the third question, I don't want to use standby servers for now
> (except for the GTM standby).
>
> The cluster would be 4 servers:
>
> A - GTM server
> B - GTM proxy, coordinator and datanode server
> C - GTM proxy, coordinator and datanode server
> D - GTM standby server
>
> All tables of all databases are "DISTRIBUTE BY REPLICATION" (a
> Postgres-XC capability). So we have the same data at the same time on
> datanodes B and C.
>
> So if server B fails, server C will do the job.
> If server C fails, server B will do the job.
> B and C talk to server A (for GXIDs, sequences, ...) but if server A
> fails, we can promote server D to be the master GTM.

Thanks for the elaborate example; now it's clear. Here's a suggestion: you don't need separate servers for the GTM or GTM standby. They can share servers with a coordinator or datanode, but having them on separate servers would boost performance.

If one of B or C fails, Postgres-XC fails (for this release). But you can use the coordinators on B and C simultaneously.

> Now, suppose that servers A and B are located in DC1, and servers C and D
> are located in DC2.

Postgres-XC is not tolerant to network failures (right now). You will need to test your deployment for this scenario.

> B and C will do the jobs and talk to server A (for GXIDs, sequences, ...).
> If DC2 fails, servers B (and A) will do the job.
> If DC1 fails, we can promote server D to be the master GTM and server C
> will do the job.
>
> Is it all right?
>
> Kriss_fr
>
> From: Ashutosh Bapat [mailto:ash...@en...]
> Sent: Wednesday, January 30, 2013 12:45
> To: Christophe Le Roux
> Cc: pos...@li...
> Subject: Re: [Postgres-xc-general] Table distribution/replication and
> other things
>
> Hi Christophe,
> Thanks for showing interest in Postgres-XC. Please find the answers
> below, inlined.
>
> On Wed, Jan 30, 2013 at 4:31 PM, Christophe Le Roux
> <chr...@fr...> wrote:
>
> Hi the team,
>
> First of all, thank you for your work on PostgreSQL XC. It's a product
> with very cool features, but I need more information.
>
> My first question is about table distribution/replication:
> Can we distribute and replicate a table? (A table distributed on 3
> datanodes (for performance) and replicated on 3 other datanodes (for
> availability without downtime) - I'm not talking about the hot standby
> or streaming replication included in standard PostgreSQL.)
> Is it on the roadmap to add this capability?
>
> No, we can't have mixed distribution. You can use standbys for datanodes
> and coordinators using the streaming replication and hot standby
> capabilities of PostgreSQL. We do not have mixed distribution on our
> roadmap right now.
>
> My second question is about the GTM:
> For how long (30 seconds, 5 min, 15 min, 1 h, ...) can the GTM server be
> down without impacting the service (and without promoting a GTM standby
> to master)?
>
> The GTM provides global transaction ids to datanodes and coordinators,
> so the allowed downtime depends upon the transaction load, i.e. the rate
> of incoming transactions at the server. But we don't have precise
> numbers here.
>
> My third question is about the global architecture:
> We have 2 datacenters with an extra-low-latency link (like a LAN).
> I want to put in the first DC one node with the GTM and another with a
> GTM proxy, coordinator and datanode, and in the second DC one node with
> the GTM standby and another node with a GTM proxy, coordinator and
> datanode.
> All tables of all databases will be replicated (DISTRIBUTE BY
> REPLICATION).
> Web servers are in the first and second DC.
>
> The goal is:
> service is delivered by the first and second DC at the same time;
> if one DC fails, the service continues on the other.
>
> If I understand your deployment correctly, you are actually replicating
> the Postgres-XC setup at two DCs using standby servers. That may not
> work. We haven't tested this configuration.
>
> What do you think about this?
> Can it be possible?
>
> Thank you
>
> Kriss_fr
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Christophe Le R. <chr...@fr...> - 2013-01-30 12:16:33
Thank you for your answers.

About the third question, I don't want to use standby servers for now (except for the GTM standby).

The cluster would be 4 servers:

A - GTM server
B - GTM proxy, coordinator and datanode server
C - GTM proxy, coordinator and datanode server
D - GTM standby server

All tables of all databases are "DISTRIBUTE BY REPLICATION" (a Postgres-XC capability). So we have the same data at the same time on datanodes B and C.

So if server B fails, server C will do the job.
If server C fails, server B will do the job.
B and C talk to server A (for GXIDs, sequences, ...) but if server A fails, we can promote server D to be the master GTM.

Now, suppose that servers A and B are located in DC1, and servers C and D are located in DC2.

B and C will do the jobs and talk to server A (for GXIDs, sequences, ...).
If DC2 fails, servers B (and A) will do the job.
If DC1 fails, we can promote server D to be the master GTM and server C will do the job.

Is it all right?

Kriss_fr

From: Ashutosh Bapat [mailto:ash...@en...]
Sent: Wednesday, January 30, 2013 12:45
To: Christophe Le Roux
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Table distribution/replication and other things

Hi Christophe,
Thanks for showing interest in Postgres-XC. Please find the answers below, inlined.

On Wed, Jan 30, 2013 at 4:31 PM, Christophe Le Roux <chr...@fr...> wrote:

Hi the team,

First of all, thank you for your work on PostgreSQL XC. It's a product with very cool features, but I need more information.

My first question is about table distribution/replication:
Can we distribute and replicate a table? (A table distributed on 3 datanodes (for performance) and replicated on 3 other datanodes (for availability without downtime) - I'm not talking about the hot standby or streaming replication included in standard PostgreSQL.)
Is it on the roadmap to add this capability?

No, we can't have mixed distribution. You can use standbys for datanodes and coordinators using the streaming replication and hot standby capabilities of PostgreSQL. We do not have mixed distribution on our roadmap right now.

My second question is about the GTM:
For how long (30 seconds, 5 min, 15 min, 1 h, ...) can the GTM server be down without impacting the service (and without promoting a GTM standby to master)?

The GTM provides global transaction ids to datanodes and coordinators, so the allowed downtime depends upon the transaction load, i.e. the rate of incoming transactions at the server. But we don't have precise numbers here.

My third question is about the global architecture:
We have 2 datacenters with an extra-low-latency link (like a LAN).
I want to put in the first DC one node with the GTM and another with a GTM proxy, coordinator and datanode, and in the second DC one node with the GTM standby and another node with a GTM proxy, coordinator and datanode.
All tables of all databases will be replicated (DISTRIBUTE BY REPLICATION).
Web servers are in the first and second DC.

The goal is:
service is delivered by the first and second DC at the same time;
if one DC fails, the service continues on the other.

If I understand your deployment correctly, you are actually replicating the Postgres-XC setup at two DCs using standby servers. That may not work. We haven't tested this configuration.

What do you think about this?
Can it be possible?

Thank you

Kriss_fr

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
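For concreteness, a sketch of how the "all tables DISTRIBUTE BY REPLICATION on B and C" part of this setup would be declared from a coordinator. The node names, hosts, and ports are made up, and the TO NODE clause spelling should be checked against the XC version in use:

-- Register the two datanodes (servers B and C) on each coordinator:
CREATE NODE dn_b WITH (TYPE = 'datanode', HOST = 'serverB', PORT = 15432);
CREATE NODE dn_c WITH (TYPE = 'datanode', HOST = 'serverC', PORT = 15432);

-- Each table keeps a full copy on both datanodes, so either server
-- alone still holds all the data:
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric)
    DISTRIBUTE BY REPLICATION TO NODE (dn_b, dn_c);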
From: Ashutosh B. <ash...@en...> - 2013-01-30 11:45:02
Hi Christophe,
Thanks for showing interest in Postgres-XC. Please find the answers below, inlined.

On Wed, Jan 30, 2013 at 4:31 PM, Christophe Le Roux <chr...@fr...> wrote:

> Hi the team,
>
> First of all, thank you for your work on PostgreSQL XC. It's a product
> with very cool features, but I need more information.
>
> My first question is about table distribution/replication:
> Can we distribute and replicate a table? (A table distributed on 3
> datanodes (for performance) and replicated on 3 other datanodes (for
> availability without downtime) - I'm not talking about the hot standby
> or streaming replication included in standard PostgreSQL.)
> Is it on the roadmap to add this capability?

No, we can't have mixed distribution. You can use standbys for datanodes and coordinators using the streaming replication and hot standby capabilities of PostgreSQL. We do not have mixed distribution on our roadmap right now.

> My second question is about the GTM:
> For how long (30 seconds, 5 min, 15 min, 1 h, ...) can the GTM server be
> down without impacting the service (and without promoting a GTM standby
> to master)?

The GTM provides global transaction ids to datanodes and coordinators, so the allowed downtime depends upon the transaction load, i.e. the rate of incoming transactions at the server. But we don't have precise numbers here.

> My third question is about the global architecture:
> We have 2 datacenters with an extra-low-latency link (like a LAN).
> I want to put in the first DC one node with the GTM and another with a
> GTM proxy, coordinator and datanode, and in the second DC one node with
> the GTM standby and another node with a GTM proxy, coordinator and
> datanode.
> All tables of all databases will be replicated (DISTRIBUTE BY
> REPLICATION).
> Web servers are in the first and second DC.
>
> The goal is:
> service is delivered by the first and second DC at the same time;
> if one DC fails, the service continues on the other.

If I understand your deployment correctly, you are actually replicating the Postgres-XC setup at two DCs using standby servers. That may not work. We haven't tested this configuration.

> What do you think about this?
> Can it be possible?
>
> Thank you
>
> Kriss_fr

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
From: Christophe Le R. <chr...@fr...> - 2013-01-30 11:19:00
Hi the team,

First of all, thank you for your work on PostgreSQL XC. It's a product with very cool features, but I need more information.

My first question is about table distribution/replication:
Can we distribute and replicate a table? (A table distributed on 3 datanodes (for performance) and replicated on 3 other datanodes (for availability without downtime) - I'm not talking about the hot standby or streaming replication included in standard PostgreSQL.)
Is it on the roadmap to add this capability?

My second question is about the GTM:
For how long (30 seconds, 5 min, 15 min, 1 h, ...) can the GTM server be down without impacting the service (and without promoting a GTM standby to master)?

My third question is about the global architecture:
We have 2 datacenters with an extra-low-latency link (like a LAN).
I want to put in the first DC one node with the GTM and another with a GTM proxy, coordinator and datanode, and in the second DC one node with the GTM standby and another node with a GTM proxy, coordinator and datanode.
All tables of all databases will be replicated (DISTRIBUTE BY REPLICATION).
Web servers are in the first and second DC.

The goal is:
service is delivered by the first and second DC at the same time;
if one DC fails, the service continues on the other.

What do you think about this?
Can it be possible?

Thank you

Kriss_fr