From: Nikhil S. <ni...@st...> - 2013-01-31 12:09:23
Talk about barking up the wrong tree ;) I was working with the PGXC 1.0 tree, so the patch sent out earlier applies against it. PFA, another patch which applies against MASTER.

Regards,
Nikhils

On Thu, Jan 31, 2013 at 5:16 PM, Nikhil Sontakke <ni...@st...> wrote:
> Additionally I also fixed a bug in invoking gtm_standby via gtm_ctl.
> It would blindly append gtm_standby as the name for the binary to
> invoke.
>
> Regards,
> Nikhils
>
> On Thu, Jan 31, 2013 at 5:11 PM, Nikhil Sontakke <ni...@st...> wrote:
>> Hi,
>>
>> PFA, patch which fixes this issue in the master git repo.
>>
>> The behavior is now similar to pg_ctl. Additionally the version
>> checking was missing in gtm_ctl, gtm and gtm_proxy. I added the same
>> as well. The binaries will now also be checked for version
>> compatibility, which is always a good thing.
>>
>> Regards,
>> Nikhils
>>
>> On Thu, Jan 31, 2013 at 12:40 PM, Nikhil Sontakke <ni...@st...> wrote:
>>>>
>>>> +1. It will be better if we can make it consistent with pg_ctl. We should be
>>>> able to search the gtm/proxy binary in the same place where gtm_ctl resides, no?
>>>>
>>>
>>> Yes. The pg_ctl code will provide enough inspiration for a generic
>>> binary search based on relative path as well.
>>>
>>> Regards,
>>> Nikhils
>>> --
>>> StormDB - http://www.stormdb.com
>>> The Database Cloud
>>> Postgres-XC Support and Service

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Nikhil S. <ni...@st...> - 2013-01-31 11:46:29
Additionally I also fixed a bug in invoking gtm_standby via gtm_ctl. It would blindly append gtm_standby as the name for the binary to invoke.

Regards,
Nikhils

On Thu, Jan 31, 2013 at 5:11 PM, Nikhil Sontakke <ni...@st...> wrote:
> Hi,
>
> PFA, patch which fixes this issue in the master git repo.
>
> The behavior is now similar to pg_ctl. Additionally the version
> checking was missing in gtm_ctl, gtm and gtm_proxy. I added the same
> as well. The binaries will now also be checked for version
> compatibility, which is always a good thing.
>
> Regards,
> Nikhils
>
> On Thu, Jan 31, 2013 at 12:40 PM, Nikhil Sontakke <ni...@st...> wrote:
>>>
>>> +1. It will be better if we can make it consistent with pg_ctl. We should be
>>> able to search the gtm/proxy binary in the same place where gtm_ctl resides, no?
>>>
>>
>> Yes. The pg_ctl code will provide enough inspiration for a generic
>> binary search based on relative path as well.
>>
>> Regards,
>> Nikhils
>> --
>> StormDB - http://www.stormdb.com
>> The Database Cloud
>> Postgres-XC Support and Service

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Nikhil S. <ni...@st...> - 2013-01-31 11:42:06
Hi,

PFA, patch which fixes this issue in the master git repo.

The behavior is now similar to pg_ctl. Additionally, the version checking was missing in gtm_ctl, gtm and gtm_proxy; I added that as well. The binaries will now also be checked for version compatibility, which is always a good thing.

Regards,
Nikhils

On Thu, Jan 31, 2013 at 12:40 PM, Nikhil Sontakke <ni...@st...> wrote:
>>
>> +1. It will be better if we can make it consistent with pg_ctl. We should be
>> able to search the gtm/proxy binary in the same place where gtm_ctl resides, no?
>>
>
> Yes. The pg_ctl code will provide enough inspiration for a generic
> binary search based on relative path as well.
>
> Regards,
> Nikhils
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Christophe Le R. <chr...@fr...> - 2013-01-31 09:44:41
Ok, thank you for your answers.

Kriss_fr

From: Michael Paquier [mailto:mic...@gm...]
Sent: Thursday, January 31, 2013 10:33
To: Christophe Le Roux
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Table distribution/replication and other things

On Thu, Jan 31, 2013 at 6:10 PM, Christophe Le Roux <chr...@fr...> wrote:

    Nice trick! But too complex for my brain :) (I don't understand why tab2 is distributed by replication.)

    What do you think about this (possible new feature?):

    - Possibility to create groups like:
      CREATE GROUPSERVER group1 ('serverA','serverB');
      CREATE GROUPSERVER group2 ('serverC','serverD');

This already exists and is called GROUP NODE, have a look here:
http://postgres-xc.github.com/1_0/sql-createnodegroup.html

    - Possibility to have a table distributed and replicated like:
      CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY HASH(col1) ON GROUPSERVER 'group1' AND REPLICATE ON GROUPSERVER 'group2';

    If one of these groupservers does not have the same number of servers, we get a warning message like: "WARNING: GROUPSERVER groupX does not have the same number of servers as GROUPSERVER groupY, so we can't distribute by hash, modulo or RR, but the replication on GROUPSERVER groupY is done."

    This is like RAID 10 for HDDs: the table is split between serverA and serverB and mirrored on serverC and serverD. If serverA or serverB (or both) fail, the table is still available with serverC and serverD.

    Is it possible?

Not now. At least the trick I gave before works the same with vanilla Postgres...
This feature is a little bit too premature with the existing XC infrastructure. Such complex things would be possible by first reworking the partitioning of XC to make it more integrated with Postgres, I think.
--
Michael Paquier
http://michael.otacoo.com
From: Pavan D. <pav...@gm...> - 2013-01-31 09:42:33
On Thu, Jan 31, 2013 at 3:03 PM, Michael Paquier <mic...@gm...> wrote:
>
> On Thu, Jan 31, 2013 at 6:10 PM, Christophe Le Roux
>>
>> This is like RAID 10 for HDDs: the table is split between serverA and
>> serverB and mirrored on serverC and serverD. If serverA or serverB (or
>> both) fail, the table is still available with serverC and serverD.
>>
>> Is it possible?
>
> Not now. At least the trick I gave before works the same with vanilla
> Postgres...
> This feature is a little bit too premature with the existing XC
> infrastructure. Such complex things would be possible by first reworking
> the partitioning of XC to make it more integrated with Postgres, I think.

Why so? I understand it's a difficult problem, but I don't see how integration with Postgres would make it any easier, compared to any other distribution mechanism we have implemented so far. I know we once talked about supporting hybrid distribution, but I don't quite remember if we ever concluded those discussions. And this has been asked multiple times by now, so we should see if it can be done.

Thanks,
Pavan
--
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee
From: Michael P. <mic...@gm...> - 2013-01-31 09:33:09
On Thu, Jan 31, 2013 at 6:10 PM, Christophe Le Roux <chr...@fr...> wrote:
> Nice trick!
>
> But too complex for my brain :) (I don't understand why tab2 is
> distributed by replication.)
>
> What do you think about this (possible new feature?):
>
> - Possibility to create groups like:
>   CREATE GROUPSERVER group1 ('serverA','serverB');
>   CREATE GROUPSERVER group2 ('serverC','serverD');

This already exists and is called GROUP NODE, have a look here:
http://postgres-xc.github.com/1_0/sql-createnodegroup.html

> - Possibility to have a table distributed and replicated like:
>   CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY HASH(col1) ON
>   GROUPSERVER 'group1' AND REPLICATE ON GROUPSERVER 'group2';
>
>   If one of these groupservers does not have the same number of
>   servers, we get a warning message like: "WARNING: GROUPSERVER groupX
>   does not have the same number of servers as GROUPSERVER groupY, so we
>   can't distribute by hash, modulo or RR, but the replication on
>   GROUPSERVER groupY is done."
>
> This is like RAID 10 for HDDs: the table is split between serverA and
> serverB and mirrored on serverC and serverD. If serverA or serverB (or
> both) fail, the table is still available with serverC and serverD.
>
> Is it possible?

Not now. At least the trick I gave before works the same with vanilla Postgres...
This feature is a little bit too premature with the existing XC infrastructure. Such complex things would be possible by first reworking the partitioning of XC to make it more integrated with Postgres, I think.
--
Michael Paquier
http://michael.otacoo.com
From: Christophe Le R. <chr...@fr...> - 2013-01-31 09:10:12
Nice trick! But too complex for my brain :) (I don't understand why tab2 is distributed by replication.)

What do you think about this (possible new feature?):

- Possibility to create groups like:
  CREATE GROUPSERVER group1 ('serverA','serverB');
  CREATE GROUPSERVER group2 ('serverC','serverD');

- Possibility to have a table distributed and replicated like:
  CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY HASH(col1) ON GROUPSERVER 'group1' AND REPLICATE ON GROUPSERVER 'group2';

  If one of these groupservers does not have the same number of servers, we get a warning message like: "WARNING: GROUPSERVER groupX does not have the same number of servers as GROUPSERVER groupY, so we can't distribute by hash, modulo or RR, but the replication on GROUPSERVER groupY is done."

This is like RAID 10 for HDDs: the table is split between serverA and serverB and mirrored on serverC and serverD. If serverA or serverB (or both) fail, the table is still available with serverC and serverD.

Is it possible?

Kriss_fr

From: Michael Paquier [mailto:mic...@gm...]
Sent: Wednesday, January 30, 2013 19:23
To: Christophe Le Roux
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Table distribution/replication and other things

On Thu, Jan 31, 2013 at 3:12 AM, Michael Paquier <mic...@gm...> wrote:
On Wed, Jan 30, 2013 at 8:01 PM, Christophe Le Roux <chr...@fr...> wrote:

    Can we distribute and replicate a table? (A table distributed on 3 datanodes (for performance) and replicated on 3 other datanodes (for availability without downtime) - I'm not talking about hot standby or streaming replication included in standard PostgreSQL.)

This is not supported directly, but you could try something like this; perhaps it will work, it's just an idea. It uses the internal Postgres partitioning mechanism as a base:

CREATE TABLE tab (col1 int, col2 int) DISTRIBUTE BY REPLICATION;
CREATE TABLE tab1 (CHECK (col1 >= 100 AND col1 < 200)) INHERITS (tab) DISTRIBUTE BY HASH(col1);
CREATE TABLE tab2 (CHECK (col1 >= 200 AND col1 < 300)) INHERITS (tab) DISTRIBUTE BY REPLICATION;

Putting an index on column col1 would be good also. Here the secret is to have the parent table replicated.

I forgot something here: you will also need to create a rule or a trigger to redirect INSERT/UPDATE/DELETE to the correct subtable. Have a look at the PostgreSQL documentation here: http://www.postgresql.org/docs/9.2/static/ddl-partitioning.html.
--
Michael Paquier
http://michael.otacoo.com
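[Editor's note: a sketch of the redirect trigger mentioned above, following the trigger-based partition-routing pattern from the PostgreSQL 9.2 partitioning documentation. The function and trigger names are illustrative, not part of the original thread; the table names and CHECK ranges match Michael's example.]

```sql
-- Hypothetical INSERT-routing trigger for the tab/tab1/tab2 example.
-- Rows whose col1 falls in a child's CHECK range go to that child;
-- anything else is rejected.
CREATE OR REPLACE FUNCTION tab_insert_redirect() RETURNS trigger AS $$
BEGIN
    IF NEW.col1 >= 100 AND NEW.col1 < 200 THEN
        INSERT INTO tab1 VALUES (NEW.*);
    ELSIF NEW.col1 >= 200 AND NEW.col1 < 300 THEN
        INSERT INTO tab2 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'col1 out of range: %', NEW.col1;
    END IF;
    RETURN NULL;  -- row was routed to a child; skip insert into parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tab_insert_trigger
    BEFORE INSERT ON tab
    FOR EACH ROW EXECUTE PROCEDURE tab_insert_redirect();
```

UPDATE and DELETE need similar handling (or constraint exclusion plus per-child rules), as the linked documentation describes.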
From: Koichi S. <koi...@gm...> - 2013-01-31 07:26:37
2013/1/30 Filip Rembiałkowski <fil...@gm...>:
>
> On Mon, Jan 28, 2013 at 7:23 PM, Koichi Suzuki <koi...@gm...> wrote:
>>
>> I have not looked into it in much detail. So this is just a guess.
>>
>> XC's datanode cluster contains three more system catalogues,
>> pgxc_class, pgxc_node and pgxc_nodegroup, and also contains many more
>> built-in functions which have to be hard-coded.
>
> Only adding new objects is fine. I think it does not prevent this kind
> of migration.
>
> I can think of creating new objects in pg_catalog just before switching
> to Postgres-XC.
>
> Does postgres-xc modify any existing postgres catalog tables? (adding
> columns, changing types, any structure changes)?

Yes, pg_aggregate has an additional aggcollectfn column.

> Does postgres-xc modify any existing postgres catalog functions?
> (function signatures and/or internal logic)

Yes, many. grep PGXC src/include/catalog/*.h will show you the additions. An important function is pgxc_is_committed, which is used by pgxc_clean to clean up outstanding 2PC after a node crash.

> Do you think it's possible to map/translate existing pg_class to
> pgxc_class, existing TXIDs to GXIDs, etc?

We can translate TXIDs to GXIDs. GTM should start with the corresponding GXID value.

> I understand that pointing "empty" GTMs and Coordinators to this new
> special datanode would simply not work?
> We'd have to initialize them in a special way?

I have not done such an experiment yet, but I don't think it works, because pg_aggregate now has the additional aggcollectfn column, and the datanode (old PG database) aggregates should return intermediate results according to this new catalog. Fortunately, no OIDs are used in coordinator/datanode communication; it is libpq with GXID and global snapshot supply.

Regards;
---
Koichi

>> Even though other parts of the database cluster have no serious
>> difference, the above could be very serious.
>
> Answers to the above questions would help me understand _why_ it is
> serious.

>> It looks a good idea to allow XC to start with only one node
>> configuration, and then add nodes as needed.
>
> Yes, that's my idea also - I see there are ALTER TABLE commands which
> allow to replicate/redistribute.

>> Other than that, you can use pg_dump and pg_restore to move all your
>> data from PostgreSQL to Postgres-XC.
>
> I know it's a standard procedure - but in my case this would take tens
> of hours, and I'm looking for other ways.
>
> Thank you,
> Filip
From: Nikhil S. <ni...@st...> - 2013-01-31 07:10:31
> +1. It will be better if we can make it consistent with pg_ctl. We
> should be able to search the gtm/proxy binary in the same place where
> gtm_ctl resides, no?

Yes. The pg_ctl code will provide enough inspiration for a generic binary search based on relative path as well.

Regards,
Nikhils
--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Pavan D. <pav...@gm...> - 2013-01-31 06:27:08
On 31-Jan-2013, at 11:43 AM, Filip Rembiałkowski <fil...@gm...> wrote:
> On Thu, Jan 31, 2013 at 12:11 AM, Nikhil Sontakke <ni...@st...> wrote:
>> Forgot to add. Yes, I also find it irritating to explicitly specify
>> "-p". Will submit a patch for this.
>
> Thank you.
> It would be consistent to follow the pg_ctl approach.

+1. It will be better if we can make it consistent with pg_ctl. We should be able to search the gtm/proxy binary in the same place where gtm_ctl resides, no?

Thanks,
Pavan
From: Filip R. <fil...@gm...> - 2013-01-31 06:13:43
On Thu, Jan 31, 2013 at 12:11 AM, Nikhil Sontakke <ni...@st...> wrote:
> Forgot to add. Yes, I also find it irritating to explicitly specify
> "-p". Will submit a patch for this.

Thank you. It would be consistent to follow the pg_ctl approach.

regards, Filip
From: Michael P. <mic...@gm...> - 2013-01-31 06:13:34
On Thu, Jan 31, 2013 at 3:11 PM, Nikhil Sontakke <ni...@st...> wrote:
> Forgot to add. Yes, I also find it irritating to explicitly specify
> "-p". Will submit a patch for this.

+1. Simplicity is always a good solution.
--
Michael Paquier
http://michael.otacoo.com
From: Nikhil S. <ni...@st...> - 2013-01-31 06:12:11
Forgot to add. Yes, I also find it irritating to explicitly specify "-p". Will submit a patch for this.

Regards,
Nikhils

On Thu, Jan 31, 2013 at 11:29 AM, Nikhil Sontakke <ni...@st...> wrote:
> Hi Filip,
>
>> filip@srv:~$ pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 start
>> server starting
>> filip@srv:~$ sh: 1: gtm: not found
>
> You can either include the directory containing gtm in your search
> path or provide it explicitly to gtm_ctl by using -p:
>
> pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 -p `pwd`/pgxc/bin start
>
> HTH,
> Nikhils
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Service

--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
From: Nikhil S. <ni...@st...> - 2013-01-31 05:59:50
Hi Filip,

> filip@srv:~$ pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 start
> server starting
> filip@srv:~$ sh: 1: gtm: not found

You can either include the directory containing gtm in your search path or provide it explicitly to gtm_ctl by using -p:

pgxc/bin/gtm_ctl -Z gtm -D pgxcdata/gtm0 -p `pwd`/pgxc/bin start

HTH,
Nikhils
--
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Service
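[Editor's note: the "search next to the invoking binary, then fall back to PATH" behavior discussed in this thread (pg_ctl's find_other_exec idea) can be sketched in plain shell. This is an illustrative sketch, not the actual gtm_ctl implementation; the function name is hypothetical.]

```shell
#!/bin/sh
# find_sibling_binary PATH_USED_TO_INVOKE BINARY_NAME
# Looks for BINARY_NAME in the same directory as the invoking program
# (the way pg_ctl locates postgres); falls back to a PATH lookup.
find_sibling_binary() {
    invoked="$1"
    name="$2"
    # Resolve the directory the invoking program lives in.
    self_dir=$(CDPATH= cd -- "$(dirname -- "$invoked")" && pwd)
    candidate="$self_dir/$name"
    if [ -x "$candidate" ]; then
        printf '%s\n' "$candidate"
    else
        # Not found next to us; fall back to the caller's PATH.
        command -v "$name" || return 1
    fi
}

# Example: resolve "sh" relative to /bin/sh's own directory.
find_sibling_binary /bin/sh sh
```

With this scheme, `gtm_ctl` invoked as `pgxc/bin/gtm_ctl` would find `pgxc/bin/gtm` without needing `-p` or a PATH entry.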