From: Lionel F. <lio...@gm...> - 2011-03-08 13:09:23

Thanks for your valuable answers, my test cluster is now working (worth
mentioning I was additionally misled by an unexpected /etc/hosts.allow
setup...). Other notes below.

Regards

2011/3/6 Michael Paquier <mic...@gm...>

> Hi Lionel,
>
> Just to complete a little bit the answers of my colleague...
>
> On Sat, Mar 5, 2011 at 3:36 PM, Abbas Butt <abb...@te...> wrote:
>
>>> My actual setup is :
>>> pgxc1 for GTM, Coordinator and datanode
>>> pgxc2 for datanode only
>>
>> You mean you will have 2 computers, one will run GTM, Coordinator and
>> the 1st datanode and the other the 2nd datanode. If yes then this would
>> be fine.
>>
>> BTW what Linux distribution will you be using?

RedHat Enterprise 5.4 for now.

>>> 1. General : Is a coordinator needed for each node, or can one
>>> coordinator 'to rule them all' be set up ?
>>
> You can use one Coordinator with 100 Datanodes if you desire.
> It may be better if the Coordinator/Datanode ratio is close to 1, but we
> also found that if you set one Coordinator and one Datanode on the same
> machine, the Coordinator was using 30% of resources and the Datanode 70%.
> With such numbers, a ratio of 0.5 is also possible.

Yes, given your numbers I'm aiming to test 1 Coordinator for 3 Datanodes,
summing up to ~20 machines (if possible with our infrastructure folks).
The breaking point may be the bandwidth (after the number of available
hosts, of course).

>>> 2. Configuration : What should be the differences between
>>> postgresql.conf in /datanode and /coordinator, if there are any?
>>
> [...]
> You also have pooler connection parameters to set, but I forgot all the
> names.

No issue, I'll try by myself. Tweaking those means I'm ready to run
benchmarks, which is not the current phase.

> For the datanode, you have to take care of the GTM connection parameters
> and pgxc_node_id (used to register on GTM).
> In your cluster you can have a Coordinator 1 and a Datanode 1, as the
> difference between node types is made when registering nodes on GTM.
>
>>> Some portions, are they ignored for a specific function (ex :
>>> coordinator vs datanode config, etc...)?
>>> In this case can pg_hba.conf & postgresql.conf be shared for the same
>>> server (maybe using symbolic links...)?
>>
> No, you have to set up postgresql.conf and pg_hba.conf for each node
> separately, as each Coordinator and Datanode uses a different data
> folder.

I'm keeping the same pg_hba.conf setup on all dirs/hosts for the moment;
as for the others, I understood what you meant once the cluster was
finally running...

>>> 3. GTM : I get on the second node a "WARNING: Do not have a GTM
>>> snapshot available", can this be related to the previous config
>>> files/setup ?
>>
> This error means that you didn't set up the GTM connection parameters
> correctly.
>
>>> 4. Administration : are there ways of getting the online status of the
>>> nodes from one node to another?
>>
>> This is currently under development. I can provide you details later.
>>
> Those experimental functionalities are located on a separate branch
> called ha_support in the GIT repo.

That would be of great interest in terms of supportability from a
company-wide perspective. I'll give them a try if they're close to alpha
status :)

> We are also thinking about adding some catalog extensions to allow the
> coordinator to keep an eye on Datanodes, as such a view process is
> linked to the connection pooling process.

Great idea.

> This is just a thought though.
> Now, as we are focusing on code stability for the core, this is not a
> high priority.
> But as we merged with PostgreSQL 9.0, it may be possible in the near
> future to use XC with Hot Standby nodes. Current streaming replication
> is not synchronized, so its usage is limited in current XC.
>
> If you have any other questions, don't hesitate.

I never do :)

Regards

Lionel F.
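Since the same pg_hba.conf is being reused on every node here, the shared
file can be as simple as a couple of client-auth rules. A minimal sketch
(the subnet and auth methods are illustrative, not taken from the thread;
remember that TCP wrappers in /etc/hosts.allow can still block connections
that pg_hba.conf would permit):

```text
# pg_hba.conf -- identical copy in each Coordinator/Datanode data folder
# TYPE  DATABASE  USER  ADDRESS           METHOD
host    all       all   127.0.0.1/32      trust
host    all       all   192.168.10.0/24   md5   # cluster subnet, adjust
```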
From: Michael P. <mic...@gm...> - 2011-03-06 09:58:01

Hi Lionel,

Just to complete a little bit the answers of my colleague...

On Sat, Mar 5, 2011 at 3:36 PM, Abbas Butt <abb...@te...> wrote:

>> My actual setup is :
>> pgxc1 for GTM, Coordinator and datanode
>> pgxc2 for datanode only
>
> You mean you will have 2 computers, one will run GTM, Coordinator and
> the 1st datanode and the other the 2nd datanode. If yes then this would
> be fine.
>
> BTW what Linux distribution will you be using?
>
> Please remember that for the first time gtm has to be started with the
> -x option, and it is better to start gtm before you initdb for the
> coordinator and the 2 data nodes. For successive runs you should skip
> the -x option. As a general rule you should always start gtm first, then
> the data nodes, and then the coordinator.

Basically, you can initialize your data directory whenever you want. The
only point you have to take care of is to start GTM before the other
nodes, because when a node starts it tries to register on GTM. After that,
it doesn't matter whether you start Coordinators or Datanodes first.

For GTM startup, you have to set -x the first time, which sets the first
GXID value GTM will feed to Postgres-XC nodes. If, for instance, you
stopped your cluster, you can restart GTM from the same data folder as
before without having to specify the first GXID value, as this value is
taken automatically from the GTM data folder if the file where the last
GXID was written exists.

>> 1. General : Is a coordinator needed for each node, or can one
>> coordinator 'to rule them all' be set up ?

You can use one Coordinator with 100 Datanodes if you desire. It may be
better if the Coordinator/Datanode ratio is close to 1, but we also found
that if you set one Coordinator and one Datanode on the same machine, the
Coordinator was using 30% of resources and the Datanode 70%. With such
numbers, a ratio of 0.5 is also possible.

>> 2. Configuration : What should be the differences between
>> postgresql.conf in /datanode and /coordinator, if there are any?

The Coordinator uses a connection pooler process, so it needs to know the
following parameters: coordinator_hosts, data_nodes_hosts,
coordinator_ports, datanode_ports, pooler_port, data_node_num, coord_num,
pgxc_node_id, gtm_port, gtm_host, datanode_user, datanode_passwd,
coord_passwd, coord_user. You also have pooler connection parameters to
set, but I forgot all the names.

For the datanode, you have to take care of the GTM connection parameters
and pgxc_node_id (used to register on GTM). In your cluster you can have a
Coordinator 1 and a Datanode 1, as the difference between node types is
made when registering nodes on GTM.

>> Some portions, are they ignored for a specific function (ex :
>> coordinator vs datanode config, etc...)?
>> In this case can pg_hba.conf & postgresql.conf be shared for the same
>> server (maybe using symbolic links...)?

No, you have to set up postgresql.conf and pg_hba.conf for each node
separately, as each Coordinator and Datanode uses a different data folder.

>> 3. GTM : I get on the second node a "WARNING: Do not have a GTM
>> snapshot available", can this be related to the previous config
>> files/setup ?

This error means that you didn't set up the GTM connection parameters
correctly.

>> 4. Administration : are there ways of getting the online status of the
>> nodes from one node to another?
>
> This is currently under development. I can provide you details later.

Those experimental functionalities are located on a separate branch called
ha_support in the GIT repo. We are also thinking about adding some catalog
extensions to allow the coordinator to keep an eye on Datanodes, as such a
view process is linked to the connection pooling process. This is just a
thought though. Now, as we are focusing on code stability for the core,
this is not a high priority.

But as we merged with PostgreSQL 9.0, it may be possible in the near
future to use XC with Hot Standby nodes. Current streaming replication is
not synchronized, so its usage is limited in current XC.

If you have any other questions, don't hesitate.

Regards,
--
Michael Paquier
http://michaelpq.users.sourceforge.net
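The split of parameters Michael describes can be sketched as two
postgresql.conf fragments. The parameter names are the ones listed in the
mail (exact spellings may differ between Postgres-XC releases), and all
values, host names, and ports below are illustrative:

```text
# postgresql.conf on the Coordinator (pgxc1) -- values are examples only
pgxc_node_id     = 1              # unique id, used to register on GTM
pooler_port      = 6667           # connection pooler process
gtm_host         = 'localhost'    # GTM runs on the same machine here
gtm_port         = 6666
data_node_num    = 2
data_nodes_hosts = 'localhost,pgxc2'
datanode_ports   = '15432,15432'
datanode_user    = 'pgxc'         # plus datanode_passwd, coord_user, ...

# postgresql.conf on the 2nd Datanode (pgxc2) -- only GTM connection
# parameters and its own id really matter
pgxc_node_id = 2
gtm_host     = 'pgxc1'            # where GTM actually runs
gtm_port     = 6666
```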
From: Abbas B. <abb...@te...> - 2011-03-05 06:36:53

On Fri, Mar 4, 2011 at 8:06 PM, Lionel Frachon <lio...@gm...> wrote:

> Hello,
>
> I'm starting an evaluation of the cluster (not having any PG background,
> but Oracle & MySQL for years...), and some questions appear to me after
> reading the doc, any help appreciated.

Thank you. I would suggest playing around with plain PG first. That would
help you a lot.

> My actual setup is :
> pgxc1 for GTM, Coordinator and datanode
> pgxc2 for datanode only

You mean you will have 2 computers, one will run GTM, Coordinator and the
1st datanode and the other the 2nd datanode. If yes then this would be
fine.

BTW what Linux distribution will you be using?

Please remember that for the first time gtm has to be started with the -x
option, and it is better to start gtm before you initdb for the
coordinator and the 2 data nodes. For successive runs you should skip the
-x option. As a general rule you should always start gtm first, then the
data nodes, and then the coordinator.

> 1. General : Is a coordinator needed for each node, or can one
> coordinator 'to rule them all' be set up ?

Yes, one coordinator can serve multiple data nodes.

> 2. Configuration : What should be the differences between
> postgresql.conf in /datanode and /coordinator, if there are any ?

Yes, each node in the cluster has to have a unique node ID, which is
picked up from postgresql.conf. The coordinator needs to know the number
of data nodes, and the info required to connect to them, e.g. user name,
password, etc. Other than that I don't remember anything off the top of my
head. Please take a look at the portions of the conf file specific to XC;
that would help.

> Some portions, are they ignored for a specific function (ex :
> coordinator vs datanode config, etc...)?
> In this case can pg_hba.conf & postgresql.conf be shared for the same
> server (maybe using symbolic links...)?

No, as I explained earlier.

> 3. GTM : I get on the second node a "WARNING: Do not have a GTM snapshot
> available", can this be related to the previous config files/setup ?

For the first data node the default configuration of a localhost GTM would
be fine, but for the 2nd data node you need to tell the data node where
GTM is located. For this you need to change the gtm_host parameter in the
postgresql.conf file of the 2nd datanode to the IP address of your first
computer (which runs GTM).

> 4. Administration : are there ways of getting the online status of the
> nodes from one node to another?

This is currently under development. I can provide you details later.

> Thanks
>
> Lionel F.
>
> ------------------------------------------------------------------------------
> What You Don't Know About Data Connectivity CAN Hurt You
> This paper provides an overview of data connectivity, details
> its effect on application quality, and explores various alternative
> solutions. http://p.sf.net/sfu/progress-d2d
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
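The startup rule above (gtm first with -x, then datanodes, then the
coordinator) boils down to a short sequence. A sketch under the assumption
that gtm takes an initial GXID with -x; data-directory paths and the GXID
value are illustrative, and the exact node-start commands depend on your
Postgres-XC release:

```text
# First run only: seed GTM with an initial GXID, before initdb'ing nodes
gtm -x 1000 -D /data/gtm &
# On later restarts, omit -x: GTM recovers the last GXID from /data/gtm

# Then start the datanodes (pgxc1, pgxc2), each from its own data folder,
# and only afterwards the coordinator, so every node can register on GTM.
```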
From: Lionel F. <lio...@gm...> - 2011-03-04 15:06:40

Hello,

I'm starting an evaluation of the cluster (not having any PG background,
but Oracle & MySQL for years...), and some questions appear to me after
reading the doc, any help appreciated.

My actual setup is :
pgxc1 for GTM, Coordinator and datanode
pgxc2 for datanode only

1. General : Is a coordinator needed for each node, or can one coordinator
'to rule them all' be set up ?

2. Configuration : What should be the differences between postgresql.conf
in /datanode and /coordinator, if there are any ? Some portions, are they
ignored for a specific function (ex : coordinator vs datanode config,
etc...)? In this case can pg_hba.conf & postgresql.conf be shared for the
same server (maybe using symbolic links...)?

3. GTM : I get on the second node a "WARNING: Do not have a GTM snapshot
available", can this be related to the previous config files/setup ?

4. Administration : are there ways of getting the online status of the
nodes from one node to another?

Thanks

Lionel F.