From: Dika Y. <di...@nt...> - 2010-08-26 09:47:01

Hi,

How do I configure the distributed and replicated data store modes for
Postgres-XC?

Best wishes,
Dika.Ye
From: Michael P. <mic...@gm...> - 2010-08-16 07:00:02

Hi,

> 1) On page 5 of the install manual, I found "gtm_coordinator_id=n+1" for
> datanode 1 and "gtm_coordinator_id=2n" for datanode n. What does this mean?
> Does it mean datanode 2 gets "gtm_coordinator_id=4" and datanode 3 gets
> "gtm_coordinator_id=6"? And is the parameter gtm_coord_id the same as
> gtm_coordinator_id?

Datanodes and Coordinators both communicate with GTM, so you have to set a
different value of gtm_coord_id for every component of the cluster, so that
each one is identified uniquely by GTM. For example, imagine a cluster made
of j Coordinators and k Datanodes: the install manual advises setting
gtm_coord_id from 1 to j for the Coordinators, and from (j+1) to (j+k) for
the Datanodes. Of course nothing is imposed, so feel free to set the id of
each component freely.

> 2) If my datanode server's data_node_user has no password, can I set the
> parameter data_node_password='' or do I not need to set it at all?

The libpq protocol is used to connect from Coordinators to Datanodes, so I
suppose it is OK to leave it empty if your user has no password.

> 3) How does data synchronization work? Does it need rsync to be set up?

Data synchronization between Datanodes is SQL-based. There are two types of
tables that you can create: replicated and distributed. If you send the
Coordinator a SQL query that updates a tuple of a replicated table, the same
SQL is sent to all the Datanodes. If you update a tuple of a distributed
table, the SQL is sent only to the targeted node.

> 4) I set up the coordinator to start with pooler_port=6667. It seems to
> start successfully, but when I use "netstat -tunlp" to check the server
> port status, port 6667 does not appear to be open. Is that correct?

The port opened by the pooler on the Coordinator won't appear with netstat.
It is not the case for me.

> 5) I followed the install manual and configured the system like this:
> Datanode 1 configuration: postgresql.conf
>     gtm_coordinator_id = 2
> Coordinator 1 configuration: postgresql.conf
>     gtm_coordinator_id = 2

I saw two configuration files (one for Coordinator 1 and one for Datanode 1)
using the same value 2 for gtm_coordinator_id. gtm_coordinator_id needs to
be different for each component of the cluster, whether Datanode or
Coordinator. If some components share the same id, GTM is not able to
differentiate the components of the cluster correctly.

> num_data_nodes = 4
>
> Then I start the GTM server:
>     su -c "/usr/local/pgsql/bin/gtm -x 628 -l /disk/dbase/gtm/gtm.log -p 6666 -D /disk/dbase/gtm &" - dbuser
> Start the coordinator server:
>     su -c "/usr/local/pgsql/bin/postgres -C -i -D /disk/dbase/dbase-coord &" - dbuser
> Start the datanode server:
>     su -c "/usr/local/pgsql/bin/postgres -X -i -D /disk/dbase/dbase &" - dbuser
>
> All servers seem to start successfully, then I try to create a database on
> one of the coordinator servers:
>     # su -c "/usr/local/pgsql/bin/createuser -p 2000 -a -d messenger" - dbuser
>     # su -c "/usr/local/pgsql/bin/createdb -p 2000 -E UNICODE data1" - dbuser
>     # su -c "/usr/local/pgsql/bin/createlang -p 2000 plpgsql data1" - dbuser
>
> Then I check the other coordinator servers; it seems the database is not
> replicated. Checking gtm.log:
> [GTM log snipped; it shows repeated "unexpected EOF on client connection" /
> "Cleaning up thread state" entries between "Sending transaction id"
> messages, as quoted in full in the original message]

I suppose this is your GTM log file, no? I think you should first check how
gtm_coordinator_id is set for each Coordinator and Datanode. Apart from
that, your configuration looks correct, and the way you are launching the
applications also looks OK.

You also have to know that when you modify a catalog table of a Coordinator
by launching DDL on it (user creation, database creation), the catalog
change is not visible on the other Coordinators. After launching DDL on a
Coordinator, you have to synchronize the catalog tables from the Coordinator
whose catalog has been updated to the other Coordinators. You can use the
utility pgxc_ddl for this purpose. Hot DDL synchronization is planned for a
future release.

Thanks,
--
Michael Paquier
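The id-numbering scheme described in the reply above (Coordinators take ids 1
to j, Datanodes take (j+1) to (j+k)) can be sketched in a few lines of POSIX
shell. The helper name xc_gtm_id is invented here purely for illustration; it
is not a Postgres-XC utility.

```shell
# Sketch of the id scheme from the reply above: with j coordinators and
# k datanodes, coordinators take gtm_coordinator_id 1..j and datanodes
# take (j+1)..(j+k), so every component gets a unique id.
# Usage: xc_gtm_id <coordinator|datanode> <index> <num_coordinators>
xc_gtm_id() {
    kind=$1; idx=$2; j=$3
    case "$kind" in
        coordinator) echo "$idx" ;;          # coordinators: 1..j
        datanode)    echo $((j + idx)) ;;    # datanodes: (j+1)..(j+k)
    esac
}

# With 4 coordinators: coordinator 2 -> id 2, datanode 2 -> id 6.
xc_gtm_id coordinator 2 4
xc_gtm_id datanode 2 4
```

Any assignment works as long as no two components share an id; this is just
the convention the install manual suggests.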
From: Dika Y. <di...@nt...> - 2010-08-16 03:25:19

Hi Suzuki,

Thank you very much for your reply. Now I understand the system topology. I
have read the install manual, and I have some more questions:

1) On page 5 of the install manual, I found "gtm_coordinator_id=n+1" for
datanode 1 and "gtm_coordinator_id=2n" for datanode n. What does this mean?
Does it mean datanode 2 gets "gtm_coordinator_id=4" and datanode 3 gets
"gtm_coordinator_id=6"? Is the parameter gtm_coord_id the same as
gtm_coordinator_id?

2) If my datanode server's data_node_user has no password, can I set the
parameter data_node_password='' or do I not need to set it at all?

3) How does data synchronization work? Does it need rsync to be set up?

4) I set up the coordinator to start with pooler_port=6667. It seems to
start successfully, but when I use "netstat -tunlp" to check the server port
status, port 6667 does not appear to be open. Is that correct?

5) I followed the install manual and configured the system like this:

Datanode 1 configuration:
postgresql.conf
########## cut here ##########
port = 12000
gtm_host = '1.1.1.1'
gtm_port = 6666
gtm_coordinator_id = 2
########## cut here ##########
pg_hba.conf
########## cut here ##########
host all all 1.1.1.0/24 trust
########## cut here ##########
Coordinator 1 configuration:
postgresql.conf
########## cut here ##########
port = 2000
pooler_port = 6667
gtm_host = '1.1.1.1'
gtm_port = 6666
gtm_coordinator_id = 1
num_data_nodes = 4
data_node_hosts = '1.1.1.11,1.1.1.12,1.1.1.13,1.1.1.14'
data_node_ports = '12000,12000,12000,12000'
data_node_users = 'dbuser'
########## cut here ##########
pg_hba.conf
########## cut here ##########
host all all 1.1.1.0/24 trust
########## cut here ##########
#--------------------------------------------------------------------------#
Datanode 1 configuration:
postgresql.conf
########## cut here ##########
port = 12000
gtm_host = '1.1.1.1'
gtm_port = 6666
gtm_coordinator_id = 3
########## cut here ##########
pg_hba.conf
########## cut here ##########
host all all 1.1.1.0/24 trust
########## cut here ##########
Coordinator 1 configuration:
postgresql.conf
########## cut here ##########
port = 2000
pooler_port = 6667
gtm_host = '1.1.1.1'
gtm_port = 6666
gtm_coordinator_id = 2
num_data_nodes = 4
data_node_hosts = '1.1.1.11,1.1.1.12,1.1.1.13,1.1.1.14'
data_node_ports = '12000,12000,12000,12000'
data_node_users = 'dbuser'
########## cut here ##########
pg_hba.conf
########## cut here ##########
host all all 1.1.1.0/24 trust
########## cut here ##########
#--------------------------------------------------------------------------#
Datanode 3 configuration:
postgresql.conf
########## cut here ##########
port = 12000
gtm_host = '1.1.1.1'
gtm_port = 6666
gtm_coordinator_id = 4
########## cut here ##########
pg_hba.conf
########## cut here ##########
host all all 1.1.1.0/24 trust
########## cut here ##########
Coordinator 1 configuration:
postgresql.conf
########## cut here ##########
port = 2000
pooler_port = 6667             # Pool Manager TCP port
gtm_host = '1.1.1.1'
gtm_port = 6666                # (change requires restart)
gtm_coordinator_id = 3
num_data_nodes = 4             # Number of Data Nodes
                               # (change requires restart)
data_node_hosts = '1.1.1.11,1.1.1.12,1.1.1.13,1.1.1.14'
data_node_ports = '12000,12000,12000,12000'
data_node_users = 'dbuser'
########## cut here ##########
pg_hba.conf
########## cut here ##########
host all all 1.1.1.0/24 trust
########## cut here ##########
#--------------------------------------------------------------------------#
Datanode 4 configuration:
postgresql.conf
########## cut here ##########
port = 12000
gtm_host = '1.1.1.1'
gtm_port = 6666
gtm_coordinator_id = 5
########## cut here ##########
pg_hba.conf
########## cut here ##########
host all all 1.1.1.0/24 trust
########## cut here ##########
Coordinator 5 configuration:
postgresql.conf
########## cut here ##########
port = 2000
pooler_port = 6667             # Pool Manager TCP port
gtm_host = '1.1.1.1'
gtm_port = 6666                # (change requires restart)
gtm_coordinator_id = 4
num_data_nodes = 4             # Number of Data Nodes
                               # (change requires restart)
data_node_hosts = '1.1.1.11,1.1.1.12,1.1.1.13,1.1.1.14'
data_node_ports = '12000,12000,12000,12000'
data_node_users = 'dbuser'
########## cut here ##########
pg_hba.conf
########## cut here ##########
host all all 1.1.1.0/24 trust
########## cut here ##########
#--------------------------------------------------------------------------#
GTM server configuration:
pgxc.conf
########## cut here ##########
coordinator_hosts = '1.1.1.11,1.1.1.12,1.1.1.13,1.1.1.14'
coordinator_ports = '2000,2000,2000,2000'
coordinator_folders = '/disk/dbase/dbase-coord'
########## cut here ##########

Then I start the GTM server:
su -c "/usr/local/pgsql/bin/gtm -x 628 -l /disk/dbase/gtm/gtm.log -p 6666 -D /disk/dbase/gtm &" - dbuser
Start the coordinator server:
su -c "/usr/local/pgsql/bin/postgres -C -i -D /disk/dbase/dbase-coord &" - dbuser
Start the datanode server:
su -c "/usr/local/pgsql/bin/postgres -X -i -D /disk/dbase/dbase &" - dbuser

All servers seem to start successfully, then I try to create a database on
one of the coordinator servers:
# su -c "/usr/local/pgsql/bin/createuser -p 2000 -a -d messenger" - dbuser
# su -c "/usr/local/pgsql/bin/createdb -p 2000 -E UNICODE data1" - dbuser
# su -c "/usr/local/pgsql/bin/createlang -p 2000 plpgsql data1" - dbuser

Then I check the other coordinator servers; it seems the database is not
replicated. Checking gtm.log:
########## cut here ##########
3022:1086060864:2010-08-14 14:12:33.719 HKT -LOG: unexpected EOF on client connection
LOCATION: ReadCommand, main.c:867
3023:1086060864:2010-08-14 14:12:33.719 HKT -LOG: Cleaning up thread state
LOCATION: GTM_ThreadCleanup, gtm_thread.c:265
3024:1086060864:2010-08-14 14:12:37.360 HKT -LOG: Sending transaction id 19914
LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:916
3025:1086060864:2010-08-14 14:12:37.377 HKT -LOG: unexpected EOF on client connection
LOCATION: ReadCommand, main.c:867
3026:1086060864:2010-08-14 14:12:37.378 HKT -LOG: Cleaning up thread state
LOCATION: GTM_ThreadCleanup, gtm_thread.c:265
3027:1086060864:2010-08-14 14:12:38.563 HKT -LOG: Sending transaction id 19915
LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:916
3028:1086060864:2010-08-14 14:12:38.582 HKT -LOG: unexpected EOF on client connection
LOCATION: ReadCommand, main.c:867
3029:1086060864:2010-08-14 14:12:38.582 HKT -LOG: Cleaning up thread state
LOCATION: GTM_ThreadCleanup, gtm_thread.c:265
3030:1086060864:2010-08-14 14:12:47.692 HKT -LOG: Sending transaction id 19916
LOCATION: ProcessBeginTransactionGetGXIDCommand, gtm_txn.c:916
########## cut here ##########

Do you know what is happening on my servers? Is my configuration correct?

Thanks again.

Best wishes,
Dika Ye

-----Original Message-----
From: koi...@gm... [mailto:koi...@gm...] On Behalf Of Koichi Suzuki
Sent: 14 August 2010, 0:41
To: Dika Ye
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] How to use it?

[quoted reply snipped; see Koichi Suzuki's message of 2010-08-13 in this
thread]
From: Michael P. <mic...@gm...> - 2010-08-15 23:38:07

Hi,

> > 4) The documents describe how to start GTM, coordinator and datanode,
> > but not how to stop or recover them. When one of my datanodes fails,
> > how do I recover it? And the coordinator and GTM server?
>
> gtm_ctl and pg_ctl utilities can stop these features, like PostgreSQL.
> Michael: Could you provide the information how to stop them?

As written below, we designed two utilities to start, restart or stop
Postgres-XC components:
- gtm_ctl, used to interact with GTM or GTM-proxy
- pg_ctl, directly inspired by the existing application in PostgreSQL. It
  has been extended so as to be able to interact with Postgres-XC
  Coordinators and Datanodes.

To be more precise, for pg_ctl:

1) This starts a Datanode or a Coordinator with the specified data folder
and some customized options:
pg_ctl start -S coordinator -D /data/folder -o '-p $port_number + additional options'
pg_ctl start -S datanode -D /data/folder -o '-p $port_number + additional options'

2) This stops a Coordinator or a Datanode with the specified data folder:
pg_ctl stop -S coordinator -D /data/folder
pg_ctl stop -S datanode -D /data/folder

3) This restarts a Coordinator or a Datanode with the specified data folder
and relaunches it with the new options specified in the '-o' string:
pg_ctl restart -S coordinator -D /data/folder -o '-p $port_number + additional options'
pg_ctl restart -S datanode -D /data/folder -o '-p $port_number + additional options'

gtm_ctl does the same for GTM and GTM-proxy processes. Use the -S option to
specify whether the chosen data folder is of type 'gtm' or 'gtm-proxy'. As
before, you can start/stop/restart processes with commands of this form:
gtm_ctl start[stop/restart] -S gtm[gtm-proxy] -D /data/folder -o 'new options'

You can refer to this document for all the details of these applications:
https://sourceforge.net/projects/postgres-xc/files/Version_0.9.2/PG-XC_ReferenceManual_v0_9_2.pdf/download

Thanks,
--
Michael Paquier
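The pg_ctl invocations listed above can be assembled by a small shell helper
before anything is executed, which makes the pattern easy to check. The
function name build_pg_ctl_cmd is invented here for illustration; it only
builds the command strings from the reply above and never runs pg_ctl.

```shell
# Sketch: assemble the pg_ctl command lines described above without
# executing them. The helper name build_pg_ctl_cmd is hypothetical.
# Usage: build_pg_ctl_cmd <start|stop|restart> <coordinator|datanode> <datadir> [options]
build_pg_ctl_cmd() {
    action=$1; kind=$2; datadir=$3; opts=$4
    cmd="pg_ctl $action -S $kind -D $datadir"
    # start and restart take an option string via -o; stop does not.
    if [ -n "$opts" ] && [ "$action" != "stop" ]; then
        cmd="$cmd -o '$opts'"
    fi
    echo "$cmd"
}

build_pg_ctl_cmd start coordinator /data/coord1 '-p 2000'
build_pg_ctl_cmd stop datanode /data/dn1
```

A wrapper like this is only a convenience; in practice you would pass the
echoed string to the real pg_ctl from the Postgres-XC installation.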
From: Koichi S. <koi...@gm...> - 2010-08-13 16:41:36
|
Thank you very much for your interest in Postgres-XC. I hope my comments inline is helpful for you. ---------- Koichi Suzuki 2010/8/13 Dika Ye <di...@nt...>: > Hi list, > > > > I am newer of postgres-xc, now, I am planning to create the postgresql > cluster using this software, and here I have some questions: > > > > 1) Can postgres-xc data node and coordinator install in the same > machine, same working directory? How to deploy them? A coordinator and a data node can be installed in the same machine, but they can't share the working directory. From PostgreSQL point of view, they're separete databases. Please refer to "Install manual" for practice. Please write to me if you have further questions. > > 2) If I set the coordinator use port 2000 and data node use port > 12000, and which my app server will connect to? App server should connect to 2000. > > 3) Can I install more than one GTM server? And can they work on active > / standby or active / active mode? How to configure it? No. We're now designing GTM standby and trying to assign a resource for the implementation. > > 4) The document descript how to start GTM, coordinator and data node, > but don’t descript how to stop it, recovery it. When one of my data node > failed, how to recovery it? And how the coordinator, GTM server…… gtm_ctl and pg_ctl utilities can stop these features, like PostgreSQL. Michael: Could you provide the information how to stop them? > > 5) I saw the architecture of postgre-xc, my understanding is that my > app server will connect to coordinator server, then read / write into it, > right? Yes. Data node is not directly visible from applications. > > 6) Does it need to deploy a SAN/NAS/NFS file system for the data node > servers or coordinator servers? Or the data keep a copy in each data node > server or coordinator server? No. Each data node has its own database. XC provides two ways of table distribution, distributed and replicated. 
In distributed table, each tuple is distributed according to the value of distribution column. In replicated tables, tuples are replicated to all the data nodes. For the criteria which table should be distributed or replicated, please take a look at my presentation material for PGCon2010 available from Postgres-XC page in sourceforge.net. > > 7) How many data node server or coordinator server does it support? > How to add a new data node server or coordinator server into the current > postgres-xc system? So far, we tested up to ten coordinators and ten data nodes. We have to add coordinator or data node manually and it is not simple so far. We're planning to provide a utility for this by the end of this calendar year. > > > > Is there any configuration sample? Where can find it? Please take a look at installation manual and DBT-1 installation document. You will find an example. Please feel free to write any questions, comments and requirements to this mailing list. Best Regards; --- Koichi Suzuki > > > > Thanks. > > > > Best wishes, > > > > Dika.Ye > > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by > > Make an app they can't live without > Enter the BlackBerry Developer Challenge > http://p.sf.net/sfu/RIM-dev2dev > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
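The distributed/replicated table types described above are chosen per table
at creation time. The DISTRIBUTE BY clause shown here is the syntax of later
Postgres-XC releases and may not match the 0.9-era version discussed in this
thread, so treat it as a hedged sketch; the table and column names are
invented for illustration.

```sql
-- Replicated: every datanode holds a full copy of the table, so an
-- UPDATE sent to a coordinator fans out to all datanodes.
CREATE TABLE items (
    item_id  integer PRIMARY KEY,
    name     text
) DISTRIBUTE BY REPLICATION;

-- Distributed: each tuple lives on one datanode, chosen by hashing the
-- distribution column, so an UPDATE touches only the target node.
CREATE TABLE orders (
    order_id integer,
    item_id  integer,
    qty      integer
) DISTRIBUTE BY HASH (order_id);
```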
From: Dika Y. <di...@nt...> - 2010-08-13 11:00:36

Hi list,

I am new to Postgres-XC. I am planning to create a PostgreSQL cluster using
this software, and I have some questions:

1) Can a Postgres-XC datanode and coordinator be installed on the same
machine, in the same working directory? How should they be deployed?

2) If I set the coordinator to use port 2000 and the datanode to use port
12000, which one does my app server connect to?

3) Can I install more than one GTM server? Can they work in active/standby
or active/active mode? How do I configure it?

4) The documents describe how to start GTM, coordinator and datanode, but
not how to stop or recover them. When one of my datanodes fails, how do I
recover it? And the coordinator and GTM server?

5) I saw the architecture of Postgres-XC. My understanding is that my app
server connects to a coordinator server and reads/writes through it, right?

6) Does it need a SAN/NAS/NFS file system for the datanode or coordinator
servers? Or is a copy of the data kept on each datanode or coordinator
server?

7) How many datanode or coordinator servers does it support? How do I add a
new datanode or coordinator server to a running Postgres-XC system?

Is there any configuration sample? Where can I find it?

Thanks.

Best wishes,
Dika.Ye