From: seikath <se...@gm...> - 2013-03-27 08:47:14
I was about to put in a request for that; most of the available sources do not have that particular info, all of them mention only the datanode creation.

About Barcelona, well, I keep my word guys: whoever is in *B*arcelona and has some free time, ping me please.

At Michael: I'm not a /Spaniard/, but I do enjoy watching football ( Visca Barça !!! ) with friends, some beers and sharing ideas and experience .. :)
In fact I do that with several friends of mine from SkySQL ..

Cheers, and thank you all again.

Ivan

On 03/27/2013 04:52 AM, Michael Paquier wrote:
>
> On Wed, Mar 27, 2013 at 3:16 AM, Nikhil Sontakke <ni...@st... <mailto:ni...@st...>> wrote:
> > prod-xc-coord01
> > postgres=# select * from pgxc_node;
> > node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> > -----------+-----------+-----------+--------------+----------------+------------------+------------
> > coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
> > datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
> > datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
> >
> > prod-xc-coord02
> > postgres=# select * from pgxc_node;
> > node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> > -----------+-----------+-----------+--------------+----------------+------------------+-------------
> > coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
> > datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
> > datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
> >
> > After that setup I was able to create databases from both coordinator nodes, but each coordinator does not see the database created by the other coordinator.
>
> You need to add coord1 metadata on coord2 node and vice versa as well.
>
> There are so many people making the same mistake and requesting help for similar stuff on this mailing list that it would be worth improving the documentation.
> --
> Michael
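The fix described above amounts to registering each coordinator in the other coordinator's pgxc_node catalog and then reloading the connection pools. A minimal sketch, reusing the host and port values quoted in this thread (coord1 on 10.245.114.8, coord2 on 10.101.51.38, coordinator port 5432); the exact statements are not from the thread, so adjust names and addresses to your own topology:

-- Run on coord1 (prod-xc-coord01): register the other coordinator, then reload the pool.
CREATE NODE coord2 WITH (TYPE = 'coordinator', HOST = '10.101.51.38', PORT = 5432);
SELECT pgxc_pool_reload();

-- Run on coord2 (prod-xc-coord02): register coord1 the same way.
CREATE NODE coord1 WITH (TYPE = 'coordinator', HOST = '10.245.114.8', PORT = 5432);
SELECT pgxc_pool_reload();

Once both coordinators know about each other, utility statements such as CREATE DATABASE issued on one coordinator are also sent to the other, which is why the databases were previously invisible across coordinators.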
From: Nikhil S. <ni...@st...> - 2013-03-27 06:06:55
> Whenever you have a chance to visit Barcelona, let me know, I owe you one :)
>
Barcelona! Sure Ivan :)

Regards,
Nikhils

> Kind regards,
>
> Ivan
>
> On 03/26/2013 07:16 PM, Nikhil Sontakke wrote:
>>> prod-xc-coord01
>>> postgres=# select * from pgxc_node;
>>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>>> -----------+-----------+-----------+--------------+----------------+------------------+------------
>>> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
>>> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
>>> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>>>
>>> prod-xc-coord02
>>> postgres=# select * from pgxc_node;
>>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>>> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
>>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>>>
>>> After that setup I was able to create databases from both coordinator nodes, but each coordinator does not see the database created by the other coordinator.
>>>
>> You need to add coord1 metadata on coord2 node and vice versa as well.
>>
>> Regards,
>> Nikhils
>
> ------------------------------------------------------------------------------
> Own the Future-Intel® Level Up Game Demo Contest 2013
> Rise to greatness in Intel's independent game demo contest.
> Compete for recognition, cash, and the chance to get your game
> on Steam. $5K grand prize plus 10 genre and skill prizes.
> Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general

--
StormDB - http://www.stormdb.com
The Database Cloud
From: Michael P. <mic...@gm...> - 2013-03-27 03:56:30
On Wed, Mar 27, 2013 at 4:44 AM, seikath <se...@gm...> wrote:
> Whenever you have a chance to visit Barcelona, let me know, I owe you one :)
>
Avoid saying that... I can answer really, really quickly, and my home country is close to yours. My country even lost a soccer match yesterday against yours... ;)
--
Michael
From: Michael P. <mic...@gm...> - 2013-03-27 03:52:41
On Wed, Mar 27, 2013 at 3:16 AM, Nikhil Sontakke <ni...@st...> wrote:
> > prod-xc-coord01
> > postgres=# select * from pgxc_node;
> > node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> > -----------+-----------+-----------+--------------+----------------+------------------+------------
> > coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
> > datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
> > datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
> >
> > prod-xc-coord02
> > postgres=# select * from pgxc_node;
> > node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
> > -----------+-----------+-----------+--------------+----------------+------------------+-------------
> > coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
> > datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
> > datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
> >
> > After that setup I was able to create databases from both coordinator nodes, but each coordinator does not see the database created by the other coordinator.
>
> You need to add coord1 metadata on coord2 node and vice versa as well.
>
There are so many people making the same mistake and requesting help for similar stuff on this mailing list that it would be worth improving the documentation.
--
Michael
From: Ashutosh B. <ash...@en...> - 2013-03-27 03:25:08
Hi Ivan,

On Tue, Mar 26, 2013 at 8:01 PM, seikath <se...@gm...> wrote:
> Hello Ashutosh,
>
> My initial setup was datanode1 as primary on all coordinators.
>
> But the database created on coordinator1 was not visible by coordinator2
> even though it was populated on both datanodes.
>
Can you provide some more information? This looks like a bug.

> So I tested with one primary datanode on one coordinator, hoping the GTM
> will know that and will distribute the info.
> Anyway, these are my first steps with XC, so I might have made some simple
> config error ..
>
> I want to use the coordinators as entry points for loadbalanced external
> SQL requests
>
> Kind regards,
>
> Ivan
>
> On 03/26/2013 03:21 PM, Ashutosh Bapat wrote:
> > On Tue, Mar 26, 2013 at 7:46 PM, seikath <se...@gm...> wrote:
>> Hello all,
>>
>> I have an XC setup of 4 AWS instances:
>>
>> =============================
>> instance: prod-xc-coord1
>>
>> coordinator config at prod-xc-coord1
>> listen_addresses = '*'
>> port = 5432
>> max_connections = 100
>> shared_buffers = 120MB
>> max_prepared_transactions = 100
>> datestyle = 'iso, mdy'
>> lc_messages = 'en_US.UTF-8'
>> lc_monetary = 'en_US.UTF-8'
>> lc_numeric = 'en_US.UTF-8'
>> lc_time = 'en_US.UTF-8'
>> default_text_search_config = 'pg_catalog.english'
>> pooler_port = 6667
>> min_pool_size = 1
>> max_pool_size = 100
>> max_coordinators = 16
>> max_datanodes = 16
>> gtm_host = '10.196.154.85'
>> gtm_port = 6543
>> pgxc_node_name = 'coord1'
>> enforce_two_phase_commit = on
>> enable_fast_query_shipping = on
>> enable_remotejoin = on
>> enable_remotegroup = on
>>
>> datanode config at prod-xc-coord1
>> listen_addresses = '*'
>> port = 6543
>> max_connections = 100
>> shared_buffers = 320MB
>> max_prepared_transactions = 100
>> datestyle = 'iso, mdy'
>> lc_messages = 'en_US.UTF-8'
>> lc_monetary = 'en_US.UTF-8'
>> lc_numeric = 'en_US.UTF-8'
>> lc_time = 'en_US.UTF-8'
>> default_text_search_config = 'pg_catalog.english'
>> max_coordinators = 16
>> max_datanodes = 16
>> gtm_host = '10.196.154.85'
>> gtm_port = 6543
>> pgxc_node_name = 'datanode1'
>> enforce_two_phase_commit = on
>> enable_fast_query_shipping = on
>> enable_remotejoin = on
>> enable_remotegroup = on
>>
>> =============================
>> instance : prod-xc-coord2
>>
>> coordinator config at prod-xc-coord2
>> listen_addresses = '*'
>> port = 5432
>> max_connections = 100
>> superuser_reserved_connections = 3
>> shared_buffers = 120MB
>> max_prepared_transactions = 100
>> datestyle = 'iso, mdy'
>> lc_messages = 'en_US.UTF-8'
>> lc_monetary = 'en_US.UTF-8'
>> lc_numeric = 'en_US.UTF-8'
>> lc_time = 'en_US.UTF-8'
>> default_text_search_config = 'pg_catalog.english'
>> pooler_port = 6667
>> min_pool_size = 1
>> max_pool_size = 100
>> max_coordinators = 16
>> max_datanodes = 16
>> gtm_host = '10.196.154.85'
>> gtm_port = 6543
>> pgxc_node_name = 'coord2'
>> enforce_two_phase_commit = on
>> enable_fast_query_shipping = on
>> enable_remotejoin = on
>> enable_remotegroup = on
>>
>> datanode config at prod-xc-coord2
>> listen_addresses = '*'
>> port = 6543
>> max_connections = 100
>> shared_buffers = 320MB
>> max_prepared_transactions = 100
>> datestyle = 'iso, mdy'
>> lc_messages = 'en_US.UTF-8'
>> lc_monetary = 'en_US.UTF-8'
>> lc_numeric = 'en_US.UTF-8'
>> lc_time = 'en_US.UTF-8'
>> default_text_search_config = 'pg_catalog.english'
>> max_coordinators = 16
>> max_datanodes = 16
>> gtm_host = '10.196.154.85'
>> gtm_port = 6543
>> pgxc_node_name = 'datanode2'
>> enforce_two_phase_commit = on
>> enable_fast_query_shipping = on
>> enable_remotejoin = on
>> enable_remotegroup = on
>>
>> =============================
>> instance prod-xc-gtm-proxy : IP 10.196.154.85
>>
>> proxy config:
>> nodename = 'one'
>> listen_addresses = '*'
>> port = 6543
>> gtm_host = '10.244.158.120'
>> gtm_port = 5432
>>
>> =============================
>> instance prod-xc-gtm : IP 10.244.158.120
>> gtm config
>> nodename = 'one'
>> listen_addresses = '*'
>> port = 5432
>>
>> =============================
>>
>> the pg_hba.conf of both coordinator and data nodes at both prod-xc-coord1
>> and prod-xc-coord2 allows the other node to connect:
>> =================================================
>> pg_hba.conf at prod-xc-coord01 IP 10.245.114.8
>> local all all trust
>> host all all 127.0.0.1/32 trust
>> host all all ::1/128 trust
>> host all all 10.101.51.38/32 trust
>>
>> pg_hba.conf at prod-xc-coord02 IP 10.101.51.38
>> local all all trust
>> host all all 127.0.0.1/32 trust
>> host all all ::1/128 trust
>> host all all 10.245.114.8/32 trust
>>
>> the connectivity is tested and confirmed.
>> =================================================
>>
>> initial nodes setup:
>> prod-xc-coord01
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+------------
>> coord1    | C         | 5432      | localhost    | f              | f                | 1885696643
>> datanode1 | D         | 6543      | localhost    | t              | t                | 888802358
>> datanode2 | D         | 6543      | 10.101.51.38 | f              | f                | -905831925
>>
>> prod-xc-coord02
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>> datanode1 | D         | 6543      | 10.245.114.8 | t              | f                | 888802358
>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>>
>> After that setup I was able to create databases from both coordinator
>> nodes, but each coordinator does not see the database created by the other
>> coordinator.
>>
>> I then tested the node setup with only one primary node:
>> prod-xc-coord02
>> postgres=# alter node datanode1 with (type = 'datanode', host = '10.245.114.8', port = 6543, primary=false, preferred=false);
>> ALTER NODE
>> postgres=# select pgxc_pool_reload();
>>  pgxc_pool_reload
>> ------------------
>>  t
>> (1 row)
>> postgres=# select * from pgxc_node;
>> node_name | node_type | node_port | node_host    | nodeis_primary | nodeis_preferred | node_id
>> -----------+-----------+-----------+--------------+----------------+------------------+-------------
>> coord2    | C         | 5432      | localhost    | f              | f                | -1197102633
>> datanode2 | D         | 6543      | localhost    | f              | t                | -905831925
>> datanode1 | D         | 6543      | 10.245.114.8 | f              | f                | 888802358
>> (3 rows)
>>
>> the result is the same.
>>
>> I know I am missing something simple such as a config or open port, but at the
>> moment I can't figure out what's missing in the setup.
>>
> What are you trying to do here? You have set primary node for datanode1 to
> false, which is what the query result displays. Can you please elaborate
> what's going wrong?
>
>> In general our plan is to use a loadbalancer in front of several instances
>> hosting one coordinator and one datanode.
>>
>> I apologize for the ugly paste, but I am not sure whether this mailing list
>> supports html formatting.
>>
>> Kind regards,
>>
>> Ivan
>>
>> ------------------------------------------------------------------------------
>> Own the Future-Intel® Level Up Game Demo Contest 2013
>> Rise to greatness in Intel's independent game demo contest.
>> Compete for recognition, cash, and the chance to get your game
>> on Steam. $5K grand prize plus 10 genre and skill prizes.
>> Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d
>> _______________________________________________
>> Postgres-xc-general mailing list
>> Pos...@li...
>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Enterprise Postgres Company
>
> ------------------------------------------------------------------------------
> Own the Future-Intel® Level Up Game Demo Contest 2013
> Rise to greatness in Intel's independent game demo contest.
> Compete for recognition, cash, and the chance to get your game
> on Steam. $5K grand prize plus 10 genre and skill prizes.
> Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Enterprise Postgres Company
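To close the loop on the symptom in this thread (databases created on one coordinator not visible on the other): the GTM only provides transaction IDs and snapshots, it does not distribute node metadata, so each coordinator keeps its own pgxc_node catalog and the cross-registration plus pgxc_pool_reload() has to be run on every coordinator. A quick way to confirm the fix, assuming the CREATE NODE statements sketched earlier in this page have been run on both coordinators; the database name below is just an example:

-- On coord1: with coord2 registered in pgxc_node, DDL is sent to all
-- registered nodes, so the database is created cluster-wide.
CREATE DATABASE visibility_test;

-- On coord2: the database should already be listed, without creating it again.
SELECT datname FROM pg_database WHERE datname = 'visibility_test';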