From: Mason S. <ma...@st...> - 2013-06-05 12:30:14
On Wed, Jun 5, 2013 at 7:37 AM, Andrei Martsinchyk <and...@gm...> wrote:
> Hi Iván,
>
> First of all, you should not replicate all your tables. There is no case
> where that is reasonable, except maybe some ultimate test case. A single
> Postgres server would perform better than your four-node cluster in any
> application. So think again about the distribution planning.

It might be OK if the data set is relatively small (fits in memory on all
nodes) and there is very high read-only concurrency with multiple
coordinators. Anyway, I agree: in general, distributing tables is the thing
to do.

> Regarding your problem, I guess the node definitions were different at the
> moment when you created your tables.
> To verify, please run the following query on all your coordinators:
>
> select nodeoids from pgxc_class;
>
> The result should look like this:
>
> nodeoids
> ---------------------------
> 16386 16387 16388 16389
> 16386 16387 16388 16389
> ...
> (N rows)
>
> If you see fewer than four node OIDs in some or all rows, that is the cause.

--
Mason Sharp

StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Services
From: Andrei M. <and...@gm...> - 2013-06-05 11:37:43
Hi Iván,
First of all, you should not replicate all your tables. There is no case
where that is reasonable, except maybe some ultimate test case. A single
Postgres server would perform better than your four-node cluster in any
application. So think again about the distribution planning.
Regarding your problem, I guess the node definitions were different at the
moment when you created your tables.
To verify, please run the following query on all your coordinators:
select nodeoids from pgxc_class;
The result should look like this:
nodeoids
---------------------------
16386 16387 16388 16389
16386 16387 16388 16389
...
(N rows)
If you see fewer than four node OIDs in some or all rows, that is the cause.
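As a side note, the same check is a little easier to read when the node catalog is listed alongside it; a minimal sketch, assuming only the XC 1.0 catalogs used in this thread (pgxc_class with its pcrelid and nodeoids columns, pgxc_node with node_type):

-- On each coordinator: which node OIDs is each table defined on?
SELECT pcrelid::regclass AS table_name, nodeoids
FROM pgxc_class;

-- The datanode OIDs this coordinator knows about, for comparison.
SELECT oid, node_name
FROM pgxc_node
WHERE node_type = 'D';

A replicated table created while all four datanodes were registered should list all four datanode OIDs in nodeoids on every coordinator.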
2013/6/5 seikath <se...@gm...>
> Hello guys,
>
> I am facing a problem with XC and would like to know if it is common or not.
>
> 4 AWS-based XC nodes installed with a datanode and a coordinator on each one,
> plus a separate gtm-proxy node and a separate gtm node.
> version used:
> psql (Postgres-XC) 1.0.3
> (based on PostgreSQL) 9.1.9
>
> Once installed, it operates OK for db/role creation, database import, etc.
>
> The issue is that on db import only three of the nodes get the data, even
> though all the table definitions use *distribute by replication*:
>
> xzcat prod-db01-new.2013-06-04.12.31.34.sql.xz | sed 'h;/^CREATE
> TABLE/,/^);/s/;/ DISTRIBUTE BY REPLICATION;/' | psql -U postgres dbname
> I see no errors at the time of the db import;
> the tables, indexes, and pkeys are replicated, but the data is only on
> three of the 4 active nodes.
>
> Details :
>
> # coordinators config:
>
> boxes@vpc-xc-coord-01:5432::boxes_production=[Wed Jun 5 08:41:26 UTC 2013]> select * from pgxc_node;
> node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
> ------------+-----------+-----------+-----------+----------------+------------------+-------------
> coord01 | C | 5432 | localhost | f | f | -951114102
> datanode01 | D | 6543 | localhost | t | t | -561864558
> coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700
> datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207
> coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819
> datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519
> coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041
> datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720
> (8 rows)
>
> postgres@vpc-xc-coord-02:5432::boxes_production=[Wed Jun 5 08:42:24 UTC 2013]# select * from pgxc_node;
> node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
> ------------+-----------+-----------+-----------+----------------+------------------+-------------
> coord02 | C | 5432 | localhost | f | f | -1523582700
> datanode02 | D | 6543 | localhost | f | t | 670480207
> coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102
> datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558
> coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819
> datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519
> coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041
> datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720
> (8 rows)
> postgres@vpc-xc-coord-03:5432::boxes_production=[Wed Jun 5 08:42:57 UTC 2013]# select * from pgxc_node;
> node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
> ------------+-----------+-----------+-----------+----------------+------------------+-------------
> coord03 | C | 5432 | localhost | f | f | 1641506819
> datanode03 | D | 6543 | localhost | f | t | -1804036519
> coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102
> datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558
> coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700
> datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207
> coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041
> datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720
>
> postgres@vpc-xc-coord-04:5432::boxes_production=[Wed Jun 5 08:20:35 UTC 2013]# select * from pgxc_node;
> node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
> ------------+-----------+-----------+-----------+----------------+------------------+-------------
> coord04 | C | 5432 | localhost | f | f | -1385444041
> datanode04 | D | 6543 | localhost | f | t | 1005050720
> coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102
> datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558
> coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700
> datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207
> coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819
> datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519
>
>
> first node coordinator and datanode config:
> ====================================================================
>
> postgres@vpc-xc-coord-01:[Wed Jun 05 08:40:00][/usr/local/pgsql]$ cat datanode.postgresql.conf
> listen_addresses = '*' # what IP address(es) to listen on;
> port = 6543 # (change requires restart)
> max_connections = 100 # (change requires restart)
> shared_buffers = 320MB # min 128kB
> max_prepared_transactions = 100 # zero disables the feature
> datestyle = 'iso, mdy'
> lc_messages = 'en_US.UTF-8' # locale for system error message
> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
> lc_numeric = 'en_US.UTF-8' # locale for number formatting
> lc_time = 'en_US.UTF-8' # locale for time formatting
> default_text_search_config = 'pg_catalog.english'
> include '/usr/local/pgsql/gtm.include.conf'
> include '/usr/local/pgsql/datanode_node_name.conf'
> #pgxc_node_name = 'datanode04' # Coordinator or Datanode name
> enforce_two_phase_commit = on # Enforce the usage of two-phase commit on transactions
> enable_fast_query_shipping = on
> enable_remotejoin = on
> enable_remotegroup = on
>
>
> postgres@vpc-xc-coord-01:[Wed Jun 05 08:43:29][/usr/local/pgsql]$ cat coordinator.postgresql.conf
> listen_addresses = '*' # what IP address(es) to listen on;
> port = 5432 # (change requires restart)
> max_connections = 100 # (change requires restart)
> shared_buffers = 120MB # min 128kB
> max_prepared_transactions = 100 # zero disables the feature
> datestyle = 'iso, mdy'
> lc_messages = 'en_US.UTF-8' # locale for system error message
> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
> lc_numeric = 'en_US.UTF-8' # locale for number formatting
> lc_time = 'en_US.UTF-8' # locale for time formatting
> default_text_search_config = 'pg_catalog.english'
> pooler_port = 6667 # Pool Manager TCP port
> min_pool_size = 1 # Initial pool size
> max_pool_size = 100 # Maximum pool size
> max_coordinators = 16 # Maximum number of Coordinators
> max_datanodes = 16 # Maximum number of Datanodes
> include '/usr/local/pgsql/gtm.include.conf'
> include '/usr/local/pgsql/coordinator_node_name.conf'
> enforce_two_phase_commit = on # Enforce the usage of two-phase commit on transactions
> enable_fast_query_shipping = on
> enable_remotejoin = on
> enable_remotegroup = on
>
>
> postgres@vpc-xc-coord-01:[Wed Jun 05 08:43:38][/usr/local/pgsql]$ cat /usr/local/pgsql/gtm.include.conf
> gtm_host = '10.0.1.16' # Host name or address of GTM Proxy, if not - direct link to GTM
> gtm_port = 6543
>
> #gtm_host = '127.0.0.1'
> #gtm_port = 5434
>
> boxes@vpc-xc-coord-01:5432::boxes_production=[Wed Jun 5 08:42:04 UTC 2013]> select count(*) from friends;
> count
> -------
> 1
> (1 row)
>
> Time: 4.698 ms
>
> postgres@vpc-xc-coord-02:5432::boxes_production=[Wed Jun 5 08:42:25 UTC 2013]# select count(*) from friends;
> count
> -------
> 41416
> (1 row)
>
> postgres@vpc-xc-coord-03:5432::boxes_production=[Wed Jun 5 08:43:01 UTC 2013]# select count(*) from friends;
> count
> -------
> 41416
> (1 row)
>
> postgres@vpc-xc-coord-04:5432::boxes_production=[Wed Jun 5 08:42:53 UTC 2013]# select count(*) from friends;
> count
> -------
> 41416
> (1 row)
>
> # identical configs :
> postgres@vpc-xc-coord-01:[Wed Jun 05 08:53:58][/usr/local/pgsql]$ md5sum
> coordinator.postgresql.conf
> b7f61b5d8baeec83cd82d9f1ee744728 coordinator.postgresql.conf
> postgres@vpc-xc-coord-02:[Wed Jun 05 08:53:53][~]$ md5sum
> coordinator.postgresql.conf
> b7f61b5d8baeec83cd82d9f1ee744728 coordinator.postgresql.conf
>
> postgres@vpc-xc-coord-01:[Wed Jun 05 08:54:58][/usr/local/pgsql]$ md5sum
> datanode.postgresql.conf
> 00d6a5736b6401dc6cc3d820fb412082 datanode.postgresql.conf
> postgres@vpc-xc-coord-02:[Wed Jun 05 08:54:37][~]$ md5sum
> datanode.postgresql.conf
> 00d6a5736b6401dc6cc3d820fb412082 datanode.postgresql.conf
>
> postgres@vpc-xc-coord-01:[Wed Jun 05 08:55:24][/usr/local/pgsql]$ md5sum
> /usr/local/pgsql/gtm.include.conf
> a6e7c3a21958a23bfb5054dc645d9576 /usr/local/pgsql/gtm.include.conf
> postgres@vpc-xc-coord-02:[Wed Jun 05 08:55:01][~]$ md5sum
> /usr/local/pgsql/gtm.include.conf
> a6e7c3a21958a23bfb5054dc645d9576 /usr/local/pgsql/gtm.include.conf
>
>
> ====================================================================
> In general I suspect a wrong configuration, but atm I cannot find any.
>
> I did a test importing the same db at the second XC node, and it's the same
> issue: the first XC node gets everything replicated except the actual data.
>
> My plan is to launch a new clone instance of vpc-xc-coord-02 as a
> replacement for vpc-xc-coord-01, but that is the desperate plan C.
> I want to know what happens .. :)
>
>
> Kind regards,
>
> Iván
>
>
>
>
>
> ------------------------------------------------------------------------------
> How ServiceNow helps IT people transform IT departments:
> 1. A cloud service to automate IT design, transition and operations
> 2. Dashboards that offer high-level views of enterprise services
> 3. A single system of record for all IT processes
> http://p.sf.net/sfu/servicenow-d2d-j
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>
>
--
Andrei Martsinchyk
StormDB - http://www.stormdb.com
The Database Cloud
From: seikath <se...@gm...> - 2013-06-05 09:01:53
Hello guys,
I am facing a problem with XC and would like to know if it is common or not.
4 AWS-based XC nodes installed with a datanode and a coordinator on each one, plus a separate gtm-proxy node and a separate gtm node.
version used:
psql (Postgres-XC) 1.0.3
(based on PostgreSQL) 9.1.9
Once installed, it operates OK for db/role creation, database import, etc.
The issue is that on db import only three of the nodes get the data, even though all the table definitions use *distribute by replication*:
xzcat prod-db01-new.2013-06-04.12.31.34.sql.xz | sed 'h;/^CREATE TABLE/,/^);/s/;/ DISTRIBUTE BY REPLICATION;/' | psql -U postgres dbname
I see no errors at the time of the db import;
the tables, indexes, and pkeys are replicated, but the data is only on three of the 4 active nodes.
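For illustration, the sed filter in the import command above rewrites the closing ");" of each CREATE TABLE statement in the dump so that every table is created replicated; a minimal before/after sketch, using a hypothetical column list for the friends table queried later in this message:

-- As emitted in the dump:
CREATE TABLE friends (
    id   integer PRIMARY KEY,
    name text
);

-- As rewritten by the sed filter before being fed to psql:
CREATE TABLE friends (
    id   integer PRIMARY KEY,
    name text
) DISTRIBUTE BY REPLICATION;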
Details :
# coordinators config:
boxes@vpc-xc-coord-01:5432::boxes_production=[Wed Jun 5 08:41:26 UTC 2013]> select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
------------+-----------+-----------+-----------+----------------+------------------+-------------
coord01 | C | 5432 | localhost | f | f | -951114102
datanode01 | D | 6543 | localhost | t | t | -561864558
coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700
datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207
coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819
datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519
coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041
datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720
(8 rows)
postgres@vpc-xc-coord-02:5432::boxes_production=[Wed Jun 5 08:42:24 UTC 2013]# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
------------+-----------+-----------+-----------+----------------+------------------+-------------
coord02 | C | 5432 | localhost | f | f | -1523582700
datanode02 | D | 6543 | localhost | f | t | 670480207
coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102
datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558
coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819
datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519
coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041
datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720
(8 rows)
postgres@vpc-xc-coord-03:5432::boxes_production=[Wed Jun 5 08:42:57 UTC 2013]# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
------------+-----------+-----------+-----------+----------------+------------------+-------------
coord03 | C | 5432 | localhost | f | f | 1641506819
datanode03 | D | 6543 | localhost | f | t | -1804036519
coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102
datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558
coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700
datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207
coord04 | C | 5432 | 10.0.1.14 | f | f | -1385444041
datanode04 | D | 6543 | 10.0.1.14 | f | f | 1005050720
postgres@vpc-xc-coord-04:5432::boxes_production=[Wed Jun 5 08:20:35 UTC 2013]# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
------------+-----------+-----------+-----------+----------------+------------------+-------------
coord04 | C | 5432 | localhost | f | f | -1385444041
datanode04 | D | 6543 | localhost | f | t | 1005050720
coord01 | C | 5432 | 10.0.1.11 | f | f | -951114102
datanode01 | D | 6543 | 10.0.1.11 | t | f | -561864558
coord02 | C | 5432 | 10.0.1.12 | f | f | -1523582700
datanode02 | D | 6543 | 10.0.1.12 | f | f | 670480207
coord03 | C | 5432 | 10.0.1.13 | f | f | 1641506819
datanode03 | D | 6543 | 10.0.1.13 | f | f | -1804036519
first node coordinator and datanode config:
====================================================================
postgres@vpc-xc-coord-01:[Wed Jun 05 08:40:00][/usr/local/pgsql]$ cat datanode.postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
port = 6543 # (change requires restart)
max_connections = 100 # (change requires restart)
shared_buffers = 320MB # min 128kB
max_prepared_transactions = 100 # zero disables the feature
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8' # locale for system error message
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
include '/usr/local/pgsql/gtm.include.conf'
include '/usr/local/pgsql/datanode_node_name.conf'
#pgxc_node_name = 'datanode04' # Coordinator or Datanode name
enforce_two_phase_commit = on # Enforce the usage of two-phase commit on transactions
enable_fast_query_shipping = on
enable_remotejoin = on
enable_remotegroup = on
postgres@vpc-xc-coord-01:[Wed Jun 05 08:43:29][/usr/local/pgsql]$ cat coordinator.postgresql.conf
listen_addresses = '*' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
shared_buffers = 120MB # min 128kB
max_prepared_transactions = 100 # zero disables the feature
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8' # locale for system error message
lc_monetary = 'en_US.UTF-8' # locale for monetary formatting
lc_numeric = 'en_US.UTF-8' # locale for number formatting
lc_time = 'en_US.UTF-8' # locale for time formatting
default_text_search_config = 'pg_catalog.english'
pooler_port = 6667 # Pool Manager TCP port
min_pool_size = 1 # Initial pool size
max_pool_size = 100 # Maximum pool size
max_coordinators = 16 # Maximum number of Coordinators
max_datanodes = 16 # Maximum number of Datanodes
include '/usr/local/pgsql/gtm.include.conf'
include '/usr/local/pgsql/coordinator_node_name.conf'
enforce_two_phase_commit = on # Enforce the usage of two-phase commit on transactions
enable_fast_query_shipping = on
enable_remotejoin = on
enable_remotegroup = on
postgres@vpc-xc-coord-01:[Wed Jun 05 08:43:38][/usr/local/pgsql]$ cat /usr/local/pgsql/gtm.include.conf
gtm_host = '10.0.1.16' # Host name or address of GTM Proxy, if not - direct link to GTM
gtm_port = 6543
#gtm_host = '127.0.0.1'
#gtm_port = 5434
boxes@vpc-xc-coord-01:5432::boxes_production=[Wed Jun 5 08:42:04 UTC 2013]> select count(*) from friends;
count
-------
1
(1 row)
Time: 4.698 ms
postgres@vpc-xc-coord-02:5432::boxes_production=[Wed Jun 5 08:42:25 UTC 2013]# select count(*) from friends;
count
-------
41416
(1 row)
postgres@vpc-xc-coord-03:5432::boxes_production=[Wed Jun 5 08:43:01 UTC 2013]# select count(*) from friends;
count
-------
41416
(1 row)
postgres@vpc-xc-coord-04:5432::boxes_production=[Wed Jun 5 08:42:53 UTC 2013]# select count(*) from friends;
count
-------
41416
(1 row)
# identical configs :
postgres@vpc-xc-coord-01:[Wed Jun 05 08:53:58][/usr/local/pgsql]$ md5sum coordinator.postgresql.conf
b7f61b5d8baeec83cd82d9f1ee744728 coordinator.postgresql.conf
postgres@vpc-xc-coord-02:[Wed Jun 05 08:53:53][~]$ md5sum coordinator.postgresql.conf
b7f61b5d8baeec83cd82d9f1ee744728 coordinator.postgresql.conf
postgres@vpc-xc-coord-01:[Wed Jun 05 08:54:58][/usr/local/pgsql]$ md5sum datanode.postgresql.conf
00d6a5736b6401dc6cc3d820fb412082 datanode.postgresql.conf
postgres@vpc-xc-coord-02:[Wed Jun 05 08:54:37][~]$ md5sum datanode.postgresql.conf
00d6a5736b6401dc6cc3d820fb412082 datanode.postgresql.conf
postgres@vpc-xc-coord-01:[Wed Jun 05 08:55:24][/usr/local/pgsql]$ md5sum /usr/local/pgsql/gtm.include.conf
a6e7c3a21958a23bfb5054dc645d9576 /usr/local/pgsql/gtm.include.conf
postgres@vpc-xc-coord-02:[Wed Jun 05 08:55:01][~]$ md5sum /usr/local/pgsql/gtm.include.conf
a6e7c3a21958a23bfb5054dc645d9576 /usr/local/pgsql/gtm.include.conf
====================================================================
In general I suspect a wrong configuration, but atm I cannot find any.
I did a test importing the same db at the second XC node, and it's the same issue: the first XC node gets everything replicated except the actual data.
My plan is to launch a new clone instance of vpc-xc-coord-02 as a replacement for vpc-xc-coord-01, but that is the desperate plan C.
I want to know what happens .. :)
Kind regards,
Iván
From: Ashutosh B. <ash...@en...> - 2013-06-05 04:15:26
Hi Matt,

Which version of XC are you using? There has been a lot of change in the
planner since the last release. You may try the latest master HEAD (to be
released as 1.2 in about a month).

It will help if you can provide all the table definitions and EXPLAIN outputs.

On Wed, Jun 5, 2013 at 5:40 AM, Matt Warner <MW...@xi...> wrote:
> I need to correct item 3, below. The coordinator and only one of the data
> nodes goes to work. One by one, each of the data nodes appears to spin up
> to process the request and then go back to sleep.

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
From: 鈴木 幸市 <ko...@in...> - 2013-06-05 01:29:52
Thanks a lot. It is very helpful.
---
Koichi Suzuki

On 2013/06/05, at 9:49, Mason Sharp <ma...@st...> wrote:

> For easier installation, we have made available RPMs for Postgres-XC,
> versions 1.0.1 through 1.0.3. They can be found here:
>
> http://yum.stormdb.com/repos/Postgres-XC/
>
> Enjoy!
>
> --
> Mason Sharp
>
> StormDB - http://www.stormdb.com
> The Database Cloud
> Postgres-XC Support and Services
From: Michael P. <mic...@gm...> - 2013-06-05 01:25:49
On Wed, Jun 5, 2013 at 9:49 AM, Mason Sharp <ma...@st...> wrote:
> For easier installation, we have made available RPMs for Postgres-XC,
> versions 1.0.1 through 1.0.3. They can be found here:
>
> http://yum.stormdb.com/repos/Postgres-XC/
>
> Enjoy!

+1. That's absolutely great!
--
Michael
From: Mason S. <ma...@st...> - 2013-06-05 01:18:31
For easier installation, we have made available RPMs for Postgres-XC,
versions 1.0.1 through 1.0.3. They can be found here:

http://yum.stormdb.com/repos/Postgres-XC/

Enjoy!

--
Mason Sharp

StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Services
From: Matt W. <MW...@XI...> - 2013-06-05 00:12:38
I've been experimenting with XC and see interesting results. I'm hoping
someone can help explain something I'm seeing.

1. I created two distributed tables, one with a primary key, one with a
   foreign key, and hashed both tables by that key. I'm expecting this to
   mean that the data for a given key is localized to a single node.
2. When I perform a simple "select count(*) from table1" I see all 8 data
   nodes consuming CPU (plus the coordinator), which I take to be a good
   sign: all nodes are working in parallel.
3. When I perform a join on the distribution key, I see only the coordinator
   go to work instead of all 8 data nodes.
4. I notice that the explain plan appears similar to page 55 of this document
   (http://www.pgcon.org/2012/schedule/attachments/224_Postgres-XC_tutorial.pdf).
5. I have indexes on the distribution keys, but that does not seem to make
   any difference.

How do I get XC to perform the join on the data nodes? To be verbose, I am
expecting to see more CPU resources consumed in this query:

select count(*) from tablea a1 where exists (select null from tableb a2
where a2.fk_accn_id=a1.pk_accn_id and a2.fk_sta_id=52);

Rewriting this as a simple join does not seem to work any better.

What am I missing?

TIA,

Matt
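As an aside for readers following along, whether the join is shipped to the datanodes can usually be seen from the plan itself; a minimal sketch using the query from the post above (table and column names as given there), with the caveat that the exact plan node labels depend on the XC version:

EXPLAIN VERBOSE
SELECT count(*)
FROM tablea a1
WHERE EXISTS (
    SELECT null
    FROM tableb a2
    WHERE a2.fk_accn_id = a1.pk_accn_id
      AND a2.fk_sta_id  = 52
);

-- If the whole statement is shipped, the plan is essentially a single
-- "Data Node Scan" whose remote SQL contains the join; if separate per-table
-- scans feed a join node above them, the join is running on the coordinator.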
From: Matt W. <MW...@XI...> - 2013-06-05 00:12:18
I need to correct item 3, below. The coordinator and only one of the data
nodes goes to work. One by one, each of the data nodes appears to spin up to
process the request and then go back to sleep.

From: Matt Warner
Sent: Tuesday, June 04, 2013 5:00 PM
To: 'Pos...@li...'
Subject: XC Performance with Subquery
From: Donald K. <don...@gm...> - 2013-05-23 21:05:28
OMG... I am such a dunce! I didn't realize I put in a subnet... sorry about
that. Thank you!

On Thu, May 23, 2013 at 4:55 PM, Andrei Martsinchyk <and...@gm...> wrote:
> Specify exact IP addresses of the coordinators, not subnets:
>
> host all all 100.100.10.170/32 trust
> host all all 100.100.10.171/32 trust
>
> this is secure enough.
From: Andrei M. <and...@gm...> - 2013-05-23 20:55:12
Specify exact IP addresses of the coordinators, not subnets:

host all all 100.100.10.170/32 trust
host all all 100.100.10.171/32 trust

This is secure enough.

You do not have to enable connections from GTM; it never connects to nodes,
it just accepts connections. You should just allow connections from other
coordinators. So, if you have two coordinators, 100.100.10.170 and
100.100.10.171, you should put

host all all 100.100.10.171/32 trust

in the pg_hba.conf of the coordinator running on 100.100.10.170, and

host all all 100.100.10.170/32 trust

in the pg_hba.conf of the coordinator running on 100.100.10.171. All
datanodes need both of these lines.

2013/5/23 Donald Keyass <don...@gm...>
> Right, I did that ... by putting it in for the coordinator servers.
>
> # IPv4 local connections:
> host all all 100.100.10.170/24 trust
> host all all 100.100.10.171/24 trust
> host all all 100.100.10.178/24 trust
> #GTM
> host all all 127.0.0.1/32 md5
> host all all 100.100.10.0/24 md5

--
Andrei Martsinchyk

StormDB - http://www.stormdb.com
The Database Cloud
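To make the advice above concrete, a minimal pg_hba.conf sketch for one of the datanodes, using the coordinator addresses from this thread; pg_hba.conf rules are matched top to bottom, so the narrow trust rules for the cluster nodes can sit above a broader md5 rule for ordinary clients:

# Cluster-internal access: both coordinators connect without a password.
host    all    all    100.100.10.170/32    trust
host    all    all    100.100.10.171/32    trust
# Everything else on the subnet authenticates with md5.
host    all    all    100.100.10.0/24      md5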
From: Donald K. <don...@gm...> - 2013-05-23 20:14:06
Right, I did that ... by putting it in for the coordinator servers.

# IPv4 local connections:
host all all 100.100.10.170/24 trust
host all all 100.100.10.171/24 trust
host all all 100.100.10.178/24 trust
#GTM
host all all 127.0.0.1/32 md5
host all all 100.100.10.0/24 md5

However, that means there's no password auth occurring at all then? B/c I
then tried to connect from another server (a 9.2 db instance which has the
client) and another server which had 9.0, I did:

psql -h 10.10.0.170 -d testdb -U testuser

No password prompt. And no, the pgpass file does not have an entry for these
coordinator servers.

So in this case that means I'm essentially making the PG-XC instance open to
users w/o password auth. When I changed to md5, it failed.

On Thu, May 23, 2013 at 4:05 PM, Andrei Martsinchyk <and...@gm...> wrote:
> Ah, sorry, I forgot to mention that coordinators should also be configured
> to accept connections from other coordinators without a password.
> The coordinator sends DDLs to datanodes and coordinators, while DMLs and
> SELECTs go only to datanodes.
> That's why you are getting the error when trying to execute a DDL.
>
> 2013/5/23 Donald Keyass <don...@gm...>
>> Alright, I should apologize. I can do DMLs, but I can not do DDLs. It
>> seems only the postgres sys user can do that. I created a new postgres
>> user w/ superuser privs:
>>
>> CREATE ROLE anotheruser WITH LOGIN PASSWORD 'password123' SUPERUSER
>> INHERIT CREATEDB CREATEROLE;
>>
>> Granting privs, etc...
>> Still can't do DDLs. I get the
>> "ERROR: Failed to get pooled connections".
>>
>> When I create the user ... I do it through the coordinator. Should I be
>> creating the user through the datanode on each datanode?
>>
>> On Thu, May 23, 2013 at 11:21 AM, Donald Keyass <don...@gm...> wrote:
>>> Actually, selecting from the tables works. So I can run queries. But I
>>> can't do any DMLs or DDLs in the database (i.e. create tables,
>>> insert/update/delete).
>>>
>>> On Thu, May 23, 2013 at 10:59 AM, Donald Keyass <don...@gm...> wrote:
>>>> Andrei,
>>>> Thanks! That was it. I set the hba for the datanodes the same as for
>>>> the coordinators. There was no documentation that I saw saying that
>>>> the datanodes' hba has to be passwordless (or in trust mode from all
>>>> coordinators for all users).
>>>> Again, thanks for the help!
>>>>
>>>> On Thu, May 23, 2013 at 3:36 AM, Andrei Martsinchyk <and...@gm...> wrote:
>>>>> Hi Donald,
>>>>>
>>>>> Please check your pg_hba.conf on the datanodes; they should allow
>>>>> passwordless access from the coordinators.
>>>>> When the coordinator builds up the connection string, it does not
>>>>> include the password.
>>>>>
>>>>> 2013/5/23 Koichi Suzuki <koi...@gm...>
>>>>>> The following is a typical set of additional postgresql.conf lines
>>>>>> for coordinators. They're generated by the pgxc_ctl utility and work
>>>>>> well. Could you compare this with yours? Especially, I'm interested
>>>>>> in how you configured pooler_port.
>>>>>>
>>>>>> Regards;
>>>>>>
>>>>>> ----8<----8<----
>>>>>> #===========================================
>>>>>> # Added at initialization. 20130523_11:33:33
>>>>>> port = 20004
>>>>>> pooler_port = 20010
>>>>>> gtm_host = 'node13'
>>>>>> gtm_port = 20001
>>>>>> # End of Addition
>>>>>> ---->8------->8------
>>>>>>
>>>>>> ----------
>>>>>> Koichi Suzuki
>>>>>>
>>>>>> 2013/5/22 Nikhil Sontakke <ni...@st...>
>>>>>>> Hi Donald,
>>>>>>>
>>>>>>> This looks like a user-permissions-related internal bug in PGXC.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Nikhils
>>>>>>>
>>>>>>> On Thu, May 23, 2013 at 3:13 AM, Donald Keyass <don...@gm...> wrote:
>>>>>>>> The testdb and user were the very last thing I created after I
>>>>>>>> created the node.
>>>>>>>>
>>>>>>>> On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki <koi...@gm...> wrote:
>>>>>>>>> Did you issue CREATE NODE before you created databases/roles? If
>>>>>>>>> you CREATE NODE after you create databases/users, they are not
>>>>>>>>> visible to the other nodes and may fall into errors. When you
>>>>>>>>> create each node by initdb, you also create the GTMs using
>>>>>>>>> initgtm, connect all the datanodes/coordinators to the GTM, and
>>>>>>>>> then issue CREATE NODE before you create anything else.
>>>>>>>>>
>>>>>>>>> You can visit
>>>>>>>>> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctl
>>>>>>>>> to look into the pgxc_ctl shell script, which takes care of
>>>>>>>>> configuration, failover, and starting and stopping the cluster.
>>>>>>>>> The init_all function will be helpful for you. This shows what
>>>>>>>>> you should do to initialize your cluster. The C version of
>>>>>>>>> pgxc_ctl will be included in the coming 1.1 release with new
>>>>>>>>> Postgres-XC features.
>>>>>>>>>
>>>>>>>>> Regards;
>>>>> http://p.sf.net/sfu/newrelic_d2d_may >>>>> _______________________________________________ >>>>> Postgres-xc-general mailing list >>>>> Pos...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> >>>>> >>>> >>> >> > > > -- > Andrei Martsinchyk > > StormDB - http://www.stormdb.com > The Database Cloud > > |
|
From: Andrei M. <and...@gm...> - 2013-05-23 20:06:08
|
Ah, sorry, I forgot to mention that coordinators should also be configured to accept connections from other coordinators without a password. Coordinator is sending DDLs to datanodes and coordinators, while DMLs and SELECTs go only to datanodes. That's why you are getting the error when trying to execute a DDL. 2013/5/23 Donald Keyass <don...@gm...> > Alright i should apologize. I can do DMLs, but I can not do ddls. it > seems only the postgres sys user can do that. I created a new postgres > user w/ superuser privs: > > CREATE ROLE anotheruser WITH LOGIN PASSWORD 'password123' SUPERUSER > INHERIT CREATEDB CREATEROLE; > > Granting privs, etc... > Still can't do ddl's. i get the > "ERROR: Failed to get pooled connections". > > when i create the user ... i do it through the coordinator. should I be > creating user through the datanode on each datanode? > > > On Thu, May 23, 2013 at 11:21 AM, Donald Keyass < > don...@gm...> wrote: > >> actually selecting for the tables works. So I can run queries. But i >> can't do any dmls or ddls in the database (ie create tables, >> insert/update/delete). >> >> >> On Thu, May 23, 2013 at 10:59 AM, Donald Keyass < >> don...@gm...> wrote: >> >>> Andrei, >>> Thanks! That was it. I set the hba for the datanodes as the same as >>> coordinators. There was no documentation that I saw about the datanodes >>> hba has to be passwordless (or in trust mode from all coordinators for all >>> users). >>> Again, thanks for the help! >>> >>> >>> On Thu, May 23, 2013 at 3:36 AM, Andrei Martsinchyk < >>> and...@gm...> wrote: >>> >>>> Hi Donald, >>>> >>>> Please checke your pg_hba.conf on the datanodes, they should allow >>>> passwordless access from coordinators. >>>> When coordinator builds up connection string it does not include the >>>> password. >>>> >>>> >>>> 2013/5/23 Koichi Suzuki <koi...@gm...> >>>> >>>>> The following is a typical additional postgresql.conf file lines for >>>>> coordinators. They're generated by pgxc_ctl utility and works well. >>>>> Could you compare this with your ones? Especially, I'm interested how you >>>>> configured pooler_port. >>>>> >>>>> Regards; >>>>> >>>>> ----8<----8<---- >>>>> #=========================================== >>>>> # Added at initialization. 20130523_11:33:33 >>>>> port = 20004 >>>>> pooler_port = 20010 >>>>> gtm_host = 'node13' >>>>> gtm_port = 20001 >>>>> # End of Additon >>>>> ---->8------->8------ >>>>> >>>>> >>>>> ---------- >>>>> Koichi Suzuki >>>>> >>>>> >>>>> 2013/5/22 Nikhil Sontakke <ni...@st...> >>>>> >>>>>> Hi Donald, >>>>>> >>>>>> This looks like a user permissions related internal bug in PGXC. >>>>>> >>>>>> Regards, >>>>>> Nikhils >>>>>> >>>>>> >>>>>> >>>>>> On Thu, May 23, 2013 at 3:13 AM, Donald Keyass < >>>>>> don...@gm...> wrote: >>>>>> >>>>>>> The testdb and user were the very last thing I created after I >>>>>>> created the node. >>>>>>> >>>>>>> >>>>>>> On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki < >>>>>>> koi...@gm...> wrote: >>>>>>> >>>>>>>> Did you issue CREATE NODE before you create databases/roles? If >>>>>>>> you CREATE NODE after you create databases/users, they are not visible to >>>>>>>> other node and may fall into errors. When you create each node by initdb, >>>>>>>> you also create gtm's using initgtm, connect all the datanodes/coordinators >>>>>>>> to the gtm and then issue CREATE NODE before you create anothing else. 
>>>>>>>> >>>>>>>> >>>>>>>> You can visit >>>>>>>> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctlto look into pgxc_ctl shell script, which takes care of configuration, >>>>>>>> failover, start and stop the cluster. init_all function will be helpful >>>>>>>> for you. This shows what you should do to initialize your cluster. The >>>>>>>> C-version of pgxc_ctl will be included in coming 1.1 release with new >>>>>>>> Postgres-XC feature. >>>>>>>> >>>>>>>> Regards; >>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> ------------------------------------------------------------------------------ >>>>>>> Try New Relic Now & We'll Send You this Cool Shirt >>>>>>> New Relic is the only SaaS-based application performance monitoring >>>>>>> service >>>>>>> that delivers powerful full stack analytics. Optimize and monitor >>>>>>> your >>>>>>> browser, app, & servers with just a few lines of code. Try New Relic >>>>>>> and get this awesome Nerd Life shirt! >>>>>>> http://p.sf.net/sfu/newrelic_d2d_may >>>>>>> _______________________________________________ >>>>>>> Postgres-xc-general mailing list >>>>>>> Pos...@li... >>>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> StormDB - http://www.stormdb.com >>>>>> The Database Cloud >>>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> Try New Relic Now & We'll Send You this Cool Shirt >>>>> New Relic is the only SaaS-based application performance monitoring >>>>> service >>>>> that delivers powerful full stack analytics. Optimize and monitor your >>>>> browser, app, & servers with just a few lines of code. Try New Relic >>>>> and get this awesome Nerd Life shirt! >>>>> http://p.sf.net/sfu/newrelic_d2d_may >>>>> _______________________________________________ >>>>> Postgres-xc-general mailing list >>>>> Pos...@li... >>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> >>>>> >>>> >>>> >>>> -- >>>> Andrei Martsinchyk >>>> >>>> >>>> StormDB - http://www.stormdb.com >>>> The Database Cloud >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Try New Relic Now & We'll Send You this Cool Shirt >>>> New Relic is the only SaaS-based application performance monitoring >>>> service >>>> that delivers powerful full stack analytics. Optimize and monitor your >>>> browser, app, & servers with just a few lines of code. Try New Relic >>>> and get this awesome Nerd Life shirt! >>>> http://p.sf.net/sfu/newrelic_d2d_may >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>>> >>> >> > -- Andrei Martsinchyk StormDB - http://www.stormdb.com The Database Cloud |
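A quick way to see the split Andrei describes (DDL is forwarded to the other coordinators as well, while DML and SELECT go only to datanodes): with only the datanodes' pg_hba.conf opened up, a SELECT through a coordinator succeeds while a DDL still fails with the pooled-connections error. A sketch of that check, reusing the hosts, ports, and role names from Donald's setup further down the thread (coord0 on 100.100.10.170:5432, role testuser, database testdb); the table name in the second command is invented for the test:

psql -h 100.100.10.170 -p 5432 -U testuser -d testdb -c 'SELECT count(*) FROM test_tab'   # reaches datanodes only
psql -h 100.100.10.170 -p 5432 -U testuser -d testdb -c 'CREATE TABLE ddl_probe(i int)'   # also forwarded to coord1

Once the coordinators also trust each other in pg_hba.conf and SELECT pgxc_pool_reload() has been run on each of them, the second command should go through as well.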
|
From: Donald K. <don...@gm...> - 2013-05-23 19:16:00
|
Alright i should apologize. I can do DMLs, but I can not do ddls. it seems only the postgres sys user can do that. I created a new postgres user w/ superuser privs: CREATE ROLE anotheruser WITH LOGIN PASSWORD 'password123' SUPERUSER INHERIT CREATEDB CREATEROLE; Granting privs, etc... Still can't do ddl's. i get the "ERROR: Failed to get pooled connections". when i create the user ... i do it through the coordinator. should I be creating user through the datanode on each datanode? On Thu, May 23, 2013 at 11:21 AM, Donald Keyass <don...@gm...>wrote: > actually selecting for the tables works. So I can run queries. But i > can't do any dmls or ddls in the database (ie create tables, > insert/update/delete). > > > On Thu, May 23, 2013 at 10:59 AM, Donald Keyass < > don...@gm...> wrote: > >> Andrei, >> Thanks! That was it. I set the hba for the datanodes as the same as >> coordinators. There was no documentation that I saw about the datanodes >> hba has to be passwordless (or in trust mode from all coordinators for all >> users). >> Again, thanks for the help! >> >> >> On Thu, May 23, 2013 at 3:36 AM, Andrei Martsinchyk < >> and...@gm...> wrote: >> >>> Hi Donald, >>> >>> Please checke your pg_hba.conf on the datanodes, they should allow >>> passwordless access from coordinators. >>> When coordinator builds up connection string it does not include the >>> password. >>> >>> >>> 2013/5/23 Koichi Suzuki <koi...@gm...> >>> >>>> The following is a typical additional postgresql.conf file lines for >>>> coordinators. They're generated by pgxc_ctl utility and works well. >>>> Could you compare this with your ones? Especially, I'm interested how you >>>> configured pooler_port. >>>> >>>> Regards; >>>> >>>> ----8<----8<---- >>>> #=========================================== >>>> # Added at initialization. 20130523_11:33:33 >>>> port = 20004 >>>> pooler_port = 20010 >>>> gtm_host = 'node13' >>>> gtm_port = 20001 >>>> # End of Additon >>>> ---->8------->8------ >>>> >>>> >>>> ---------- >>>> Koichi Suzuki >>>> >>>> >>>> 2013/5/22 Nikhil Sontakke <ni...@st...> >>>> >>>>> Hi Donald, >>>>> >>>>> This looks like a user permissions related internal bug in PGXC. >>>>> >>>>> Regards, >>>>> Nikhils >>>>> >>>>> >>>>> >>>>> On Thu, May 23, 2013 at 3:13 AM, Donald Keyass < >>>>> don...@gm...> wrote: >>>>> >>>>>> The testdb and user were the very last thing I created after I >>>>>> created the node. >>>>>> >>>>>> >>>>>> On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki < >>>>>> koi...@gm...> wrote: >>>>>> >>>>>>> Did you issue CREATE NODE before you create databases/roles? If >>>>>>> you CREATE NODE after you create databases/users, they are not visible to >>>>>>> other node and may fall into errors. When you create each node by initdb, >>>>>>> you also create gtm's using initgtm, connect all the datanodes/coordinators >>>>>>> to the gtm and then issue CREATE NODE before you create anothing else. >>>>>>> >>>>>>> >>>>>>> You can visit >>>>>>> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctlto look into pgxc_ctl shell script, which takes care of configuration, >>>>>>> failover, start and stop the cluster. init_all function will be helpful >>>>>>> for you. This shows what you should do to initialize your cluster. The >>>>>>> C-version of pgxc_ctl will be included in coming 1.1 release with new >>>>>>> Postgres-XC feature. 
>>>>>>> >>>>>>> Regards; >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> ------------------------------------------------------------------------------ >>>>>> Try New Relic Now & We'll Send You this Cool Shirt >>>>>> New Relic is the only SaaS-based application performance monitoring >>>>>> service >>>>>> that delivers powerful full stack analytics. Optimize and monitor your >>>>>> browser, app, & servers with just a few lines of code. Try New Relic >>>>>> and get this awesome Nerd Life shirt! >>>>>> http://p.sf.net/sfu/newrelic_d2d_may >>>>>> _______________________________________________ >>>>>> Postgres-xc-general mailing list >>>>>> Pos...@li... >>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> StormDB - http://www.stormdb.com >>>>> The Database Cloud >>>>> >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Try New Relic Now & We'll Send You this Cool Shirt >>>> New Relic is the only SaaS-based application performance monitoring >>>> service >>>> that delivers powerful full stack analytics. Optimize and monitor your >>>> browser, app, & servers with just a few lines of code. Try New Relic >>>> and get this awesome Nerd Life shirt! >>>> http://p.sf.net/sfu/newrelic_d2d_may >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>>> >>> >>> >>> -- >>> Andrei Martsinchyk >>> >>> >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Try New Relic Now & We'll Send You this Cool Shirt >>> New Relic is the only SaaS-based application performance monitoring >>> service >>> that delivers powerful full stack analytics. Optimize and monitor your >>> browser, app, & servers with just a few lines of code. Try New Relic >>> and get this awesome Nerd Life shirt! >>> http://p.sf.net/sfu/newrelic_d2d_may >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >> > |
|
From: Donald K. <don...@gm...> - 2013-05-23 15:21:32
|
actually selecting for the tables works. So I can run queries. But i can't do any dmls or ddls in the database (ie create tables, insert/update/delete). On Thu, May 23, 2013 at 10:59 AM, Donald Keyass <don...@gm...>wrote: > Andrei, > Thanks! That was it. I set the hba for the datanodes as the same as > coordinators. There was no documentation that I saw about the datanodes > hba has to be passwordless (or in trust mode from all coordinators for all > users). > Again, thanks for the help! > > > On Thu, May 23, 2013 at 3:36 AM, Andrei Martsinchyk < > and...@gm...> wrote: > >> Hi Donald, >> >> Please checke your pg_hba.conf on the datanodes, they should allow >> passwordless access from coordinators. >> When coordinator builds up connection string it does not include the >> password. >> >> >> 2013/5/23 Koichi Suzuki <koi...@gm...> >> >>> The following is a typical additional postgresql.conf file lines for >>> coordinators. They're generated by pgxc_ctl utility and works well. >>> Could you compare this with your ones? Especially, I'm interested how you >>> configured pooler_port. >>> >>> Regards; >>> >>> ----8<----8<---- >>> #=========================================== >>> # Added at initialization. 20130523_11:33:33 >>> port = 20004 >>> pooler_port = 20010 >>> gtm_host = 'node13' >>> gtm_port = 20001 >>> # End of Additon >>> ---->8------->8------ >>> >>> >>> ---------- >>> Koichi Suzuki >>> >>> >>> 2013/5/22 Nikhil Sontakke <ni...@st...> >>> >>>> Hi Donald, >>>> >>>> This looks like a user permissions related internal bug in PGXC. >>>> >>>> Regards, >>>> Nikhils >>>> >>>> >>>> >>>> On Thu, May 23, 2013 at 3:13 AM, Donald Keyass < >>>> don...@gm...> wrote: >>>> >>>>> The testdb and user were the very last thing I created after I created >>>>> the node. >>>>> >>>>> >>>>> On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki < >>>>> koi...@gm...> wrote: >>>>> >>>>>> Did you issue CREATE NODE before you create databases/roles? If you >>>>>> CREATE NODE after you create databases/users, they are not visible to other >>>>>> node and may fall into errors. When you create each node by initdb, you >>>>>> also create gtm's using initgtm, connect all the datanodes/coordinators to >>>>>> the gtm and then issue CREATE NODE before you create anothing else. >>>>>> >>>>>> >>>>>> You can visit >>>>>> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctlto look into pgxc_ctl shell script, which takes care of configuration, >>>>>> failover, start and stop the cluster. init_all function will be helpful >>>>>> for you. This shows what you should do to initialize your cluster. The >>>>>> C-version of pgxc_ctl will be included in coming 1.1 release with new >>>>>> Postgres-XC feature. >>>>>> >>>>>> Regards; >>>>>> >>>>> >>>>> >>>>> >>>>> ------------------------------------------------------------------------------ >>>>> Try New Relic Now & We'll Send You this Cool Shirt >>>>> New Relic is the only SaaS-based application performance monitoring >>>>> service >>>>> that delivers powerful full stack analytics. Optimize and monitor your >>>>> browser, app, & servers with just a few lines of code. Try New Relic >>>>> and get this awesome Nerd Life shirt! >>>>> http://p.sf.net/sfu/newrelic_d2d_may >>>>> _______________________________________________ >>>>> Postgres-xc-general mailing list >>>>> Pos...@li... 
>>>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> >>>>> >>>> >>>> >>>> -- >>>> StormDB - http://www.stormdb.com >>>> The Database Cloud >>>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Try New Relic Now & We'll Send You this Cool Shirt >>> New Relic is the only SaaS-based application performance monitoring >>> service >>> that delivers powerful full stack analytics. Optimize and monitor your >>> browser, app, & servers with just a few lines of code. Try New Relic >>> and get this awesome Nerd Life shirt! >>> http://p.sf.net/sfu/newrelic_d2d_may >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >> >> >> -- >> Andrei Martsinchyk >> >> >> StormDB - http://www.stormdb.com >> The Database Cloud >> >> >> >> ------------------------------------------------------------------------------ >> Try New Relic Now & We'll Send You this Cool Shirt >> New Relic is the only SaaS-based application performance monitoring >> service >> that delivers powerful full stack analytics. Optimize and monitor your >> browser, app, & servers with just a few lines of code. Try New Relic >> and get this awesome Nerd Life shirt! >> http://p.sf.net/sfu/newrelic_d2d_may >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > |
|
From: Donald K. <don...@gm...> - 2013-05-23 14:59:09
|
Andrei, Thanks! That was it. I set the hba for the datanodes as the same as coordinators. There was no documentation that I saw about the datanodes hba has to be passwordless (or in trust mode from all coordinators for all users). Again, thanks for the help! On Thu, May 23, 2013 at 3:36 AM, Andrei Martsinchyk < and...@gm...> wrote: > Hi Donald, > > Please checke your pg_hba.conf on the datanodes, they should allow > passwordless access from coordinators. > When coordinator builds up connection string it does not include the > password. > > > 2013/5/23 Koichi Suzuki <koi...@gm...> > >> The following is a typical additional postgresql.conf file lines for >> coordinators. They're generated by pgxc_ctl utility and works well. >> Could you compare this with your ones? Especially, I'm interested how you >> configured pooler_port. >> >> Regards; >> >> ----8<----8<---- >> #=========================================== >> # Added at initialization. 20130523_11:33:33 >> port = 20004 >> pooler_port = 20010 >> gtm_host = 'node13' >> gtm_port = 20001 >> # End of Additon >> ---->8------->8------ >> >> >> ---------- >> Koichi Suzuki >> >> >> 2013/5/22 Nikhil Sontakke <ni...@st...> >> >>> Hi Donald, >>> >>> This looks like a user permissions related internal bug in PGXC. >>> >>> Regards, >>> Nikhils >>> >>> >>> >>> On Thu, May 23, 2013 at 3:13 AM, Donald Keyass < >>> don...@gm...> wrote: >>> >>>> The testdb and user were the very last thing I created after I created >>>> the node. >>>> >>>> >>>> On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki < >>>> koi...@gm...> wrote: >>>> >>>>> Did you issue CREATE NODE before you create databases/roles? If you >>>>> CREATE NODE after you create databases/users, they are not visible to other >>>>> node and may fall into errors. When you create each node by initdb, you >>>>> also create gtm's using initgtm, connect all the datanodes/coordinators to >>>>> the gtm and then issue CREATE NODE before you create anothing else. >>>>> >>>>> >>>>> You can visit >>>>> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctlto look into pgxc_ctl shell script, which takes care of configuration, >>>>> failover, start and stop the cluster. init_all function will be helpful >>>>> for you. This shows what you should do to initialize your cluster. The >>>>> C-version of pgxc_ctl will be included in coming 1.1 release with new >>>>> Postgres-XC feature. >>>>> >>>>> Regards; >>>>> >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Try New Relic Now & We'll Send You this Cool Shirt >>>> New Relic is the only SaaS-based application performance monitoring >>>> service >>>> that delivers powerful full stack analytics. Optimize and monitor your >>>> browser, app, & servers with just a few lines of code. Try New Relic >>>> and get this awesome Nerd Life shirt! >>>> http://p.sf.net/sfu/newrelic_d2d_may >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>>> >>> >>> >>> -- >>> StormDB - http://www.stormdb.com >>> The Database Cloud >>> >> >> >> >> ------------------------------------------------------------------------------ >> Try New Relic Now & We'll Send You this Cool Shirt >> New Relic is the only SaaS-based application performance monitoring >> service >> that delivers powerful full stack analytics. 
Optimize and monitor your >> browser, app, & servers with just a few lines of code. Try New Relic >> and get this awesome Nerd Life shirt! >> http://p.sf.net/sfu/newrelic_d2d_may >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > Andrei Martsinchyk > > > StormDB - http://www.stormdb.com > The Database Cloud > > > > ------------------------------------------------------------------------------ > Try New Relic Now & We'll Send You this Cool Shirt > New Relic is the only SaaS-based application performance monitoring service > that delivers powerful full stack analytics. Optimize and monitor your > browser, app, & servers with just a few lines of code. Try New Relic > and get this awesome Nerd Life shirt! http://p.sf.net/sfu/newrelic_d2d_may > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
|
From: Donald K. <don...@gm...> - 2013-05-23 14:50:22
|
I left the pooler port as the default which I believe is 6667. But I don't see how that effects a postgresql generated user vs the postgres user which can do ddls/dmls/and queries against tables. If the pooler port would effect pg generated user, wouldn't the postgres user as well be effected by this? On Wed, May 22, 2013 at 10:36 PM, Koichi Suzuki <koi...@gm...>wrote: > The following is a typical additional postgresql.conf file lines for > coordinators. They're generated by pgxc_ctl utility and works well. > Could you compare this with your ones? Especially, I'm interested how you > configured pooler_port. > > Regards; > > ----8<----8<---- > #=========================================== > # Added at initialization. 20130523_11:33:33 > port = 20004 > pooler_port = 20010 > gtm_host = 'node13' > gtm_port = 20001 > # End of Additon > ---->8------->8------ > > > ---------- > Koichi Suzuki > > > 2013/5/22 Nikhil Sontakke <ni...@st...> > >> Hi Donald, >> >> This looks like a user permissions related internal bug in PGXC. >> >> Regards, >> Nikhils >> >> >> >> On Thu, May 23, 2013 at 3:13 AM, Donald Keyass < >> don...@gm...> wrote: >> >>> The testdb and user were the very last thing I created after I created >>> the node. >>> >>> >>> On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki < >>> koi...@gm...> wrote: >>> >>>> Did you issue CREATE NODE before you create databases/roles? If you >>>> CREATE NODE after you create databases/users, they are not visible to other >>>> node and may fall into errors. When you create each node by initdb, you >>>> also create gtm's using initgtm, connect all the datanodes/coordinators to >>>> the gtm and then issue CREATE NODE before you create anothing else. >>>> >>>> >>>> You can visit >>>> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctlto look into pgxc_ctl shell script, which takes care of configuration, >>>> failover, start and stop the cluster. init_all function will be helpful >>>> for you. This shows what you should do to initialize your cluster. The >>>> C-version of pgxc_ctl will be included in coming 1.1 release with new >>>> Postgres-XC feature. >>>> >>>> Regards; >>>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Try New Relic Now & We'll Send You this Cool Shirt >>> New Relic is the only SaaS-based application performance monitoring >>> service >>> that delivers powerful full stack analytics. Optimize and monitor your >>> browser, app, & servers with just a few lines of code. Try New Relic >>> and get this awesome Nerd Life shirt! >>> http://p.sf.net/sfu/newrelic_d2d_may >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> > > |
|
From: Andrei M. <and...@gm...> - 2013-05-23 07:36:58
|
Hi Donald, Please checke your pg_hba.conf on the datanodes, they should allow passwordless access from coordinators. When coordinator builds up connection string it does not include the password. 2013/5/23 Koichi Suzuki <koi...@gm...> > The following is a typical additional postgresql.conf file lines for > coordinators. They're generated by pgxc_ctl utility and works well. > Could you compare this with your ones? Especially, I'm interested how you > configured pooler_port. > > Regards; > > ----8<----8<---- > #=========================================== > # Added at initialization. 20130523_11:33:33 > port = 20004 > pooler_port = 20010 > gtm_host = 'node13' > gtm_port = 20001 > # End of Additon > ---->8------->8------ > > > ---------- > Koichi Suzuki > > > 2013/5/22 Nikhil Sontakke <ni...@st...> > >> Hi Donald, >> >> This looks like a user permissions related internal bug in PGXC. >> >> Regards, >> Nikhils >> >> >> >> On Thu, May 23, 2013 at 3:13 AM, Donald Keyass < >> don...@gm...> wrote: >> >>> The testdb and user were the very last thing I created after I created >>> the node. >>> >>> >>> On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki < >>> koi...@gm...> wrote: >>> >>>> Did you issue CREATE NODE before you create databases/roles? If you >>>> CREATE NODE after you create databases/users, they are not visible to other >>>> node and may fall into errors. When you create each node by initdb, you >>>> also create gtm's using initgtm, connect all the datanodes/coordinators to >>>> the gtm and then issue CREATE NODE before you create anothing else. >>>> >>>> >>>> You can visit >>>> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctlto look into pgxc_ctl shell script, which takes care of configuration, >>>> failover, start and stop the cluster. init_all function will be helpful >>>> for you. This shows what you should do to initialize your cluster. The >>>> C-version of pgxc_ctl will be included in coming 1.1 release with new >>>> Postgres-XC feature. >>>> >>>> Regards; >>>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> Try New Relic Now & We'll Send You this Cool Shirt >>> New Relic is the only SaaS-based application performance monitoring >>> service >>> that delivers powerful full stack analytics. Optimize and monitor your >>> browser, app, & servers with just a few lines of code. Try New Relic >>> and get this awesome Nerd Life shirt! >>> http://p.sf.net/sfu/newrelic_d2d_may >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >> >> >> -- >> StormDB - http://www.stormdb.com >> The Database Cloud >> > > > > ------------------------------------------------------------------------------ > Try New Relic Now & We'll Send You this Cool Shirt > New Relic is the only SaaS-based application performance monitoring service > that delivers powerful full stack analytics. Optimize and monitor your > browser, app, & servers with just a few lines of code. Try New Relic > and get this awesome Nerd Life shirt! http://p.sf.net/sfu/newrelic_d2d_may > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... 
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Andrei Martsinchyk StormDB - http://www.stormdb.com The Database Cloud |
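Because the coordinator's pooler connects without a password, the simplest hand check is to attempt the same connection yourself from the coordinator host. A sketch, assuming the addresses and ports from Donald's post further down (datanode1 at 100.100.10.171, port 5433) and the testuser/testdb objects he created; run it on the coordinator box with no matching .pgpass entry:

psql -h 100.100.10.171 -p 5433 -U testuser -d testdb -c 'SELECT 1'

If this prompts for a password or is rejected with a password authentication error, the pooled connections will fail the same way, and the datanode's pg_hba.conf needs a trust (or other passwordless) entry covering the coordinator hosts.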
|
From: Koichi S. <koi...@gm...> - 2013-05-23 02:36:55
|
The following is a typical additional postgresql.conf file lines for coordinators. They're generated by pgxc_ctl utility and works well. Could you compare this with your ones? Especially, I'm interested how you configured pooler_port. Regards; ----8<----8<---- #=========================================== # Added at initialization. 20130523_11:33:33 port = 20004 pooler_port = 20010 gtm_host = 'node13' gtm_port = 20001 # End of Additon ---->8------->8------ ---------- Koichi Suzuki 2013/5/22 Nikhil Sontakke <ni...@st...> > Hi Donald, > > This looks like a user permissions related internal bug in PGXC. > > Regards, > Nikhils > > > > On Thu, May 23, 2013 at 3:13 AM, Donald Keyass <don...@gm... > > wrote: > >> The testdb and user were the very last thing I created after I created >> the node. >> >> >> On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki <koi...@gm... >> > wrote: >> >>> Did you issue CREATE NODE before you create databases/roles? If you >>> CREATE NODE after you create databases/users, they are not visible to other >>> node and may fall into errors. When you create each node by initdb, you >>> also create gtm's using initgtm, connect all the datanodes/coordinators to >>> the gtm and then issue CREATE NODE before you create anothing else. >>> >>> >>> You can visit >>> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctlto look into pgxc_ctl shell script, which takes care of configuration, >>> failover, start and stop the cluster. init_all function will be helpful >>> for you. This shows what you should do to initialize your cluster. The >>> C-version of pgxc_ctl will be included in coming 1.1 release with new >>> Postgres-XC feature. >>> >>> Regards; >>> >> >> >> >> ------------------------------------------------------------------------------ >> Try New Relic Now & We'll Send You this Cool Shirt >> New Relic is the only SaaS-based application performance monitoring >> service >> that delivers powerful full stack analytics. Optimize and monitor your >> browser, app, & servers with just a few lines of code. Try New Relic >> and get this awesome Nerd Life shirt! >> http://p.sf.net/sfu/newrelic_d2d_may >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > |
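If it is unclear what a running coordinator actually picked up, these settings are ordinary configuration parameters and can be read back from the server itself. A small sketch; the port 20004 is just the one from Koichi's example fragment, so substitute your own:

psql -p 20004 -c "SELECT name, setting FROM pg_settings WHERE name IN ('port', 'pooler_port', 'gtm_host', 'gtm_port')"

When a coordinator and a datanode share a machine, each process needs its own port, and the coordinator's pooler_port must be distinct from every other port in use on that machine, or the processes will collide at startup.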
|
From: Nikhil S. <ni...@st...> - 2013-05-22 22:51:23
|
Hi Donald, This looks like a user permissions related internal bug in PGXC. Regards, Nikhils On Thu, May 23, 2013 at 3:13 AM, Donald Keyass <don...@gm...>wrote: > The testdb and user were the very last thing I created after I created the > node. > > > On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki <koi...@gm...>wrote: > >> Did you issue CREATE NODE before you create databases/roles? If you >> CREATE NODE after you create databases/users, they are not visible to other >> node and may fall into errors. When you create each node by initdb, you >> also create gtm's using initgtm, connect all the datanodes/coordinators to >> the gtm and then issue CREATE NODE before you create anothing else. >> >> >> You can visit >> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctlto look into pgxc_ctl shell script, which takes care of configuration, >> failover, start and stop the cluster. init_all function will be helpful >> for you. This shows what you should do to initialize your cluster. The >> C-version of pgxc_ctl will be included in coming 1.1 release with new >> Postgres-XC feature. >> >> Regards; >> > > > > ------------------------------------------------------------------------------ > Try New Relic Now & We'll Send You this Cool Shirt > New Relic is the only SaaS-based application performance monitoring service > that delivers powerful full stack analytics. Optimize and monitor your > browser, app, & servers with just a few lines of code. Try New Relic > and get this awesome Nerd Life shirt! http://p.sf.net/sfu/newrelic_d2d_may > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- StormDB - http://www.stormdb.com The Database Cloud |
|
From: Donald K. <don...@gm...> - 2013-05-22 21:43:20
|
The testdb and user were the very last thing I created after I created the node. On Wed, May 22, 2013 at 4:54 PM, Koichi Suzuki <koi...@gm...>wrote: > Did you issue CREATE NODE before you create databases/roles? If you > CREATE NODE after you create databases/users, they are not visible to other > node and may fall into errors. When you create each node by initdb, you > also create gtm's using initgtm, connect all the datanodes/coordinators to > the gtm and then issue CREATE NODE before you create anothing else. > > > You can visit > https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctl to > look into pgxc_ctl shell script, which takes care of configuration, > failover, start and stop the cluster. init_all function will be helpful > for you. This shows what you should do to initialize your cluster. The > C-version of pgxc_ctl will be included in coming 1.1 release with new > Postgres-XC feature. > > Regards; > |
|
From: Koichi S. <koi...@gm...> - 2013-05-22 20:54:51
|
Did you issue CREATE NODE before you created databases/roles? If you run CREATE NODE after creating databases/users, they are not visible to the other nodes and commands may fail with errors. When you create each node with initdb, also create the GTMs using initgtm, connect all the datanodes/coordinators to the GTM, and then issue CREATE NODE before you create anything else. You can visit https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/pgxc_ctl to look into the pgxc_ctl shell script, which takes care of configuration, failover, and starting and stopping the cluster. The init_all function will be helpful for you; it shows what you should do to initialize your cluster. The C version of pgxc_ctl will be included in the coming 1.1 release with new Postgres-XC features. Regards; |
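To make the ordering concrete, here is a sketch of the statements in the order Koichi describes, as they would be run on coord0 once the binaries are initialized and connected to the GTM (coord1 gets the mirror-image set that registers coord0 instead). The node names, hosts, and ports are the ones from Donald's setup further down the thread and are illustrative only:

CREATE NODE datanode0 WITH (TYPE = 'datanode', HOST = '100.100.10.170', PORT = 5433);
CREATE NODE datanode1 WITH (TYPE = 'datanode', HOST = '100.100.10.171', PORT = 5433);
CREATE NODE coord1 WITH (TYPE = 'coordinator', HOST = '100.100.10.171', PORT = 5432);
SELECT pgxc_pool_reload();
-- only after the nodes are registered on every coordinator:
CREATE ROLE testuser WITH LOGIN PASSWORD 'testpass';
CREATE DATABASE testdb OWNER testuser;

A role or database created before the CREATE NODE step exists only on the node where it was created, which is the failure mode Koichi is warning about.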
|
From: Donald K. <don...@gm...> - 2013-05-22 19:38:28
|
I'm running PG-XC 1.0.3 on Debian Linux (squeeze).
I have 3 boxes: 1 for the GTM and 2 for datanodes and coordinators.
I have all three up and running: 1 GTM, 2 coordinators, and 2 datanodes.
I log in as the system user postgres. I can create databases, create users,
select, etc.
So I created a database called testdb. In testdb, I created a table
called test_tab.
I can query it as the postgres user through both coordinators.
When I create a PostgreSQL user:
CREATE ROLE testuser WITH LOGIN PASSWORD 'testpass';
GRANT ALL PRIVILEGES ON DATABASE testdb to testuser;
GRANT CONNECT ON DATABASE testdb TO testuser;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO testuser;
I can see the table, but when I select from it, I get:
ERROR: Failed to get pooled connections
I did this on each of the nodes:
/usr/local/pgsql/bin/psql -c "CREATE NODE datanode0 WITH (TYPE =
'datanode', PORT = 5433, HOST = '100.100.10.170')" postgres
/usr/local/pgsql/bin/psql -c "CREATE NODE datanode1 WITH (TYPE =
'datanode', PORT = 5433, HOST = '100.100.10.171')" postgres
/usr/local/pgsql/bin/psql -c "CREATE NODE coord0 WITH (TYPE =
'coordinator', PORT = 5432, HOST = '100.100.10.170')" postgres
/usr/local/pgsql/bin/psql -c "CREATE NODE coord1 WITH (TYPE =
'coordinator', PORT = 5432, HOST = '100.100.10.171')" postgres
and
/usr/local/pgsql/bin/psql -c "SELECT pgxc_pool_reload()" postgres
where i got result of true.
I even did:
postgres=# select pgxc_pool_check();
pgxc_pool_check
-----------------
t
(1 row)
and both are ok.
When I run lsof -i -n -P, I see:
postgres 32131 postgres 3u IPv4 7245254 0t0 TCP *:5433
(LISTEN)
postgres 32131 postgres 4u IPv6 7245255 0t0 TCP *:5433
(LISTEN)
postgres 32141 postgres 3u IPv4 7245283 0t0 TCP *:5432
(LISTEN)
postgres 32141 postgres 4u IPv6 7245284 0t0 TCP *:5432
(LISTEN)
on both servers (data/coord).
So I know they're listening. And second, I was able to create a database and
a table, and insert into and select from it as the system user postgres.
But when I create a PostgreSQL user, it doesn't work.
My Coord and DataNode pg_hba.conf is:
# "local" is for Unix domain socket connections only
local all postgres trust
# IPv4 local connections:
host all postgres 100.100.10.170/24 trust
host all postgres 100.100.10.171/24 trust
host all postgres 100.100.10.178/24 trust
#GTM
host all all 127.0.0.1/32 md5
host all all 100.100.10.0/24 md5
# IPv6 local connections:
host all all ::1/128 md5
and in my pgxc_node:
 node_name | node_type | node_port | node_host      | nodeis_primary | nodeis_preferred | node_id
-----------+-----------+-----------+----------------+----------------+------------------+-------------
 datanode0 | D         | 5433      | 100.100.10.170 | f              | f                | -96290283
 datanode1 | D         | 5433      | 100.100.10.171 | f              | f                | 888802358
 coord1    | C         | 5432      | 100.100.10.171 | f              | f                | 1885696643
 coord0    | C         | 5432      | 100.100.10.170 | f              | f                | -1944967855
for both servers.
I have them on both coordinator and datanode databases.
I'm at a loss as to why a system user with admin privileges can query, but a
Postgres user with privileges to query gets a:
ERROR: Failed to get pooled connections
I'm figuring it's a config issue somewhere, but I'm not sure where.
Can someone point me in the right direction?
Thank you in advance,
Don.
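For readers landing on this message first: the fix that emerges in the replies above is to let the cluster's own hosts in without a password on both the datanodes and the coordinators, for all roles, while keeping md5 for ordinary clients. A sketch of the relevant pg_hba.conf lines, reusing the addresses from this message (illustrative only; /32 entries are used here so that only the two cluster hosts are trusted rather than the whole subnet):

# intra-cluster traffic from the coordinator/datanode hosts
host    all    all    100.100.10.170/32    trust
host    all    all    100.100.10.171/32    trust
# everything else on the subnet authenticates with md5
host    all    all    100.100.10.0/24      md5

After editing, reload the configuration on every node and run SELECT pgxc_pool_reload() on each coordinator.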
|
|
From: 鈴木 幸市 <ko...@in...> - 2013-05-13 09:51:02
|
I think all the core members and active developers know about this so this will be maintained in the future too. Committers should be careful about it though. Regards; --- Koichi Suzuki On 2013/05/13, at 18:28, Tomonari Katsumata <kat...@po...> wrote: > Hi, Suzuki-san > > According to my test, 1.0.2 works fine, so > I don't have any problem now. > And 1.1dev works fine too. > > I hope nobody will change this feature for future relese. > > thanks, > ----------------- > NTT Software Corporation > Tomonari Katsumata > > >> Katsumata-san; >> >> Unfortunately it was not included in 1.0.2 to save development > resources. I'd like to know how serious it is for 1.0, not for 1.1. > Depending upon severity, I may be able to re-prioritize my resource. >> >> Regards; >> --- >> Koichi Suzuki >> >> >> >> On 2013/05/13, at 14:39, Tomonari Katsumata > <kat...@po...> wrote: >> >>> Hi, >>> >>> I'm thinking about how to recover the system after GTM crash. >>> >>> After GTM stopped, I just start it again. >>> It works fine with Postgres-XC 1.0.2, and I could do "CREATE TABLE", >>> "INSERT" some data and "SELECT" them. >>> >>> But with Postgres-XC 1.0.1, It doesn't work same. >>> I have to make GTM_proxy reconnect to GTM manually(gtm_ctl reconnect). >>> In some cases, I have to restart GTM_proxy. >>> >>> I want to know 2 things about this. >>> 1. I think this new feature(*) is included in Postgres-XC 1.0.2, right? >>> (*) reconnect doesn't be required with restarting GTM. >>> 2. May I think this feature will never change for future releases? >>> (I've confirmed the feature is alive in Postgres-XC 1.1dev !) >>> >>> ============= >>> Environment >>> ============= >>> 2 servers are there. >>> components are destributed like bellow. >>> >>> server1 - GTM, GTM_proxy1, Coordinator1, Datanode1 >>> server2 - GTM_proxy2, Coordinator2, Datanode2 >>> >>> ========== >>> reproduce >>> ========== >>> 1. stop GTM manually (kill -9 <PID of GTM>) >>> 2. start GTM again manually. >>> >>> >>> regards, >>> ----------------- >>> NTT Software Corporation >>> Tomonari Katsumata >>> >>> >>> >>> > ------------------------------------------------------------------------------ >>> Learn Graph Databases - Download FREE O'Reilly Book >>> "Graph Databases" is the definitive new guide to graph databases and >>> their applications. This 200-page book is written by three acclaimed >>> leaders in the field. The early access version is available now. >>> Download your free book today! http://p.sf.net/sfu/neotech_d2d_may >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >> >> > > > > > > ------------------------------------------------------------------------------ > Learn Graph Databases - Download FREE O'Reilly Book > "Graph Databases" is the definitive new guide to graph databases and > their applications. This 200-page book is written by three acclaimed > leaders in the field. The early access version is available now. > Download your free book today! http://p.sf.net/sfu/neotech_d2d_may > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
|
From: Tomonari K. <kat...@po...> - 2013-05-13 09:32:23
|
Hi, Suzuki-san According to my test, 1.0.2 works fine, so I don't have any problem now. And 1.1dev works fine too. I hope nobody will change this feature for future relese. thanks, ----------------- NTT Software Corporation Tomonari Katsumata > Katsumata-san; > > Unfortunately it was not included in 1.0.2 to save development resources. I'd like to know how serious it is for 1.0, not for 1.1. Depending upon severity, I may be able to re-prioritize my resource. > > Regards; > --- > Koichi Suzuki > > > > On 2013/05/13, at 14:39, Tomonari Katsumata <kat...@po...> wrote: > >> Hi, >> >> I'm thinking about how to recover the system after GTM crash. >> >> After GTM stopped, I just start it again. >> It works fine with Postgres-XC 1.0.2, and I could do "CREATE TABLE", >> "INSERT" some data and "SELECT" them. >> >> But with Postgres-XC 1.0.1, It doesn't work same. >> I have to make GTM_proxy reconnect to GTM manually(gtm_ctl reconnect). >> In some cases, I have to restart GTM_proxy. >> >> I want to know 2 things about this. >> 1. I think this new feature(*) is included in Postgres-XC 1.0.2, right? >> (*) reconnect doesn't be required with restarting GTM. >> 2. May I think this feature will never change for future releases? >> (I've confirmed the feature is alive in Postgres-XC 1.1dev !) >> >> ============= >> Environment >> ============= >> 2 servers are there. >> components are destributed like bellow. >> >> server1 - GTM, GTM_proxy1, Coordinator1, Datanode1 >> server2 - GTM_proxy2, Coordinator2, Datanode2 >> >> ========== >> reproduce >> ========== >> 1. stop GTM manually (kill -9 <PID of GTM>) >> 2. start GTM again manually. >> >> >> regards, >> ----------------- >> NTT Software Corporation >> Tomonari Katsumata >> >> >> >> ------------------------------------------------------------------------------ >> Learn Graph Databases - Download FREE O'Reilly Book >> "Graph Databases" is the definitive new guide to graph databases and >> their applications. This 200-page book is written by three acclaimed >> leaders in the field. The early access version is available now. >> Download your free book today! http://p.sf.net/sfu/neotech_d2d_may >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> > > |
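For anyone still on 1.0.1, the manual recovery Katsumata-san describes looks roughly like the following. The gtm_ctl reconnect subcommand is named in this thread, but the exact option spellings (-Z, -D, -o, and the proxy's -s/-t options for the GTM address) are assumptions from memory of the gtm_ctl reference and should be checked against the documentation for your build; the data-directory paths are placeholders:

# on the GTM host: start the crashed GTM again
gtm_ctl start -Z gtm -D /path/to/gtm_data
# on each GTM proxy host: point the proxy back at the restarted GTM
gtm_ctl reconnect -Z gtm_proxy -D /path/to/gtm_proxy_data -o "-s <gtm_host> -t <gtm_port>"

On 1.0.2 and 1.1dev, per the tests reported in this thread, the proxies reattach to the restarted GTM without the reconnect step.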