From: Juned K. <jkh...@gm...> - 2014-04-18 10:38:11
|
Yeah, sure, I would like to test it first. You mean this presentation: http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013 ?

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 09:00:38
|
No: ALTER TABLE tablename DELETE NODE (nodename);

This is very similar to what you do to add the node:

ALTER TABLE tablename ADD NODE (nodename);

It is included in my demo story from the last PG Open presentation. If you're not sure, you can test it first on a smaller work table.

Regards;
---
Koichi Suzuki |
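Put together, the removal sequence described in these messages might look like the following sketch (table and node names are hypothetical; syntax as discussed above):

```sql
-- 1. Shrink the table's node set first. Rows on the leaving node are
--    redistributed (distributed tables) or simply detached (replicated
--    tables).
ALTER TABLE accounts DELETE NODE (datanode1);

-- 2. Only after every table has left the node, drop its definition.
DROP NODE datanode1;

-- Later, to bring a table onto a newly added node:
ALTER TABLE accounts ADD NODE (datanode3);
```

Note that DROP NODE only removes the cluster's knowledge of the node; actually stopping and clearing the server is a separate step, as Koichi notes.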
From: Juned K. <jkh...@gm...> - 2014-04-18 07:55:05
|
You mean first I have to modify my table structure? Say I have replicated tables, so I have to remove "DISTRIBUTE BY REPLICATION", right?

And then I have to run the DROP NODE command to remove the datanode slave?

pgxc_ctl won't do it for me? i.e.

PGXC remove datanode slave datanode1
PGXC add datanode slave datanode3 node5 20008 /home/postgres/pgxc/nodes/dn_slave

And how will it copy the database after adding the datanode slave? |
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 07:29:25
|
Reverse order.

First, exclude the node being removed from the node set over which your tables are distributed or replicated. You can do this with ALTER TABLE. For distributed tables, all the rows in the table will be redistributed to the modified set of nodes; for replicated tables, the copies on the node being removed will simply be detached.

Then you can issue DROP NODE, before you actually stop and clear the node's resources.
---
Koichi Suzuki |
From: Juned K. <jkh...@gm...> - 2014-04-18 07:14:03
|
What if I just remove the datanode slave and add it again? Will all data be copied to the slave? Will it impact the master?

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 00:59:43
|
The impact on the master server is almost the same as building a slave for vanilla PG using pg_basebackup. It sends all the master's database files to the slave, together with its WAL. The good thing compared with the more primitive pg_start_backup()/pg_stop_backup() approach is that the data are read directly from the cache, so the impact on the I/O workload will be smaller.

If you're concerned about the safety of the master's resources, there could be another approach: stop the master, copy everything to the slave, then configure the master to enable WAL shipping and the slave to run as a slave. In principle this should work and keeps the master's resources safer, but I haven't tested it yet.

Regards;
---
Koichi Suzuki |
From: Juned K. <jkh...@gm...> - 2014-04-17 11:24:06
|
Hi Koichi,

Is there any other short solution to fix this issue?

1. Run checkpoint and vacuum full at the master.
I found the docs for running VACUUM FULL, but I'm confused about how to run a checkpoint manually.

2. Build the slave from scratch using pg_basebackup (pgxc_ctl provides this means).
Should I run this command from the datanode slave server, something like: pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432

And, very importantly, how will it impact the datanode master server? As of now I have only this master running on the server, so I just don't want to take a chance, and somehow want to start a slave for backup.

Please advise.

Regards
Juned Khan |
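On the checkpoint question: both operations are plain SQL statements inherited from vanilla PostgreSQL, so a superuser session on the master should suffice; a minimal sketch (the table name is hypothetical):

```sql
-- Force an immediate checkpoint (requires superuser).
CHECKPOINT;

-- Reclaim dead-tuple space. VACUUM FULL rewrites the table and holds an
-- ACCESS EXCLUSIVE lock, so run it in a quiet window.
VACUUM FULL accounts;
```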
From: Juned K. <jkh...@gm...> - 2014-04-17 11:13:29
|
So I have to experiment with this; I really need that many connections.

Thanks for the suggestion, @Michael.

On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <mic...@gm...> wrote:
> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
> > And I can use pgpool and pgbouncer with pgxc, right?
> In front of the Coordinators, that's fine. But I am not sure in front
> of the Datanodes, as XC has one extra connection parameter to identify
> the node type a connection is from, and a couple of additional message
> types to pass down transaction ID, timestamp and snapshot data from
> Coordinator to Datanodes (actually Coordinators as well, for DDL
> queries). If those message types and/or connection parameters get
> filtered by pgpool or pgbouncer, you cannot use them. I've personally
> never given it a try, but the idea is worth an attempt to reduce the
> lock contention that could be caused by too high a value of
> max_connections.
> --
> Michael

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
From: Juned K. <jkh...@gm...> - 2014-04-17 09:30:39
|
Ohh, I see. I think I read the same earlier, but I can still see data in the newly created node:

myDB=# select count(*) from accounts;
 count
-------
    79
(1 row)

And all datanodes have the same records, so I wonder where the data comes from. Is it synced automatically?

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
From: Ashutosh B. <ash...@en...> - 2014-04-17 09:16:41
|
On Thu, Apr 17, 2014 at 1:19 PM, Juned Khan <jkh...@gm...> wrote:
> Hi All,
>
> I want to add a few more coordinators and datanodes in pgxc. During the adding
> process the whole database is being copied to the new component.
>
That shouldn't happen. It should copy only the object definitions and catalog information, not the data. How are you copying the database?

> I have a very large database, around 8GB; in this database only one table
> holds the large number of records.
>
> Is there any way to exclude such a table while adding a new coordinator or
> datanode using the pgxc_ctl command ?
>
> I want to exclude some tables just to make adding a coordinator and datanode
> smoother and faster.
>
> Please suggest.
>
> --
> Thanks,
> Juned Khan
> <http://www.inextrix.com/>
>

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company |
From: 鈴木 幸市 <ko...@in...> - 2014-04-17 08:23:55
|
While adding a coordinator or a datanode, only catalogs are copied. You should issue ALTER TABLE on each table to redistribute its rows. If you don't want the new datanode to be involved in some tables, you can skip ALTER TABLE for them; their rows will stay distributed across the old set of datanodes.

If you do it manually, it's a long process and there could be many pitfalls. Hope pgxc_ctl helps.

Regards;
---
Koichi Suzuki

2014/04/17 16:49, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote:

Hi All,

I want to add a few more coordinators and datanodes in pgxc. During the adding process the whole database is being copied to the new component.

I have a very large database, around 8GB; in this database only one table holds the large number of records.

Is there any way to exclude such a table while adding a new coordinator or datanode using the pgxc_ctl command ?

I want to exclude some tables just to make adding a coordinator and datanode smoother and faster.

Please suggest.

--
Thanks,
Juned Khan
<http://www.inextrix.com/> |
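The redistribution step Koichi describes can be sketched as follows; the database, table, and node names (`mydb`, `accounts`, `datanode3`) are hypothetical placeholders, and the ALTER TABLE ... ADD/DELETE NODE syntax is the one shown elsewhere in this thread:

```shell
# Hypothetical names: database mydb, table accounts, new node datanode3.
# After pgxc_ctl has added the datanode (catalogs only), pull the table's
# rows onto the enlarged node set; the server redistributes the rows.
psql -d mydb -c "ALTER TABLE accounts ADD NODE (datanode3);"

# Tables you skip keep their rows on the old set of datanodes.
# The reverse operation, to run before dropping a node:
psql -d mydb -c "ALTER TABLE accounts DELETE NODE (datanode3);"
```

Run per table; on a large table this is the long, pitfall-prone step the reply warns about.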
From: Juned K. <jkh...@gm...> - 2014-04-17 07:49:16
|
Hi All,

I want to add a few more coordinators and datanodes in pgxc. During the adding process the whole database is being copied to the new component.

I have a very large database, around 8GB; in this database only one table holds the large number of records.

Is there any way to exclude such a table while adding a new coordinator or datanode using the pgxc_ctl command ?

I want to exclude some tables just to make adding a coordinator and datanode smoother and faster.

Please suggest.

--
Thanks,
Juned Khan
<http://www.inextrix.com/> |
From: Michael P. <mic...@gm...> - 2014-04-17 06:10:27
|
On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
> And i can use pgpool and pgbouncer with pgxc right ?

In front of the Coordinators, that's fine. But I am not sure in front of the Datanodes, as XC has one extra connection parameter to identify the node type a connection comes from, and a couple of additional message types to pass down transaction ID, timestamp and snapshot data from Coordinator to Datanodes (actually to Coordinators as well, for DDL queries). If those message types and/or connection parameters get filtered by pgpool or pgbouncer, you cannot use them. I've personally never given it a try, but the idea is worth an attempt to reduce the lock contention that could be caused by a too-high value of max_connections.
--
Michael |
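To illustrate "in front of the Coordinators": a minimal pgbouncer.ini pointing at a single Coordinator. The host, port, database name, and pool sizes are placeholders, and this has not been validated against XC itself — only the pgbouncer setting names are standard:

```shell
# Placeholder values throughout; pgbouncer listens on 6432 and forwards
# to one assumed XC Coordinator on port 20004.
cat > /tmp/pgbouncer.ini <<'EOF'
[databases]
mydb = host=coord1-host port=20004 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
max_client_conn = 1000
default_pool_size = 50
EOF
echo "wrote /tmp/pgbouncer.ini"
```

With pooling in front, the Coordinator's own max_connections can stay at a modest value (default_pool_size) while many clients connect to pgbouncer.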
From: Juned K. <jkh...@gm...> - 2014-04-17 05:51:57
|
And i can use pgpool and pgbouncer with pgxc, right ?

On Thu, Apr 17, 2014 at 10:07 AM, Michael Paquier <mic...@gm...> wrote:
>
> On Thu, Apr 17, 2014 at 1:13 PM, Juned Khan <jkh...@gm...> wrote:
>
>> Hi Michael,
>>
>> Why i had to set the max_connections value that high: because it was giving
>> me an error like
>>
>> FATAL: sorry, too many clients already at /ust/local/...
>>
>> should i use pgpool to handle that many connections?
>>
> pgbouncer is a better choice if you just want connection pooling. pgpool
> is a swiss knife with more features than you may need.
> --
> Michael
>

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
From: Michael P. <mic...@gm...> - 2014-04-17 04:37:21
|
On Thu, Apr 17, 2014 at 1:13 PM, Juned Khan <jkh...@gm...> wrote:
> Hi Michael,
>
> Why i had to set the max_connections value that high: because it was giving
> me an error like
>
> FATAL: sorry, too many clients already at /ust/local/...
>
> should i use pgpool to handle that many connections?
>
pgbouncer is a better choice if you just want connection pooling. pgpool is a swiss knife with more features than you may need.
--
Michael |
From: Juned K. <jkh...@gm...> - 2014-04-17 04:16:12
|
Hi Koichi,

Can i search for the WAL file which contains this value, edit it, and then try to start the datanode slave again ?

How will it impact the existing components, i mean what risks are there ?

On Thu, Apr 17, 2014 at 9:43 AM, Juned Khan <jkh...@gm...> wrote:
> Hi Michael,
>
> Why i had to set the max_connections value that high: because it was giving
> me an error like
>
> FATAL: sorry, too many clients already at /ust/local/...
>
> should i use pgpool to handle that many connections?
>
> On Thu, Apr 17, 2014 at 6:41 AM, Michael Paquier <
> mic...@gm...> wrote:
>
>> On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote:
>>
>>> Hi All,
>>>
>>> When i tried to set max_connection value to 1000 it gave me this error
>>>
>> That's a lot, man! Concurrency between sessions is going to blow up your
>> performance.
>>
>>> From the docs i came to know that to use that much connection i have to
>>> modify kernel configuration
>>>
>>> But now i am trying to set 500 connections instead of 1000, and it is
>>> giving the error below.
>>>
>>> LOG: database system was interrupted while in recovery at log time
>>> 2014-04-16 09:06:06 WAT
>>> HINT: If this has occurred more than once some data might be corrupted
>>> and you might need to choose an earlier recovery target.
>>> LOG: entering standby mode
>>> LOG: restored log file "000000010000000B00000011" from archive
>>> FATAL: hot standby is not possible because max_connections = 500 is a
>>> lower setting than on the master server (its value was 1000)
>>> LOG: startup process (PID 16829) exited with exit code 1
>>> LOG: aborting startup due to startup process failure
>>>
>>> Anyone please suggest if any other way is there to fix this like clearing
>>> cache or something so it can read correct values.
>>>
>> Update the master first, then the slave.
>> --
>> Michael
>>

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com |
From: Juned K. <jkh...@gm...> - 2014-04-17 04:13:24
|
Hi Michael,

Why i had to set the max_connections value that high: because it was giving me an error like

FATAL: sorry, too many clients already at /ust/local/...

should i use pgpool to handle that many connections?

On Thu, Apr 17, 2014 at 6:41 AM, Michael Paquier <mic...@gm...> wrote:
>
> On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote:
>
>> Hi All,
>>
>> When i tried to set max_connection value to 1000 it gave me this error
>>
> That's a lot, man! Concurrency between sessions is going to blow up your
> performance.
>
>> From the docs i came to know that to use that much connection i have to
>> modify kernel configuration
>>
>> But now i am trying to set 500 connections instead of 1000, and it is
>> giving the error below.
>>
>> LOG: database system was interrupted while in recovery at log time
>> 2014-04-16 09:06:06 WAT
>> HINT: If this has occurred more than once some data might be corrupted
>> and you might need to choose an earlier recovery target.
>> LOG: entering standby mode
>> LOG: restored log file "000000010000000B00000011" from archive
>> FATAL: hot standby is not possible because max_connections = 500 is a
>> lower setting than on the master server (its value was 1000)
>> LOG: startup process (PID 16829) exited with exit code 1
>> LOG: aborting startup due to startup process failure
>>
>> Anyone please suggest if any other way is there to fix this like clearing
>> cache or something so it can read correct values.
>>
> Update the master first, then the slave.
> --
> Michael
> |
From: Michael P. <mic...@gm...> - 2014-04-17 01:11:13
|
On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote:
> Hi All,
>
> When i tried to set max_connection value to 1000 it gave me this error
>
That's a lot, man! Concurrency between sessions is going to blow up your performance.

> From the docs i came to know that to use that much connection i have to
> modify kernel configuration
>
> But now i am trying to set 500 connections instead of 1000, and it is
> giving the error below.
>
> LOG: database system was interrupted while in recovery at log time
> 2014-04-16 09:06:06 WAT
> HINT: If this has occurred more than once some data might be corrupted
> and you might need to choose an earlier recovery target.
> LOG: entering standby mode
> LOG: restored log file "000000010000000B00000011" from archive
> FATAL: hot standby is not possible because max_connections = 500 is a
> lower setting than on the master server (its value was 1000)
> LOG: startup process (PID 16829) exited with exit code 1
> LOG: aborting startup due to startup process failure
>
> Anyone please suggest if any other way is there to fix this like clearing
> cache or something so it can read correct values.
>
Update the master first, then the slave.
--
Michael |
From: 鈴木 幸市 <ko...@in...> - 2014-04-17 01:07:50
|
The setup of max_connections=1000 might have been written to WAL and shipped to the slave. This could be the cause of the issue. You can do the following to recover:

1. Run checkpoint and vacuum full at the master,
2. Rebuild the slave from scratch using pg_basebackup (pgxc_ctl provides this means),
3. Start the slave.

Regards;
---
Koichi Suzuki

2014/04/16 21:53, Juned Khan <jkh...@gm...> wrote:

> Hi All,
>
> When i tried to set max_connection value to 1000 it gave me this error
>
> FATAL: could not create semaphores: No space left on device
> DETAIL: Failed system call was semget(20008064, 17, 03600).
>
> From the docs i came to know that to use that much connection i have to
> modify kernel configuration
>
> But now i am trying to set 500 connections instead of 1000, and it is
> giving the error below.
>
> LOG: database system was interrupted while in recovery at log time 2014-04-16 09:06:06 WAT
> HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
> LOG: entering standby mode
> LOG: restored log file "000000010000000B00000011" from archive
> FATAL: hot standby is not possible because max_connections = 500 is a lower setting than on the master server (its value was 1000)
> LOG: startup process (PID 16829) exited with exit code 1
> LOG: aborting startup due to startup process failure
>
> It says the master value is 1000 but i have set it to 500. It seems to be reading old values, i guess.
>
> database=# show max_connections;
>  max_connections
> -----------------
>  500
> (1 row)
>
> I have restarted all components several times but no luck.
>
> Anyone please suggest if any other way is there to fix this like clearing cache or something so it can read correct values.
>
> Regards
> Juned Khan |
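The three recovery steps above can be sketched as shell commands. Hostnames, ports, and the slave data directory here are placeholders; when the cluster is managed by pgxc_ctl, the rebuild is normally driven from its own prompt rather than by hand:

```shell
# 1. On the master: flush and compact so the slave starts from a clean base.
psql -h master-host -p 5432 -d mydb -c "CHECKPOINT;"
psql -h master-host -p 5432 -d mydb -c "VACUUM FULL;"

# 2. Rebuild the slave data directory from scratch with a fresh base backup.
rm -rf /home/postgres/pgxc/nodes/dn_slave
pg_basebackup -h master-host -p 5432 -D /home/postgres/pgxc/nodes/dn_slave -P

# 3. Start the slave (recovery.conf / hot-standby settings assumed in place).
pg_ctl start -D /home/postgres/pgxc/nodes/dn_slave
```

Because the backup is taken after the master already runs with the corrected max_connections, the standby no longer sees the stale value of 1000 in the control data.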
From: Juned K. <jkh...@gm...> - 2014-04-16 12:53:23
|
Hi All,

When i tried to set max_connection value to 1000 it gave me this error:

FATAL: could not create semaphores: No space left on device
DETAIL: Failed system call was semget(20008064, 17, 03600).

From the docs i came to know that to use that much connection i have to modify the kernel configuration.

But now i am trying to set 500 connections instead of 1000, and it is giving the error below:

LOG: database system was interrupted while in recovery at log time 2014-04-16 09:06:06 WAT
HINT: If this has occurred more than once some data might be corrupted and you might need to choose an earlier recovery target.
LOG: entering standby mode
LOG: restored log file "000000010000000B00000011" from archive
FATAL: hot standby is not possible because max_connections = 500 is a lower setting than on the master server (its value was 1000)
LOG: startup process (PID 16829) exited with exit code 1
LOG: aborting startup due to startup process failure

It says the master value is 1000 but i have set it to 500. It seems to be reading old values, i guess.

database=# show max_connections;
 max_connections
-----------------
 500
(1 row)

I have restarted all components several times but no luck.

Anyone please suggest if any other way is there to fix this, like clearing cache or something, so it can read correct values.

Regards
Juned Khan |
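The semget failure above is a kernel semaphore limit, and its size can be estimated with the formula from the PostgreSQL "Managing Kernel Resources" documentation: roughly one set of 17 semaphores per 16 allowed backends (note the 17 in the failed semget call). A back-of-the-envelope check for max_connections = 1000, assuming the default autovacuum_max_workers = 3:

```shell
# Rough System V semaphore sizing, per the PostgreSQL kernel-resources docs:
# one set of 17 semaphores per 16 backends, where
# backends = max_connections + autovacuum_max_workers + 4.
max_connections=1000
autovacuum_max_workers=3
backends=$((max_connections + autovacuum_max_workers + 4))
semmni=$(( (backends + 15) / 16 ))   # ceil(backends / 16) semaphore sets
semmns=$((semmni * 17))              # total semaphores across those sets
echo "SEMMNI >= $semmni"
echo "SEMMNS >= $semmns"
```

If the kernel's SEMMNI/SEMMNS are below these values, semget fails with exactly the "No space left on device" error shown; raising them (e.g. via the kernel.sem sysctl) or lowering max_connections are the two ways out — and on XC, every coordinator and datanode on the host contributes its own requirement.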
From: 鈴木 幸市 <ko...@in...> - 2014-04-16 07:02:48
|
It is what we'd like to have in the next release. I'm planning to discuss the next XC major feature at the XC days in Ottawa in May; this meeting will be held as one of the PGCon-related meetings. This is definitely one we want to have, as well as enforcing cross-node referential constraints.

Thank you.
---
Koichi Suzuki

2014/04/16 15:53, Aaron Jackson <aja...@re...<mailto:aja...@re...>> wrote:

Yes, I completely understand why it has to be done. I was working with an in-memory distributed database earlier this year and it allowed constraints like these as long as the constraint included the partitioned/distributed key. In theory, that's viable because it means that the data node alone is capable of guaranteeing uniqueness without any further coordination. I'll try adding a unique index post-creation to see if it works.

Thank you

________________________________
From: 鈴木 幸市 [ko...@in...<mailto:ko...@in...>]
Sent: Tuesday, April 15, 2014 7:54 PM
To: Aaron Jackson
Cc: pos...@li...<mailto:pos...@li...>
Subject: Re: [Postgres-xc-general] Creating Unique Indices

It is in the list of future work. To enforce this, we need cross-node operations, which need additional infrastructure. So at present, you can add a unique index to a distributed table if the distribution column is involved. You can add unique indexes freely to replicated tables. For the same reason, you cannot add referential integrity between distributed tables.

Regards;
---
Koichi Suzuki

2014/04/16 4:36, Aaron Jackson <aja...@re...<mailto:aja...@re...>> wrote:

Is there any capability to create unique indices where one part of the constraint is the distribution key? In theory, if I distributed on column name, but then created a unique index on name + level, the constraint could be applied at the data node level.

Aaron |
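Koichi's rule — a unique index on a distributed table works only if it involves the distribution column — can be illustrated with a hypothetical table; the names here are made up and the sketch is untested:

```shell
# Hypothetical table, hash-distributed on "name".
psql -d mydb -c "CREATE TABLE items (name text, level int, payload text)
                 DISTRIBUTE BY HASH (name);"

# OK: the distribution column is part of the key, so every row sharing a
# given "name" lives on one datanode, which can enforce uniqueness locally.
psql -d mydb -c "CREATE UNIQUE INDEX items_name_level_idx
                 ON items (name, level);"

# Rejected by XC: uniqueness on "level" alone would require checks across
# datanodes, which is the missing cross-node infrastructure.
psql -d mydb -c "CREATE UNIQUE INDEX items_level_idx ON items (level);"
```

This matches Aaron's name + level case: because all candidate duplicates hash to the same node, no cross-node coordination is needed.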
From: Aaron J. <aja...@re...> - 2014-04-16 06:54:39
|
Yes, I completely understand why it has to be done. I was working with an in-memory distributed database earlier this year and it allowed constraints like these as long as the constraint included the partitioned/distributed key. In theory, that's viable because it means that the data node alone is capable of guaranteeing uniqueness without any further coordination. I'll try adding a unique index post-creation to see if it works.

Thank you

________________________________
From: 鈴木 幸市 [ko...@in...]
Sent: Tuesday, April 15, 2014 7:54 PM
To: Aaron Jackson
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] Creating Unique Indices

It is in the list of future work. To enforce this, we need cross-node operations, which need additional infrastructure. So at present, you can add a unique index to a distributed table if the distribution column is involved. You can add unique indexes freely to replicated tables. For the same reason, you cannot add referential integrity between distributed tables.

Regards;
---
Koichi Suzuki

2014/04/16 4:36, Aaron Jackson <aja...@re...<mailto:aja...@re...>> wrote:

Is there any capability to create unique indices where one part of the constraint is the distribution key? In theory, if I distributed on column name, but then created a unique index on name + level, the constraint could be applied at the data node level.

Aaron |
From: Aaron J. <aja...@re...> - 2014-04-16 06:52:15
|
Yes, I was able to isolate it down to the test clause of the loop evaluation at the end of ProcArrayRemove. To ensure I wasn't looking at volatile values, I copied the values used during the test so they weren't subject to changes to memory. I'll put the exact scenario back together again for you and post it here. It was pretty obvious when the loop began to exit prematurely and post-test values indicated that things which should be truths weren't (e.g. 1 < 3 = 1 but the test was returning 0). I will have an answer by the weekend.

Aaron

________________________________
From: 鈴木 幸市 [ko...@in...]
Sent: Sunday, April 13, 2014 7:55 PM
To: Aaron Jackson
Cc: pos...@li...
Subject: Re: [Postgres-xc-general] failed to find proc - increasing numProcs

Thank you Aaron for the detailed analysis. As long as the issue is specific to XC, we need a fix for it to work correctly regardless of compiler optimization. Did you locate where such a wrong evaluation takes place? And what compilation options did you use? They would be very helpful.

Best;
---
Koichi Suzuki

2014/04/12 11:40, Aaron Jackson <aja...@re...<mailto:aja...@re...>> wrote:

It appears that the problem is a compiler optimization issue. I narrowed the issue down to the loop at the end of the ProcArrayRemove method. I'm not entirely sure why, but the compiler generated code that evaluates the test block of the loop improperly. Since changing the compiler options, the problem has been resolved.

Aaron

________________________________
From: Aaron Jackson [aja...@re...]
Sent: Friday, April 11, 2014 1:07 AM
To: pos...@li...
Subject: Re: [Postgres-xc-general] failed to find proc - increasing numProcs

I forgot to mention that if I injected a context switch (sleep(0) did the trick, as did an elog statement) during the test in ProcArrayRemove, it no longer failed. Hopefully that will help in understanding why that may have caused ProcArrayRemove to succeed.

------------------------------------------------------------------------------
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test & Deployment
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees_______________________________________________
Postgres-xc-general mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Masataka S. <pg...@gm...> - 2014-04-16 06:24:40
|
It is worth taking a glance at the proxy log. AFAIK you don't need to remove register.node under typical operations. If you hit this issue, I think you'd better review your operations.

Regards.

On 15 April 2014 20:30, Juned Khan <jkh...@gm...> wrote:
> Hi Masataka,
>
> Your answer helped, but i am having this problem again. Do i need to remove
> the register.node file again ?
>
> GTM logs
>
> 1:140587359135488:2014-04-15 12:22:56.711 WAT -LOG: Saving transaction
> restoration info, backed-up gxid: 259975
> LOCATION: GTM_WriteRestorePointXid, gtm_txn.c:2649
> 1:140587359135488:2014-04-15 12:22:56.712 WAT -LOG: Started to run as
> GTM-Active.
> LOCATION: main, main.c:641
> 1:140587359135488:2014-04-15 12:22:57.712 WAT -LOG: Any GTM standby node
> not found in registered node(s).
> LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381
> 1:140587350980352:2014-04-15 12:22:57.717 WAT -LOG: Failed to establish a
> connection with GTM standby. - 0x13ac390
> LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:396
> 1:140587321448192:2014-04-15 12:27:02.055 WAT -LOG: unexpected EOF on
> client connection
> LOCATION: ReadCommand, main.c:1374
> 1:140587359135488:2014-04-15 12:27:03.928 WAT -LOG: GTM shutting down.
> LOCATION: ServerLoop, main.c:802 > 1:140587359135488:2014-04-15 12:27:03.928 WAT -LOG: Saving transaction info > - next_gxid: 257975 > LOCATION: GTM_SaveTxnInfo, gtm_txn.c:2632 > Received signal 15 > > > GTM Proxy logs > LOCATION: pgxcnode_add_info, register_common.c:249 > 1:140179440826112:2014-04-15 12:27:22.893 WAT -LOG: Node with the given ID > number already exists > LOCATION: pgxcnode_add_info, register_common.c:249 > 1:140179440826112:2014-04-15 12:27:22.893 WAT -LOG: Node with the given ID > number already exists > LOCATION: pgxcnode_add_info, register_common.c:249 > 1:140179440826112:2014-04-15 12:27:22.893 WAT -LOG: Node with the given ID > number already exists > LOCATION: pgxcnode_add_info, register_common.c:249 > 1:140179440826112:2014-04-15 12:27:22.893 WAT -LOG: Node with the given ID > number already exists > LOCATION: pgxcnode_add_info, register_common.c:249 > 1:140179440826112:2014-04-15 12:27:22.893 WAT -LOG: Node with the given ID > number already exists > LOCATION: pgxcnode_add_info, register_common.c:249 > > > > > On Tue, Apr 15, 2014 at 10:49 AM, Juned Khan <jkh...@gm...> wrote: >> >> Hi Masataka, >> >> Thanks for help >> >> >> On Tue, Apr 15, 2014 at 7:07 AM, Masataka Saito <pg...@gm...> wrote: >>> >>> I'm not sure but GTM or GTM proxy seems to be wrong. >>> >>> Please check their log. >>> If you find an issue, search postgres-xc-general ml for keyword >>> "register.node". >>> >>> Regards. 
>>> >>> On 14 April 2014 20:11, Juned Khan <jkh...@gm...> wrote: >>> > Here few more logs of the component which doesn't start >>> > >>> > root@db02:~# tail -f >>> > >>> > /home/postgres/pgxc/nodes/dn_master/pg_log/postgresql-2014-04-14_114619.log >>> > LOG: database system was interrupted; last known up at 2014-04-14 >>> > 11:00:55 >>> > WAT >>> > LOG: database system was not properly shut down; automatic recovery in >>> > progress >>> > LOG: record with zero length at 4/9A01A588 >>> > LOG: redo is not required >>> > FATAL: the database system is starting up >>> > LOG: autovacuum launcher started >>> > LOG: database system is ready to accept connections >>> > WARNING: worker took too long to start; canceled >>> > FATAL: Can not register Datanode on GTM >>> > >>> > >>> > root@db02:~# tail -f >>> > /home/postgres/pgxc/nodes/coord/pg_log/postgresql-2014-04-14_114612.log >>> > DEBUG: 00000: name: unnamed; blockState: DEFAULT; state: >>> > INPROGR, >>> > xid/subid/cid: 0/1/0, nestlvl: 1, children: >>> > LOCATION: ShowTransactionStateRec, xact.c:5238 >>> > DEBUG: 00000: Autovacuum launcher: connection to GTM closed >>> > LOCATION: CloseGTM, gtm.c:116 >>> > DEBUG: 00000: Autovacuum launcher: connection established to GTM with >>> > string host=db02 port=20002 node_name=coord1 >>> > LOCATION: InitGTM, gtm.c:84 >>> > WARNING: 01000: Xid is invalid. >>> > LOCATION: GetNewTransactionId, varsup.c:160 >>> > DEBUG: 00000: Getting snapshot. 
Current XID = 0 >>> > LOCATION: GetSnapshotDataCoordinator, procarray.c:3054 >>> > DEBUG: 00000: Autovacuum launcher: connection to GTM closed >>> > LOCATION: CloseGTM, gtm.c:116 >>> > DEBUG: 00000: Autovacuum launcher: connection established to GTM with >>> > string host=db02 port=20002 node_name=coord1 >>> > LOCATION: InitGTM, gtm.c:84 >>> > >>> > >>> > >>> > >>> > >>> > On Mon, Apr 14, 2014 at 3:56 PM, Juned Khan <jkh...@gm...> >>> > wrote: >>> >> >>> >> Hi all, >>> >> >>> >> Yesterday due some problem datanode slave stopped working on one of my >>> >> DB >>> >> server. After figuring out the issue i freed out the space on that >>> >> server. >>> >> now almost 50% disk is free on server. >>> >> >>> >> the problem is after that incident i am not able start all my pgxc >>> >> components, now its uncertain some of components starts and sometimes >>> >> it >>> >> does not each time when i execute "stop all" and "start all" scenario >>> >> is >>> >> different.I am not able to figure out the problem which causing this >>> >> issue. >>> >> >>> >> I have enabled the debug log though and tried to identify the issue >>> >> and >>> >> after reviewing the logs it seems memory related issue, i am not sure >>> >> about >>> >> this. >>> >> >>> >> PGXC start all >>> >> Start GTM master >>> >> gtm_ctl: PID file "/home/postgres/pgxc/nodes/gtm/gtm.pid" does not >>> >> exist >>> >> Is server running? >>> >> server starting >>> >> Start GTM slavegtm_ctl: PID file >>> >> "/home/postgres/pgxc/nodes/gtm/gtm.pid" >>> >> does not exist >>> >> Is server running? >>> >> server starting >>> >> Done. >>> >> Starting all the gtm proxies. >>> >> Starting gtm proxy gtm_pxy1. >>> >> Starting gtm proxy gtm_pxy2. >>> >> gtm_proxy: no process found >>> >> server starting >>> >> gtm_proxy: no process found >>> >> server starting >>> >> Done. >>> >> Starting coordinator master. 
>>> >> Starting coordinator master coord1 >>> >> Starting coordinator master coord2 >>> >> DEBUG: 00000: postgres: PostmasterMain: initial environment dump: >>> >> LOCATION: PostmasterMain, postmaster.c:962 >>> >> DEBUG: 00000: ----------------------------------------- >>> >> LOCATION: PostmasterMain, postmaster.c:964 >>> >> DEBUG: 00000: MAIL=/var/mail/postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: SSH_CLIENT=41.218.72.115 48992 59696 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: USER=postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LANGUAGE=en_ZA:en >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: SHLVL=1 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: HOME=/home/postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: PG_GRANDPARENT_PID=19891 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LOGNAME=postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: _=/usr/local/bin/pg_ctl >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: PGSYSCONFDIR=/usr/local/pgsql/etc >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: >>> >> PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LANG=en_ZA.UTF-8 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: SHELL=/bin/bash >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: PWD=/home/postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: SSH_CONNECTION=41.218.72.115 48992 41.218.72.115 59696 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: PGDATA=/home/postgres/pgxc/nodes/coord >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LC_COLLATE=en_ZA.UTF-8 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 
00000: LC_CTYPE=en_ZA.UTF-8 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LC_MESSAGES=en_ZA.UTF-8 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LC_MONETARY=C >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LC_NUMERIC=C >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LC_TIME=C >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: ----------------------------------------- >>> >> LOCATION: PostmasterMain, postmaster.c:969 >>> >> DEBUG: 00000: invoking IpcMemoryCreate(size=148193280) >>> >> LOCATION: CreateSharedMemoryAndSemaphores, ipci.c:149 >>> >> DEBUG: 00000: SlruScanDirectory invoking callback on pg_notify/0000 >>> >> LOCATION: SlruScanDirectory, slru.c:1312 >>> >> DEBUG: 00000: removing file "pg_notify/0000" >>> >> LOCATION: SlruScanDirCbDeleteAll, slru.c:1277 >>> >> DEBUG: 00000: max_safe_fds = 984, usable_fds = 1000, already_open = 6 >>> >> LOCATION: set_max_safe_fds, fd.c:548 >>> >> LOG: 00000: redirecting log output to logging collector process >>> >> HINT: Future log output will appear in directory "pg_log". 
>>> >> LOCATION: SysLogger_Start, syslogger.c:649 >>> >> DEBUG: 00000: postgres: PostmasterMain: initial environment dump: >>> >> LOCATION: PostmasterMain, postmaster.c:962 >>> >> DEBUG: 00000: ----------------------------------------- >>> >> LOCATION: PostmasterMain, postmaster.c:964 >>> >> DEBUG: 00000: MAIL=/var/mail/postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: SSH_CLIENT=41.218.72.115 33169 59696 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: USER=postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LANGUAGE=en_ZA:en >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: SHLVL=1 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: HOME=/home/postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: PG_GRANDPARENT_PID=19507 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LOGNAME=postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: _=/usr/local/bin/pg_ctl >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: PGSYSCONFDIR=/usr/local/pgsql/etc >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: >>> >> PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LANG=en_ZA.UTF-8 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: SHELL=/bin/bash >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: PWD=/home/postgres >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: SSH_CONNECTION=41.218.72.115 33169 41.218.72.114 59696 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: PGDATA=/home/postgres/pgxc/nodes/coord >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LC_COLLATE=en_ZA.UTF-8 >>> >> LOCATION: PostmasterMain, postmaster.c:967 >>> >> DEBUG: 00000: LC_CTYPE=en_ZA.UTF-8 >>> >> 
>>> >> LOCATION: PostmasterMain, postmaster.c:967
>>> >> DEBUG: 00000: LC_MESSAGES=en_ZA.UTF-8
>>> >> LOCATION: PostmasterMain, postmaster.c:967
>>> >> DEBUG: 00000: LC_MONETARY=C
>>> >> LOCATION: PostmasterMain, postmaster.c:967
>>> >> DEBUG: 00000: LC_NUMERIC=C
>>> >> LOCATION: PostmasterMain, postmaster.c:967
>>> >> DEBUG: 00000: LC_TIME=C
>>> >> LOCATION: PostmasterMain, postmaster.c:967
>>> >> DEBUG: 00000: -----------------------------------------
>>> >> LOCATION: PostmasterMain, postmaster.c:969
>>> >> DEBUG: 00000: invoking IpcMemoryCreate(size=148193280)
>>> >> LOCATION: CreateSharedMemoryAndSemaphores, ipci.c:149
>>> >> DEBUG: 00000: SlruScanDirectory invoking callback on pg_notify/0000
>>> >> LOCATION: SlruScanDirectory, slru.c:1312
>>> >> DEBUG: 00000: removing file "pg_notify/0000"
>>> >> LOCATION: SlruScanDirCbDeleteAll, slru.c:1277
>>> >> DEBUG: 00000: max_safe_fds = 984, usable_fds = 1000, already_open = 6
>>> >> LOCATION: set_max_safe_fds, fd.c:548
>>> >> LOG: 00000: redirecting log output to logging collector process
>>> >> HINT: Future log output will appear in directory "pg_log".
>>> >> LOCATION: SysLogger_Start, syslogger.c:649
>>> >> DEBUG: 00000: logger shutting down
>>> >> LOCATION: SysLoggerMain, syslogger.c:517
>>> >> DEBUG: 00000: shmem_exit(0): 0 callbacks to make
>>> >> LOCATION: shmem_exit, ipc.c:212
>>> >> DEBUG: 00000: proc_exit(0): 0 callbacks to make
>>> >> LOCATION: proc_exit_prepare, ipc.c:184
>>> >> DEBUG: 00000: exit(0)
>>> >> LOCATION: proc_exit, ipc.c:135
>>> >> DEBUG: 00000: shmem_exit(-1): 0 callbacks to make
>>> >> LOCATION: shmem_exit, ipc.c:212
>>> >> DEBUG: 00000: proc_exit(-1): 0 callbacks to make
>>> >> LOCATION: proc_exit_prepare, ipc.c:184
>>> >> pg_ctl: could not start server
>>> >> Examine the log output.
>>> >> Done.
>>> >> Starting all the datanode masters.
>>> >> Starting datanode master datanode1.
>>> >> LOG: redirecting log output to logging collector process
>>> >> HINT: Future log output will appear in directory "pg_log".
>>> >> Done.
>>> >> Starting all the datanode slaves.
>>> >> Starting datanode slave datanode1.
>>> >> LOG: redirecting log output to logging collector process
>>> >> HINT: Future log output will appear in directory "pg_log".
>>> >> Done.
>>> >> PGXC monitor all
>>> >> Running: gtm master
>>> >> Running: gtm slave
>>> >> Running: gtm proxy gtm_pxy1
>>> >> Running: gtm proxy gtm_pxy2
>>> >> Running: coordinator master coord1
>>> >> Not running: coordinator master coord2
>>> >> Not running: datanode master datanode1
>>> >> Running: datanode slave datanode1
>>> >>
>>> >> My DB server has 32 GB RAM and 260 GB of hard disk, and I am using
>>> >> pgxc-1.2.1. What would be the optimal memory-related postgresql.conf
>>> >> configuration for this server?
>>> >>
>>> >> Does anyone have an idea about this issue?
>>> >>
>>> >> --
>>> >> Thanks,
>>> >> Juned Khan
>>> >> iNextrix Technologies Pvt Ltd.
>>> >> www.inextrix.com
>>> >
>>> > ------------------------------------------------------------------------------
>>> > Learn Graph Databases - Download FREE O'Reilly Book
>>> > "Graph Databases" is the definitive new guide to graph databases and their
>>> > applications. Written by three acclaimed leaders in the field,
>>> > this first edition is now available. Download your free book today!
>>> > http://p.sf.net/sfu/NeoTech
>>> > _______________________________________________
>>> > Postgres-xc-general mailing list
>>> > Pos...@li...
>>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
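On the 32 GB sizing question above, the thread itself gives no answer; as a rough sketch only, the usual PostgreSQL starting values for a dedicated 32 GB host look like the following (all numbers are rule-of-thumb assumptions to tune per workload, and if several XC coordinators/datanodes share one machine these budgets must be divided among them):

```conf
# Hypothetical starting point for a dedicated 32 GB host; not from this thread.
shared_buffers = 8GB            # ~25% of RAM is the common starting point
effective_cache_size = 24GB     # planner hint: RAM the OS will likely use for caching
work_mem = 32MB                 # per sort/hash node, per backend -- keep modest
maintenance_work_mem = 1GB      # used by VACUUM, CREATE INDEX, etc.
```

These parameters are standard postgresql.conf settings and apply per node in a Postgres-XC cluster.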
From: 鈴木 幸市 <ko...@in...> - 2014-04-16 00:55:03
It is in the list of future work. To enforce this, we need a cross-node operation, which needs additional infrastructure. So at present, you can add a unique index to a distributed table only if the distribution column is involved. You can add unique indexes freely to replicated tables. For the same reason, you cannot add referential integrity constraints between distributed tables.

Regards;
---
Koichi Suzuki

2014/04/16 4:36, Aaron Jackson <aja...@re...> wrote:

Is there any capability to create unique indices where one part of the constraint is the distribution key? In theory, if I distributed on column name but then created a unique index on name + level, the constraint could be applied at the data node level.

Aaron
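Aaron's scenario can be written out concretely (table and index names here are hypothetical, for illustration only). Because hash distribution on "name" sends all rows with the same name to one datanode, a unique constraint whose key includes "name" can be enforced locally on each node, which is exactly the case Postgres-XC permits:

```sql
-- Hypothetical table distributed by hash on "name".
CREATE TABLE users (
    name  text,
    level int,
    bio   text
) DISTRIBUTE BY HASH (name);

-- Allowed: the distribution column is part of the unique key,
-- so each datanode can check uniqueness on its own rows.
CREATE UNIQUE INDEX users_name_level_ux ON users (name, level);

-- Rejected: uniqueness on "level" alone would require comparing
-- rows across datanodes, which XC does not support.
-- CREATE UNIQUE INDEX users_level_ux ON users (level);
```

On a replicated table (DISTRIBUTE BY REPLICATION), every node holds all rows, so any unique index can be created, as Koichi notes above.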