From: Juned K. <jkh...@gm...> - 2014-04-17 11:24:06
Hi Koichi,

Is there any other, shorter solution to fix this issue?

1. Run a checkpoint and VACUUM FULL at the master. I found the docs for
   running VACUUM FULL, but I am not sure how to run a checkpoint manually.
2. Build the slave from scratch using pg_basebackup (pgxc_ctl provides this
   means). Should I run this command from the datanode slave server,
   something like:

   pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432

And, very importantly, how will it impact the datanode master server? As of
now I have only this master running on the server, so I don't want to take
any chances; I just want to get the slave started for backup.

Please advise.

Regards,
Juned

On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote:
> So I have to experiment with this. I really need that many connections.
>
> Thanks for the suggestion @Michael
>
> On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <mic...@gm...> wrote:
>> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
>> > And I can use pgpool and pgbouncer with pgxc, right?
>> In front of the Coordinators, that's fine. But I am not sure in front
>> of the Datanodes, as XC has one extra connection parameter to identify
>> the node type a connection comes from, and a couple of additional
>> message types to pass down transaction ID, timestamp and snapshot data
>> from Coordinator to Datanodes (Coordinators as well, actually, for DDL
>> queries). If those message types and/or connection parameters get
>> filtered by pgpool or pgbouncer, you cannot use them. I've personally
>> never given it a try, but the idea is worth an attempt to reduce the
>> lock contention that could be caused by too high a value of
>> max_connections.
>> --
>> Michael
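The two steps being asked about can be sketched as shell commands. This is
only a sketch: the host, port, and target directory come from the message
above, while the user name and database name are assumptions. A manual
checkpoint is just the SQL command CHECKPOINT issued by a superuser.

```shell
# Step 1 - on the master: force a checkpoint, then reclaim space.
# CHECKPOINT is a plain SQL command; run it through psql as a superuser.
psql -U postgres -h 192.168.1.17 -p 5432 -d postgres -c "CHECKPOINT;"
psql -U postgres -h 192.168.1.17 -p 5432 -d postgres -c "VACUUM FULL;"

# Step 2 - on the slave server: rebuild the data directory from the master.
# -R writes a recovery.conf so the node starts as a standby;
# the target directory must be empty before the copy.
pg_basebackup -U postgres -R -D /srv/pgsql/standby \
    --host=192.168.1.17 --port=5432
```

pg_basebackup streams a copy from the running master, so the master stays
up throughout; the main impact is extra I/O and a replication connection or
two while the copy runs.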
From: Juned K. <jkh...@gm...> - 2014-04-17 11:13:29
So I have to experiment with this. I really need that many connections.

Thanks for the suggestion @Michael

On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <mic...@gm...> wrote:
> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
> > And I can use pgpool and pgbouncer with pgxc, right?
> In front of the Coordinators, that's fine. But I am not sure in front
> of the Datanodes, as XC has one extra connection parameter to identify
> the node type a connection comes from, and a couple of additional
> message types to pass down transaction ID, timestamp and snapshot data
> from Coordinator to Datanodes (Coordinators as well, actually, for DDL
> queries). If those message types and/or connection parameters get
> filtered by pgpool or pgbouncer, you cannot use them. I've personally
> never given it a try, but the idea is worth an attempt to reduce the
> lock contention that could be caused by too high a value of
> max_connections.
> --
> Michael

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
From: Juned K. <jkh...@gm...> - 2014-04-17 09:30:39
Ohh, I see. I think I read the same earlier, but I can still see data in
the newly created node:

myDB=# select count(*) from accounts;
 count
-------
    79
(1 row)

And all datanodes have the same records, so I wonder where the data comes
from. Is it synced automatically?

On Thu, Apr 17, 2014 at 1:53 PM, 鈴木 幸市 <ko...@in...> wrote:
> While adding a coordinator or a datanode, only catalogs are copied. You
> should issue ALTER TABLE to each table to redistribute table rows. If
> you don't want a new datanode to be involved in some tables, you can
> skip ALTER TABLE for them; their rows will stay distributed over the old
> set of datanodes.
>
> If you do it manually, it's a long process and there can be many
> pitfalls. Hope pgxc_ctl helps.
>
> Regards;
> ---
> Koichi Suzuki
>
> On 2014/04/17 16:49, Juned Khan <jkh...@gm...> wrote:
>
> Hi All,
>
> I want to add a few more coordinators and datanodes in pgxc. During the
> adding process the whole database is copied to the new component.
>
> I have a very large database, around 8GB; in this database only one
> table holds the large number of records.
>
> Is there any way to exclude such a table while adding a new coordinator
> or datanode using the pgxc_ctl command? I want to exclude some tables
> just to make adding a coordinator or datanode smoother and faster.
>
> Please suggest.
>
> --
> Thanks,
> Juned Khan
> <http://www.inextrix.com/>
>
> ------------------------------------------------------------------------------
> Learn Graph Databases - Download FREE O'Reilly Book
> "Graph Databases" is the definitive new guide to graph databases and their
> applications. Written by three acclaimed leaders in the field,
> this first edition is now available. Download your free book today!
> http://p.sf.net/sfu/NeoTech
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
From: Ashutosh B. <ash...@en...> - 2014-04-17 09:16:41
On Thu, Apr 17, 2014 at 1:19 PM, Juned Khan <jkh...@gm...> wrote:
> Hi All,
>
> I want to add a few more coordinators and datanodes in pgxc. During the
> adding process the whole database is copied to the new component.

That shouldn't happen. It should copy only the object definitions and
catalog information, not the data. How are you copying the database?

> I have a very large database, around 8GB; in this database only one
> table holds the large number of records.
>
> Is there any way to exclude such a table while adding a new coordinator
> or datanode using the pgxc_ctl command? I want to exclude some tables
> just to make adding a coordinator or datanode smoother and faster.
>
> Please suggest.
>
> --
> Thanks,
> Juned Khan
> <http://www.inextrix.com/>

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
From: 鈴木 幸市 <ko...@in...> - 2014-04-17 08:23:55
While adding a coordinator or a datanode, only catalogs are copied. You
should issue ALTER TABLE to each table to redistribute table rows. If you
don't want a new datanode to be involved in some tables, you can skip
ALTER TABLE for them; their rows will stay distributed over the old set of
datanodes.

If you do it manually, it's a long process and there can be many pitfalls.
Hope pgxc_ctl helps.

Regards;
---
Koichi Suzuki

On 2014/04/17 16:49, Juned Khan <jkh...@gm...> wrote:

Hi All,

I want to add a few more coordinators and datanodes in pgxc. During the
adding process the whole database is copied to the new component.

I have a very large database, around 8GB; in this database only one table
holds the large number of records.

Is there any way to exclude such a table while adding a new coordinator or
datanode using the pgxc_ctl command? I want to exclude some tables just to
make adding a coordinator or datanode smoother and faster.

Please suggest.

--
Thanks,
Juned Khan
<http://www.inextrix.com/>
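The redistribution step Koichi describes can be sketched as follows,
assuming a hash-distributed table named `accounts` and a new datanode
registered as `dn3` (both names are placeholders, as is the distribution
column `aid`); recent Postgres-XC releases accept ADD NODE and
DISTRIBUTE BY ... TO NODE clauses on ALTER TABLE for this.

```shell
# Add the new datanode to the table's node set; XC rehashes the rows.
# "myDB", "accounts" and "dn3" are placeholder names for this sketch.
psql -U postgres -d myDB -c "ALTER TABLE accounts ADD NODE (dn3);"

# Alternatively, restate the full distribution in one statement:
psql -U postgres -d myDB -c \
    "ALTER TABLE accounts DISTRIBUTE BY HASH (aid) TO NODE (dn1, dn2, dn3);"
```

Tables you skip keep their rows on the old set of datanodes, exactly as
described above.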
From: Juned K. <jkh...@gm...> - 2014-04-17 07:49:16
Hi All,

I want to add a few more coordinators and datanodes in pgxc. During the
adding process the whole database is copied to the new component.

I have a very large database, around 8GB; in this database only one table
holds the large number of records.

Is there any way to exclude such a table while adding a new coordinator or
datanode using the pgxc_ctl command? I want to exclude some tables just to
make adding a coordinator or datanode smoother and faster.

Please suggest.

--
Thanks,
Juned Khan
<http://www.inextrix.com/>
From: Michael P. <mic...@gm...> - 2014-04-17 06:10:27
On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote:
> And I can use pgpool and pgbouncer with pgxc, right?

In front of the Coordinators, that's fine. But I am not sure in front of
the Datanodes, as XC has one extra connection parameter to identify the
node type a connection comes from, and a couple of additional message
types to pass down transaction ID, timestamp and snapshot data from
Coordinator to Datanodes (Coordinators as well, actually, for DDL
queries). If those message types and/or connection parameters get filtered
by pgpool or pgbouncer, you cannot use them. I've personally never given
it a try, but the idea is worth an attempt to reduce the lock contention
that could be caused by too high a value of max_connections.
--
Michael
From: Juned K. <jkh...@gm...> - 2014-04-17 05:51:57
And I can use pgpool and pgbouncer with pgxc, right?

On Thu, Apr 17, 2014 at 10:07 AM, Michael Paquier <mic...@gm...> wrote:
> On Thu, Apr 17, 2014 at 1:13 PM, Juned Khan <jkh...@gm...> wrote:
>> Hi Michael,
>>
>> Why did I have to set max_connections that high? Because it was giving
>> me an error like:
>>
>> FATAL: sorry, too many clients already at /ust/local/...
>>
>> Should I use pgpool to handle that many connections?
>
> pgbouncer is a better choice if you just want connection pooling. pgpool
> is a swiss-army knife with more features than you may need.
> --
> Michael

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
From: Michael P. <mic...@gm...> - 2014-04-17 04:37:21
On Thu, Apr 17, 2014 at 1:13 PM, Juned Khan <jkh...@gm...> wrote:
> Hi Michael,
>
> Why did I have to set max_connections that high? Because it was giving
> me an error like:
>
> FATAL: sorry, too many clients already at /ust/local/...
>
> Should I use pgpool to handle that many connections?

pgbouncer is a better choice if you just want connection pooling. pgpool
is a swiss-army knife with more features than you may need.
--
Michael
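Following Michael's suggestion, a minimal pgbouncer.ini fragment for
pooling in front of a Coordinator might look like the sketch below. Every
host, port, path, and database name here is a placeholder, and the pool
sizes are illustrative, not recommendations.

```ini
; Pool client connections in front of a Coordinator (not a Datanode).
; All names and addresses below are placeholders.
[databases]
myDB = host=192.168.1.17 port=5432 dbname=myDB

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session        ; safest default; XC relies on session state
max_client_conn = 1000     ; clients may open this many connections...
default_pool_size = 50     ; ...but the Coordinator sees at most this many
```

The point of the split between max_client_conn and default_pool_size is
that the Coordinator's max_connections can then stay small, avoiding both
the semaphore limits and the lock contention discussed in this thread.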
From: Juned K. <jkh...@gm...> - 2014-04-17 04:16:12
Hi Koichi,

Can I search for the WAL file which contains this value, edit it, and then
try to start the datanode slave again? How would that impact the existing
components? I mean, what are the risks?

On Thu, Apr 17, 2014 at 9:43 AM, Juned Khan <jkh...@gm...> wrote:
> Hi Michael,
>
> Why did I have to set max_connections that high? Because it was giving
> me an error like:
>
> FATAL: sorry, too many clients already at /ust/local/...
>
> Should I use pgpool to handle that many connections?
>
> On Thu, Apr 17, 2014 at 6:41 AM, Michael Paquier <mic...@gm...> wrote:
>> On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote:
>>> Hi All,
>>>
>>> When I tried to set the max_connections value to 1000 it gave me this
>>> error.
>> That's a lot, man! Concurrency between sessions is going to blow up
>> your performance.
>>
>>> From the docs I learned that to use that many connections I have to
>>> modify the kernel configuration.
>>>
>>> But now I am trying to set 500 connections instead of 1000, and it
>>> gives the error below.
>>>
>>> LOG: database system was interrupted while in recovery at log time
>>> 2014-04-16 09:06:06 WAT
>>> HINT: If this has occurred more than once some data might be corrupted
>>> and you might need to choose an earlier recovery target.
>>> LOG: entering standby mode
>>> LOG: restored log file "000000010000000B00000011" from archive
>>> FATAL: hot standby is not possible because max_connections = 500 is a
>>> lower setting than on the master server (its value was 1000)
>>> LOG: startup process (PID 16829) exited with exit code 1
>>> LOG: aborting startup due to startup process failure
>>>
>>> Please suggest any other way to fix this, like clearing a cache or
>>> something, so it can read the correct values.
>> Update the master first, then the slave.
>> --
>> Michael

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
From: Juned K. <jkh...@gm...> - 2014-04-17 04:13:24
Hi Michael,

Why did I have to set max_connections that high? Because it was giving me
an error like:

FATAL: sorry, too many clients already at /ust/local/...

Should I use pgpool to handle that many connections?

On Thu, Apr 17, 2014 at 6:41 AM, Michael Paquier <mic...@gm...> wrote:
> On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote:
>> Hi All,
>>
>> When I tried to set the max_connections value to 1000 it gave me this
>> error.
> That's a lot, man! Concurrency between sessions is going to blow up your
> performance.
>
>> From the docs I learned that to use that many connections I have to
>> modify the kernel configuration.
>>
>> But now I am trying to set 500 connections instead of 1000, and it
>> gives the error below.
>>
>> LOG: database system was interrupted while in recovery at log time
>> 2014-04-16 09:06:06 WAT
>> HINT: If this has occurred more than once some data might be corrupted
>> and you might need to choose an earlier recovery target.
>> LOG: entering standby mode
>> LOG: restored log file "000000010000000B00000011" from archive
>> FATAL: hot standby is not possible because max_connections = 500 is a
>> lower setting than on the master server (its value was 1000)
>> LOG: startup process (PID 16829) exited with exit code 1
>> LOG: aborting startup due to startup process failure
>>
>> Please suggest any other way to fix this, like clearing a cache or
>> something, so it can read the correct values.
> Update the master first, then the slave.
> --
> Michael
From: Michael P. <mic...@gm...> - 2014-04-17 01:11:13
On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote:
> Hi All,
>
> When I tried to set the max_connections value to 1000 it gave me this
> error.

That's a lot, man! Concurrency between sessions is going to blow up your
performance.

> From the docs I learned that to use that many connections I have to
> modify the kernel configuration.
>
> But now I am trying to set 500 connections instead of 1000, and it gives
> the error below.
>
> LOG: database system was interrupted while in recovery at log time
> 2014-04-16 09:06:06 WAT
> HINT: If this has occurred more than once some data might be corrupted
> and you might need to choose an earlier recovery target.
> LOG: entering standby mode
> LOG: restored log file "000000010000000B00000011" from archive
> FATAL: hot standby is not possible because max_connections = 500 is a
> lower setting than on the master server (its value was 1000)
> LOG: startup process (PID 16829) exited with exit code 1
> LOG: aborting startup due to startup process failure
>
> Please suggest any other way to fix this, like clearing a cache or
> something, so it can read the correct values.

Update the master first, then the slave.
--
Michael
From: 鈴木 幸市 <ko...@in...> - 2014-04-17 01:07:50
The setting max_connections=1000 might have been written to WAL and
shipped to the slave. This could be the cause of the issue. You can do the
following to recover:

1. Run a checkpoint and VACUUM FULL at the master.
2. Build the slave from scratch using pg_basebackup (pgxc_ctl provides
   this means).
3. Start the slave.

Regards;
---
Koichi Suzuki

On 2014/04/16 21:53, Juned Khan <jkh...@gm...> wrote:
> Hi All,
>
> When I tried to set the max_connections value to 1000 it gave me this
> error:
>
> FATAL: could not create semaphores: No space left on device
> DETAIL: Failed system call was semget(20008064, 17, 03600).
>
> From the docs I learned that to use that many connections I have to
> modify the kernel configuration.
>
> But now I am trying to set 500 connections instead of 1000, and it gives
> the error below.
>
> LOG: database system was interrupted while in recovery at log time
> 2014-04-16 09:06:06 WAT
> HINT: If this has occurred more than once some data might be corrupted
> and you might need to choose an earlier recovery target.
> LOG: entering standby mode
> LOG: restored log file "000000010000000B00000011" from archive
> FATAL: hot standby is not possible because max_connections = 500 is a
> lower setting than on the master server (its value was 1000)
> LOG: startup process (PID 16829) exited with exit code 1
> LOG: aborting startup due to startup process failure
>
> It says the master value is 1000, but I have set it to 500. It seems to
> be reading old values, I guess.
>
> database=# show max_connections;
>  max_connections
> -----------------
>  500
> (1 row)
>
> I have restarted all components several times, but no luck.
>
> Please suggest any other way to fix this, like clearing a cache or
> something, so it can read the correct values.
>
> Regards
> Juned Khan
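The "could not create semaphores" failure quoted above is the kernel
running out of System V semaphores, and the semget(..., 17, ...) in the
DETAIL line matches PostgreSQL's allocation of one 17-semaphore set per 16
allowed connections. The kernel-resources chapter of the PostgreSQL docs
gives the sizing rule; here is the arithmetic for max_connections = 1000,
with autovacuum_max_workers assumed at its default of 3:

```shell
# Semaphore demand per PostgreSQL's kernel-resources formula:
#   SEMMNI >= ceil((max_connections + autovacuum_max_workers + 5) / 16)
#   SEMMNS >= SEMMNI * 17
max_connections=1000
autovacuum_max_workers=3
total=$((max_connections + autovacuum_max_workers + 5))
semmni=$(( (total + 15) / 16 ))   # ceiling division
semmns=$(( semmni * 17 ))
echo "SEMMNI >= $semmni"
echo "SEMMNS >= $semmns"
```

Raising the computed limits (plus headroom for other software) via sysctl,
e.g. the kernel.sem setting on Linux, is what "modify kernel
configuration" in the message above refers to.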