From: Masataka S. <pg...@gm...> - 2014-02-20 02:16:05
|
I'm afraid you have hit this issue. Can you check the logs on the GTM and the GTM proxies?

https://sourceforge.net/p/postgres-xc/bugs/474/

If that proves to be the case, please take the following steps:

1. Stop all components.
2. Remove register.node in the GTM and GTM proxy data directories.
3. Start the GTM.
4. Start the GTM proxies.
5. Start the coordinators and datanodes.

The following materials will help you with the configuration:

* Postgres Open 2013 (especially the SPOF analysis in the slide deck)
http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=Presentation_Materials
The configuration topics described in these materials are now deprecated, but they are still very helpful for understanding what pgxc_ctl does.

* PGDay.EU, Oct 24th, 2012, Prague, Czech Republic
http://postgres-xc.sourceforge.net/misc-docs/Prague_Presentation_20121024.pdf

* Presentation at PGConfChina 2012, Jun 16th, 2012, Beijing, China (I believe this is the best material for geeks)
http://postgres-xc.sourceforge.net/misc-docs/PGConfChina_20120610_Distribution.pdf

Regards.

On 20 February 2014 02:03, Sergio Sinuco <ser...@da...> wrote:
> Hello everybody.
>
> I have just begun to try Postgres-XC 1.1. I followed the configuration
> tutorial for real servers on postgresxc.wikia.com:
>
> http://postgresxc.wikia.com/wiki/Real_Server_configuration
>
> I set up three nodes: the first one with a GTM, and the second and third
> each with a GTM proxy, a datanode and a coordinator.
>
> Everything is OK until I kill the GTM process on the first node. After
> restarting everything on all nodes, I can't insert new data into tables.
> When I try, the following message is shown:
>
> ERROR:
> STATEMENT: INSERT INTO public.test(name) VALUES ('CARLOS'::text)
>
> Then I can't connect to the coordinators, and the following message is
> shown:
>
> ERROR: GTM error, could not obtain snapshot XID = 0
> WARNING: can not connect to GTM: Connection refused
>
> I know that there is a way to configure Postgres-XC for high availability
> (GTM standby and other features), but the question is: when the GTM
> process and even its contingency crash, is there no way to restore correct
> behaviour of the cluster?
>
> Thanks.
>
> --
> Sergio E. Sinuco Leon
> Arquitecto de soluciones
> Datatraffic S.A.S.
> Móvil: (57) 310 884 26 50
> Fijo (+571) 7426160 Ext 115
> Calle 93 # 15-27 Ofc. 502
> Calle 29 # 6 - 94 Ofc. 601
> Bogotá, Colombia.
> www.datatraffic.com.co
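A minimal command-level sketch of that restart sequence, assuming a hand-configured layout like the wiki tutorial's; the data directory paths below are illustrative, not taken from this thread, so adjust -Z values, ports and directories to your own setup:

    # 1. stop all components: coordinators and datanodes first, then proxies, then GTM
    pg_ctl stop -D /pgxc/coord -m fast
    pg_ctl stop -D /pgxc/datanode -m fast
    gtm_ctl stop -Z gtm_proxy -D /pgxc/gtm_proxy
    gtm_ctl stop -Z gtm -D /pgxc/gtm

    # 2. remove the stale registration file on the GTM and GTM proxy
    rm /pgxc/gtm/register.node /pgxc/gtm_proxy/register.node

    # 3-5. restart in order: GTM, GTM proxy, then coordinators and datanodes
    gtm_ctl start -Z gtm -D /pgxc/gtm
    gtm_ctl start -Z gtm_proxy -D /pgxc/gtm_proxy
    pg_ctl start -Z coordinator -D /pgxc/coord
    pg_ctl start -Z datanode -D /pgxc/datanode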
From: Koichi S. <koi...@gm...> - 2014-02-20 01:20:02
|
We've found one of the causes of this problem, and a patch has been submitted to the developers mailing list. I hope this will be included in the minor release as well. The thread is titled:

Yet another analysis of "GTM error, could not obtain snapshot"

You will find the patch in the developers mailing list. It would be very helpful if you could try this patch against 1.1 and let us know how it works.

Masataka: could you provide some information on this patch, which release it is for, and your view of how it relates to Sergio's case?

Regards;
---
Koichi Suzuki

2014-02-20 2:03 GMT+09:00 Sergio Sinuco <ser...@da...>:
> Hello everybody.
>
> I have just begun to try Postgres-XC 1.1. I followed the configuration
> tutorial for real servers on postgresxc.wikia.com:
>
> http://postgresxc.wikia.com/wiki/Real_Server_configuration
>
> I set up three nodes: the first one with a GTM, and the second and third
> each with a GTM proxy, a datanode and a coordinator.
>
> Everything is OK until I kill the GTM process on the first node. After
> restarting everything on all nodes, I can't insert new data into tables.
> When I try, the following message is shown:
>
> ERROR:
> STATEMENT: INSERT INTO public.test(name) VALUES ('CARLOS'::text)
>
> Then I can't connect to the coordinators, and the following message is
> shown:
>
> ERROR: GTM error, could not obtain snapshot XID = 0
> WARNING: can not connect to GTM: Connection refused
>
> I know that there is a way to configure Postgres-XC for high availability
> (GTM standby and other features), but the question is: when the GTM
> process and even its contingency crash, is there no way to restore correct
> behaviour of the cluster?
>
> Thanks.
>
> --
> Sergio E. Sinuco Leon
> Arquitecto de soluciones
> Datatraffic S.A.S.
> Móvil: (57) 310 884 26 50
> Fijo (+571) 7426160 Ext 115
> Calle 93 # 15-27 Ofc. 502
> Calle 29 # 6 - 94 Ofc. 601
> Bogotá, Colombia.
> www.datatraffic.com.co
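For anyone who wants to test it, applying a patch from the developers list to an existing 1.1 source tree looks roughly like this; the patch file name is only a placeholder for whatever is attached to that thread:

    cd postgres-xc                        # your existing 1.1 source tree
    git checkout REL1_1_STABLE
    git apply ~/gtm-snapshot-fix.patch    # placeholder name for the posted patch
    make && make install                  # then restart the rebuilt components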
From: Sergio S. <ser...@da...> - 2014-02-19 17:35:19
|
Hello everybody.

I have just begun to try Postgres-XC 1.1. I followed the configuration tutorial for real servers on postgresxc.wikia.com:

http://postgresxc.wikia.com/wiki/Real_Server_configuration

I set up three nodes: the first one with a GTM, and the second and third each with a GTM proxy, a datanode and a coordinator.

Everything is OK until I kill the GTM process on the first node. After restarting everything on all nodes, I can't insert new data into tables. When I try, the following message is shown:

ERROR:
STATEMENT: INSERT INTO public.test(name) VALUES ('CARLOS'::text)

Then I can't connect to the coordinators, and the following message is shown:

ERROR: GTM error, could not obtain snapshot XID = 0
WARNING: can not connect to GTM: Connection refused

I know that there is a way to configure Postgres-XC for high availability (GTM standby and other features), but the question is: when the GTM process and even its contingency crash, is there no way to restore correct behaviour of the cluster?

Thanks.

--
Sergio E. Sinuco Leon
Arquitecto de soluciones
Datatraffic S.A.S.
Móvil: (57) 310 884 26 50
Fijo (+571) 7426160 Ext 115
Calle 93 # 15-27 Ofc. 502
Calle 29 # 6 - 94 Ofc. 601
Bogotá, Colombia.
www.datatraffic.com.co
From: Koichi S. <koi...@gm...> - 2014-02-17 07:38:14
|
If by master-master you just mean replication, you should define each table with DISTRIBUTE BY REPLICATION.

Adding servers can involve complicated steps, so I advise using pgxc_ctl to configure and maintain your cluster. You will find the documentation at:

http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html

When you add a new server, you can add a new datanode/coordinator/gtm_proxy with pgxc_ctl commands. You can also redistribute a table (in this case, add a node to replicate its data onto) with ALTER TABLE. The documentation is at:

http://postgres-xc.sourceforge.net/docs/1_1/sql-altertable.html

Note that XC is not intended to be used for multi-master replication. It assumes that you have a few big, very frequently updated tables (transaction tables), which should be distributed rather than replicated, and other tables which are relatively stable and are frequently joined with the transaction tables (master tables).

Regards;
---
Koichi Suzuki

2014-02-17 15:51 GMT+09:00 Juned Khan <jkh...@gm...>:
> Hi All,
>
> I want to use Postgres-XC for multi-master use. I want to set up two
> servers with a GTM, coordinator and datanodes, and in the future I want to
> add other servers with the same configuration as well.
>
> To test this, I have set up two servers, with the GTM, coordinator and
> datanode on the same server. I followed the link below for the
> configuration:
>
> http://postgresxc.wikia.com/wiki/Laptop_configuration
>
> Now, how do I configure these two systems as a master-master
> configuration, given that I will add more masters in the future?
>
> I found a link for the GTM standby configuration, so I set up a GTM
> standby on a separate system using the link below:
>
> http://postgresxc.wikia.com/wiki/GTM_Standby_Configuration
>
> Then I started the GTM, datanodes and coordinators on both servers, and
> tried to run the command below on the standby server:
>
> gtm_ctl promote -Z gtm -D /home/postgresxc/pgxc/gtm/ -o "-s gtm -t 20001"
>
> But I got this error:
>
> gtm_ctl: could not send promote signal (PID: 11937): No such process
>
> So my question is: am I going the right way to achieve my goal? How do I
> avoid this error?
>
> --
> Thanks,
> Juned Khan
> iNextrix Technologies Pvt Ltd.
> www.inextrix.com
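As a rough illustration of what that looks like in SQL, run from psql against one of the coordinators; the table and node names here are made up, and the exact syntax for your release is in the linked documentation:

    psql -h localhost -p 5432 -d postgres <<'SQL'
    -- tables that should behave like "multi-master" copies are replicated
    CREATE TABLE master_data (id int PRIMARY KEY, payload text)
        DISTRIBUTE BY REPLICATION TO NODE (dn1, dn2);

    -- large, frequently updated tables are usually distributed instead
    CREATE TABLE transactions (id bigint, account int, amount numeric)
        DISTRIBUTE BY HASH (account) TO NODE (dn1, dn2);

    -- after a new datanode dn3 has been added (e.g. via pgxc_ctl),
    -- extend the replicated table onto it
    ALTER TABLE master_data ADD NODE (dn3);
    SQL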
From: Juned K. <jkh...@gm...> - 2014-02-17 06:51:41
|
Hi All,

I want to use Postgres-XC for multi-master use. I want to set up two servers with a GTM, coordinator and datanodes, and in the future I want to add other servers with the same configuration as well.

To test this, I have set up two servers, with the GTM, coordinator and datanode on the same server. I followed the link below for the configuration:

http://postgresxc.wikia.com/wiki/Laptop_configuration

Now, how do I configure these two systems as a master-master configuration, given that I will add more masters in the future?

I found a link for the GTM standby configuration, so I set up a GTM standby on a separate system using the link below:

http://postgresxc.wikia.com/wiki/GTM_Standby_Configuration

Then I started the GTM, datanodes and coordinators on both servers, and tried to run the command below on the standby server:

gtm_ctl promote -Z gtm -D /home/postgresxc/pgxc/gtm/ -o "-s gtm -t 20001"

But I got this error:

gtm_ctl: could not send promote signal (PID: 11937): No such process

So my question is: am I going the right way to achieve my goal? How do I avoid this error?

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
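On the promote error itself: "No such process" generally means gtm_ctl found a PID to signal but no GTM standby is actually running with that PID, so it is worth confirming the standby is up before promoting. A rough sketch, using the path from the command above; the standby's gtm.conf should already mark it as a standby per the wiki page:

    # is a (standby) GTM actually running out of this data directory?
    ps aux | grep '[g]tm'
    ls /home/postgresxc/pgxc/gtm/      # the PID file gtm_ctl reads lives here

    # start the standby first if it is down, then promote it
    gtm_ctl start -Z gtm -D /home/postgresxc/pgxc/gtm/
    gtm_ctl promote -Z gtm -D /home/postgresxc/pgxc/gtm/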
From: Rishi R. <the...@gm...> - 2014-02-16 08:19:25
|
FYI: I've been running some more tests on the REL1_2_STABLE branch to see if I can find more information on what's wrong with my system. To keep my setup simple, I have reverted to using initdb and initgtm to create the cluster and am running only the gtm, coordinator and datanode. They are all hosted on the same machine and communicate using TCP sockets on localhost. TL;DR there is a stack trace at the end of the email. I've added some more debug logging to src/backend/storage/ipc/procarray.c 436: elog(LOG, "failed to find proc %p in ProcArray", proc); + elog(LOG, "pid %d", proc->pid); + elog(LOG, "pgprocno %d", proc->pgprocno); The logs only seem to start happening after I connect using psql. Here's what they look like on the coordinator without any psql commands issued: SID 5300263f.163d LOG: 00000: database system was shut down at 2014-02-15 19:13:59 EST SID 5300263f.163d LOCATION: StartupXLOG, xlog.c:4959 SID 5300263f.163b LOG: 00000: database system is ready to accept connections SID 5300263f.163b LOCATION: reaper, postmaster.c:2768 SID 5300263f.1642 LOG: 00000: autovacuum launcher started SID 5300263f.1642 LOCATION: AutoVacLauncherMain, autovacuum.c:417 SID 5300276c.16c3 LOG: 00000: failed to find proc 0x7f777d08ab80 in ProcArray SID 5300276c.16c3 LOCATION: ProcArrayRemove, procarray.c:436 SID 5300276c.16c3 LOG: 00000: pid 5827 SID 5300276c.16c3 LOCATION: ProcArrayRemove, procarray.c:437 SID 5300276c.16c3 LOG: 00000: pgprocno 102 SID 5300276c.16c3 LOCATION: ProcArrayRemove, procarray.c:438 SID 530027a8.16f9 LOG: 00000: failed to find proc 0x7f777d08ab80 in ProcArray SID 530027a8.16f9 LOCATION: ProcArrayRemove, procarray.c:436 SID 530027a8.16f9 LOG: 00000: pid 5881 SID 530027a8.16f9 LOCATION: ProcArrayRemove, procarray.c:437 SID 530027a8.16f9 LOG: 00000: pgprocno 102 SID 530027a8.16f9 LOCATION: ProcArrayRemove, procarray.c:438 Here is what the logs in the coordinator look like after I issued drop database test; SID 53002923.1764 LOG: 00000: statement: drop database test; SID 53002923.1764 LOCATION: exec_simple_query, postgres.c:966 SID 53002923.1764 LOG: 00000: failed to find proc 0x7f777d08d980 in ProcArray SID 53002923.1764 LOCATION: ProcArrayRemove, procarray.c:436 SID 53002923.1764 STATEMENT: drop database test; SID 53002923.1764 LOG: 00000: pid 0 SID 53002923.1764 LOCATION: ProcArrayRemove, procarray.c:437 SID 53002923.1764 STATEMENT: drop database test; SID 53002923.1764 LOG: 00000: pgprocno 118 SID 53002923.1764 LOCATION: ProcArrayRemove, procarray.c:438 SID 53002923.1764 STATEMENT: drop database test; SID 5300294c.176a LOG: 00000: failed to find proc 0x7f777d08ab80 in ProcArray SID 5300294c.176a LOCATION: ProcArrayRemove, procarray.c:436 SID 5300294c.176a LOG: 00000: pid 5994 SID 5300294c.176a LOCATION: ProcArrayRemove, procarray.c:437 SID 5300294c.176a LOG: 00000: pgprocno 102 SID 5300294c.176a LOCATION: ProcArrayRemove, procarray.c:438 SID 53002988.1784 LOG: 00000: failed to find proc 0x7f777d08ab80 in ProcArray SID 53002988.1784 LOCATION: ProcArrayRemove, procarray.c:436 SID 53002988.1784 LOG: 00000: pid 6020 SID 53002988.1784 LOCATION: ProcArrayRemove, procarray.c:437 SID 53002988.1784 LOG: 00000: pgprocno 102 SID 53002988.1784 LOCATION: ProcArrayRemove, procarray.c:438 SID 530029c4.1791 LOG: 00000: failed to find proc 0x7f777d08ab80 in ProcArray SID 530029c4.1791 LOCATION: ProcArrayRemove, procarray.c:436 SID 530029c4.1791 LOG: 00000: pid 6033 SID 530029c4.1791 LOCATION: ProcArrayRemove, procarray.c:437 SID 530029c4.1791 LOG: 00000: pgprocno 102 SID 530029c4.1791 
LOCATION: ProcArrayRemove, procarray.c:438 SID 53002a00.17c6 LOG: 00000: failed to find proc 0x7f777d08ab80 in ProcArray SID 53002a00.17c6 LOCATION: ProcArrayRemove, procarray.c:436 SID 53002a00.17c6 LOG: 00000: pid 6086 SID 53002a00.17c6 LOCATION: ProcArrayRemove, procarray.c:437 SID 53002a00.17c6 LOG: 00000: pgprocno 102 SID 53002a00.17c6 LOCATION: ProcArrayRemove, procarray.c:438 At this point, these logs start appearing in the datanode's log: SID 53002927.1766 LOG: 00000: failed to find proc 0x7f46c031e980 in ProcArray SID 53002927.1766 LOCATION: ProcArrayRemove, procarray.c:436 SID 53002927.1766 STATEMENT: COMMIT PREPARED 'T10226' SID 53002927.1766 LOG: 00000: pid 0 SID 53002927.1766 LOCATION: ProcArrayRemove, procarray.c:437 SID 53002927.1766 STATEMENT: COMMIT PREPARED 'T10226' SID 53002927.1766 LOG: 00000: pgprocno 118 SID 53002927.1766 LOCATION: ProcArrayRemove, procarray.c:438 SID 53002927.1766 STATEMENT: COMMIT PREPARED 'T10226' SID 5300293d.1767 LOG: 00000: failed to find proc 0x7f46c031bb80 in ProcArray SID 5300293d.1767 LOCATION: ProcArrayRemove, procarray.c:436 SID 5300293d.1767 LOG: 00000: pid 5991 SID 5300293d.1767 LOCATION: ProcArrayRemove, procarray.c:437 SID 5300293d.1767 LOG: 00000: pgprocno 102 SID 5300293d.1767 LOCATION: ProcArrayRemove, procarray.c:438 SID 53002979.1781 LOG: 00000: failed to find proc 0x7f46c031bb80 in ProcArray SID 53002979.1781 LOCATION: ProcArrayRemove, procarray.c:436 SID 53002979.1781 LOG: 00000: pid 6017 SID 53002979.1781 LOCATION: ProcArrayRemove, procarray.c:437 SID 53002979.1781 LOG: 00000: pgprocno 102 SID 53002979.1781 LOCATION: ProcArrayRemove, procarray.c:438 SID 530029b5.178e LOG: 00000: failed to find proc 0x7f46c031bb80 in ProcArray ... When I disconnect psql, the logs stop in the coordinator but continue in the datanode (there's an open connection between the coordinator and the datanode). After I set autovacuum = off, the logs with pgprocno 102 disappeared in both the datanode and coordinator. I originally set out to get stack traces for both proc 118 and 102. Unfortunately, it seems that the autovacuum launcher spawns multiple processes making it difficult to intercept the process with gdb. 
I was able to get a stack trace for pgprocno 118 from the open connection between the datanode and coordinator: #0 ProcArrayRemove (proc=proc@entry=0x7ffcb25f9980, latestXid=latestXid@entry=10437) at procarray.c:436 #1 0x00000000004b69ed in FinishPreparedTransaction (gid=<optimized out>, isCommit=<optimized out>) at twophase.c:1368 #2 0x0000000000674551 in standard_ProcessUtility (parsetree=0x29d4110, queryString=0x29d3730 "COMMIT PREPARED 'T10437'", context=PROCESS_UTILITY_TOPLEVEL, params=0x0, dest=<optimized out>, sentToRemote=<optimized out>, completionTag=0x7fff2c308910 "") at utility.c:574 #3 0x0000000000670d02 in PortalRunUtility (portal=portal@entry=0x29d99c0, utilityStmt=utilityStmt@entry=0x29d4110, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x29d4450, completionTag=completionTag@entry=0x7fff2c308910 "") at pquery.c:1285 #4 0x00000000006718cd in PortalRunMulti (portal=portal@entry=0x29d99c0, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x29d4450, altdest=altdest@entry=0x29d4450, completionTag=completionTag@entry=0x7fff2c308910 "") at pquery.c:1432 #5 0x00000000006723e9 in PortalRun (portal=portal@entry=0x29d99c0, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', dest=dest@entry=0x29d4450, altdest=altdest@entry=0x29d4450, completionTag=completionTag@entry=0x7fff2c308910 "") at pquery.c:882 #6 0x00000000006702d2 in exec_simple_query (query_string=0x29d3730 "COMMIT PREPARED 'T10437'") at postgres.c:1140 #7 PostgresMain (argc=<optimized out>, argv=argv@entry=0x29bb360, dbname=0x29bb288 "postgres", username=<optimized out>) at postgres.c:4251 #8 0x00000000004621c6 in BackendRun (port=0x29dd9a0) at postmaster.c:4205 #9 BackendStartup (port=0x29dd9a0) at postmaster.c:3894 #10 ServerLoop () at postmaster.c:1705 #11 0x000000000062f768 in PostmasterMain (argc=argc@entry=4, argv=argv@entry=0x29b9340) at postmaster.c:1374 #12 0x0000000000462b47 in main (argc=4, argv=0x29b9340) at main.c:196 I'm not familiar with the codebase, so I'm not entirely sure what's happening. As far as I can tell, FinishPreparedTransaction is trying to end a two phase commit, which it cannot find. I did a select * from pg_prepared_xacts on both the coordinator and the datanode, but both tables were empty. The commits seem to complete successfully despite the log. I'll keep digging to see what I can find. On Sat, Feb 15, 2014 at 2:44 AM, Rishi Ramraj <the...@gm...>wrote: > I just recompiled XC using the REL1_2_STABLE branch. I used pgxc_ctl to > configure and create the cluster. The physical configuration is still the > same, except now there's a GTM proxy on the machine as well. > > The database seems to be functioning correctly, but I'm still getting the > ProcArray error. 
Here's the log from the coordinator: > > 2014-02-15 02:26:28 EST SID 52ff16a4.3cb0 XID 0LOG: 00000: database > system was shut down at 2014-02-15 02:23:33 EST > 2014-02-15 02:26:28 EST SID 52ff16a4.3cb0 XID 0LOCATION: StartupXLOG, > xlog.c:4959 > 2014-02-15 02:26:28 EST SID 52ff16a4.3ca5 XID 0LOG: 00000: database > system is ready to accept connections > 2014-02-15 02:26:28 EST SID 52ff16a4.3ca5 XID 0LOCATION: reaper, > postmaster.c:2768 > 2014-02-15 02:26:28 EST SID 52ff16a4.3cb5 XID 0LOG: 00000: autovacuum > launcher started > 2014-02-15 02:26:28 EST SID 52ff16a4.3cb5 XID 0LOCATION: > AutoVacLauncherMain, autovacuum.c:417 > 2014-02-15 02:28:53 EST SID 52ff1727.3cd2 XID 0ERROR: 42601: syntax error > at or near "asdf" at character 1 > 2014-02-15 02:28:53 EST SID 52ff1727.3cd2 XID 0LOCATION: scanner_yyerror, > scan.l:1044 > 2014-02-15 02:28:53 EST SID 52ff1727.3cd2 XID 0STATEMENT: asdf > ; > 2014-02-15 02:29:30 EST SID 52ff1759.3cd3 XID 0LOG: 00000: failed to find > proc 0x7f3c12ff5b80 in ProcArray > 2014-02-15 02:29:30 EST SID 52ff1759.3cd3 XID 0LOCATION: ProcArrayRemove, > procarray.c:436 > 2014-02-15 02:30:30 EST SID 52ff1795.3cd5 XID 0LOG: 00000: failed to find > proc 0x7f3c12ff5b80 in ProcArray > 2014-02-15 02:30:30 EST SID 52ff1795.3cd5 XID 0LOCATION: ProcArrayRemove, > procarray.c:436 > 2014-02-15 02:31:01 EST SID 52ff1727.3cd2 XID 0LOG: 00000: statement: > create node data with (type='datanode', host='localhost', port=20008); > 2014-02-15 02:31:01 EST SID 52ff1727.3cd2 XID 0LOCATION: > exec_simple_query, postgres.c:966 > 2014-02-15 02:31:08 EST SID 52ff1727.3cd2 XID 0ERROR: 42601: syntax error > at or near "exit" at character 1 > 2014-02-15 02:31:08 EST SID 52ff1727.3cd2 XID 0LOCATION: scanner_yyerror, > scan.l:1044 > 2014-02-15 02:31:08 EST SID 52ff1727.3cd2 XID 0STATEMENT: exit > ; > 2014-02-15 02:32:29 EST SID 52ff1808.3d1e XID 0LOG: 00000: statement: > create database test; > 2014-02-15 02:32:29 EST SID 52ff1808.3d1e XID 0LOCATION: > exec_simple_query, postgres.c:966 > 2014-02-15 02:32:29 EST SID 52ff180d.3d20 XID 0LOG: 00000: failed to find > proc 0x7f3c12ff5b80 in ProcArray > 2014-02-15 02:32:29 EST SID 52ff180d.3d20 XID 0LOCATION: ProcArrayRemove, > procarray.c:436 > 2014-02-15 02:32:31 EST SID 52ff1808.3d1e XID 2054LOG: 00000: failed to > find proc 0x7f3c12ff8980 in ProcArray > 2014-02-15 02:32:31 EST SID 52ff1808.3d1e XID 2054LOCATION: > ProcArrayRemove, procarray.c:436 > 2014-02-15 02:32:31 EST SID 52ff1808.3d1e XID 2054STATEMENT: create > database test; > 2014-02-15 02:33:30 EST SID 52ff1849.3e12 XID 0LOG: 00000: failed to find > proc 0x7f3c12ff5b80 in ProcArray > 2014-02-15 02:33:30 EST SID 52ff1849.3e12 XID 0LOCATION: ProcArrayRemove, > procarray.c:436 > 2014-02-15 02:34:30 EST SID 52ff1885.3e2e XID 0LOG: 00000: failed to find > proc 0x7f3c12ff5b80 in ProcArray > 2014-02-15 02:34:30 EST SID 52ff1885.3e2e XID 0LOCATION: ProcArrayRemove, > procarray.c:436 > 2014-02-15 02:35:30 EST SID 52ff18c1.3e30 XID 0LOG: 00000: failed to find > proc 0x7f3c12ff5b80 in ProcArray > 2014-02-15 02:35:30 EST SID 52ff18c1.3e30 XID 0LOCATION: ProcArrayRemove, > procarray.c:436 > > Here's the log from the datanode: > > LOG: database system was shut down at 2014-02-15 02:23:35 EST > LOG: database system is ready to accept connections > LOG: autovacuum launcher started > ERROR: PGXC Node data: object already defined > STATEMENT: create node data with (type='coordinator', host='localhost', > port=20004); > LOG: failed to find proc 0x7f5a2cc55b80 in ProcArray > LOG: failed to find proc 
0x7f5a2cc58980 in ProcArray > STATEMENT: COMMIT PREPARED 'T2051' > LOG: failed to find proc 0x7f5a2cc55b80 in ProcArray > LOG: failed to find proc 0x7f5a2cc55b80 in ProcArray > LOG: failed to find proc 0x7f5a2cc55b80 in ProcArray > LOG: failed to find proc 0x7f5a2cc55b80 in ProcArray > LOG: failed to find proc 0x7f5a2cc55b80 in ProcArray > > I've also attached the pgxcConf file I used to generate the cluster. I'm > going to try to run it under gdb and get a stack trace. Let me know if > there are any other diagnostics you would like me to run. That being said, > the database in its current condition is fine for my testing :) > > > On Thu, Feb 13, 2014 at 11:14 PM, Rishi Ramraj < > the...@gm...> wrote: > >> Will give it a shot. Thanks for the help! >> >> >> On Thu, Feb 13, 2014 at 11:05 PM, Koichi Suzuki <koi...@gm...>wrote: >> >>> I'm afraid something is wrong inside but sorry I couldn't locate what >>> it is. Series of error message is supposed to be from COMMIT, ABORT, >>> COMMIT PREPARED or ABORT PREPARED. >>> >>> I'd advise to recreate the cluster with pgxc_ctl. I hope this is >>> better to track what is going on. You will find materials for this >>> from PGXC wiki page. Try www.postgres-xc.org, >>> >>> Regards; >>> --- >>> Koichi Suzuki >>> >>> >>> 2014-02-14 12:57 GMT+09:00 Rishi Ramraj <the...@gm...>: >>> > Apparently I left the datanode running, and after a while the logs were >>> > filled with the following: >>> > >>> > FATAL: sorry, too many clients already >>> > >>> > Here are the logs with the increased log levels. First on the >>> coordinator: >>> > >>> > 2014-02-13 22:24:31 EST SID 52fd8c6f.386 XID 0LOG: database system >>> was shut >>> > down at 2014-02-13 22:07:11 EST >>> > 2014-02-13 22:24:31 EST SID 52fd8c6e.382 XID 0LOG: database system is >>> ready >>> > to accept connections >>> > 2014-02-13 22:24:31 EST SID 52fd8c6f.38b XID 0LOG: autovacuum launcher >>> > started >>> > 2014-02-13 22:45:18 EST SID 52fd914a.59f XID 0LOG: statement: drop >>> database >>> > test; >>> > 2014-02-13 22:45:18 EST SID 52fd914a.59f XID 11571ERROR: database >>> "test" >>> > does not exist >>> > 2014-02-13 22:45:18 EST SID 52fd914a.59f XID 11571STATEMENT: drop >>> database >>> > test; >>> > 2014-02-13 22:45:32 EST SID 52fd915c.5a2 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:45:33 EST SID 52fd914a.59f XID 0LOG: statement: create >>> > database test; >>> > 2014-02-13 22:45:35 EST SID 52fd914a.59f XID 11575LOG: failed to find >>> proc >>> > 0x7f0297dbca60 in ProcArray >>> > 2014-02-13 22:45:35 EST SID 52fd914a.59f XID 11575STATEMENT: create >>> > database test; >>> > 2014-02-13 22:46:32 EST SID 52fd9198.5b1 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:47:02 EST SID 52fd91b6.5b6 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:47:32 EST SID 52fd91d4.5bb XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:48:02 EST SID 52fd91f2.5be XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:48:32 EST SID 52fd9210.5c7 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:49:02 EST SID 52fd922e.5d1 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:49:32 EST SID 52fd924c.5d6 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:50:02 EST SID 52fd926a.5da XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:50:32 
EST SID 52fd9288.5e0 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:51:02 EST SID 52fd92a6.5e3 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:51:32 EST SID 52fd92c4.5ee XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > 2014-02-13 22:52:02 EST SID 52fd92e2.5f1 XID 0LOG: failed to find proc >>> > 0x7f0297db9c60 in ProcArray >>> > >>> > Next, on the datanode: >>> > >>> > 2014-02-13 22:24:21 EST SID 52fd8c65.33a XID 0LOG: database system >>> was shut >>> > down at 2014-02-13 22:23:35 EST >>> > 2014-02-13 22:24:21 EST SID 52fd8c64.336 XID 0LOG: database system is >>> ready >>> > to accept connections >>> > 2014-02-13 22:24:21 EST SID 52fd8c65.33e XID 0LOG: autovacuum launcher >>> > started >>> > 2014-02-13 22:45:34 EST SID 52fd915e.5a4 XID 0LOG: statement: START >>> > TRANSACTION ISOLATION LEVEL read committed READ WRITE >>> > 2014-02-13 22:45:34 EST SID 52fd915e.5a4 XID 0LOG: statement: create >>> > database test; >>> > 2014-02-13 22:45:35 EST SID 52fd915e.5a4 XID 11574LOG: statement: >>> PREPARE >>> > TRANSACTION 'T11574' >>> > 2014-02-13 22:45:35 EST SID 52fd915e.5a4 XID 0LOG: statement: COMMIT >>> > PREPARED 'T11574' >>> > 2014-02-13 22:45:35 EST SID 52fd915e.5a4 XID 11575LOG: failed to find >>> proc >>> > 0x7f1c3de46a60 in ProcArray >>> > 2014-02-13 22:45:35 EST SID 52fd915e.5a4 XID 11575STATEMENT: COMMIT >>> > PREPARED 'T11574' >>> > 2014-02-13 22:46:22 EST SID 52fd918e.5ae XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > 2014-02-13 22:47:22 EST SID 52fd91ca.5b8 XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > 2014-02-13 22:48:22 EST SID 52fd9206.5c2 XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > 2014-02-13 22:49:22 EST SID 52fd9242.5d3 XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > 2014-02-13 22:50:22 EST SID 52fd927e.5dc XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > 2014-02-13 22:51:22 EST SID 52fd92ba.5e5 XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > 2014-02-13 22:52:22 EST SID 52fd92f6.5f5 XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > 2014-02-13 22:53:22 EST SID 52fd9332.5fd XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > 2014-02-13 22:54:22 EST SID 52fd936e.60d XID 0LOG: failed to find proc >>> > 0x7f1c3de43c60 in ProcArray >>> > >>> > Would you like me to keep experimenting on this installation, or >>> should I >>> > recreate the cluster? >>> > >>> > >>> > On Thu, Feb 13, 2014 at 9:49 PM, Koichi Suzuki <koi...@gm...> >>> > wrote: >>> >> >>> >> I don't see anything strange in the configuration file. >>> >> >>> >> I found regression test sets up gtm port explicitly (value is the >>> >> default, 6666, though). Could you try to configure your cluster >>> >> with pgxc_ctl, which is far simpler and you have much less chance to >>> >> encounter errors? This utility comes with built-in command sequences >>> >> needed to configure and operate your cluster. >>> >> >>> >> Or, could you re-run the same with different log_min_message level? >>> >> >>> >> Regards; >>> >> --- >>> >> Koichi Suzuki >>> >> >>> >> >>> >> 2014-02-14 11:16 GMT+09:00 Rishi Ramraj <the...@gm... >>> >: >>> >> > Missed the coordinator.conf file. Find attached. >>> >> > >>> >> > >>> >> > On Thu, Feb 13, 2014 at 9:15 PM, Rishi Ramraj >>> >> > <the...@gm...> >>> >> > wrote: >>> >> >> >>> >> >> Find all of the files attached. 
I have the GTM, coordinator and >>> >> >> datanode >>> >> >> services all hosted on the same Linux Mint machine for testing. >>> They >>> >> >> communicate using the loopback interface. >>> >> >> >>> >> >> To initialize the cluster, I ran the following: >>> >> >> >>> >> >> /var/lib/postgres-xc/gtm $ initgtm -Z >>> >> >> /var/lib/postgres-xc/data $ initdb -D . --nodename data >>> >> >> /var/lib/postgres-xc/coord $ initdb -D . --nodename coord >>> >> >> >>> >> >> To start the cluster, I use upstart: >>> >> >> >>> >> >> /usr/local/pgsql/bin/gtm -D /var/lib/postgres-xc/gtm >>> >> >> /usr/local/pgsql/bin/postgres --datanode -D >>> /var/lib/postgres-xc/data >>> >> >> /usr/local/pgsql/bin/postgres --coordinator -D >>> >> >> /var/lib/postgres-xc/coord >>> >> >> >>> >> >> I will increase the log level and try to reproduce the errors. >>> >> >> >>> >> >> >>> >> >> On Thu, Feb 13, 2014 at 8:39 PM, 鈴木 幸市 <ko...@in...> >>> >> >> wrote: >>> >> >>> >>> >> >>> Hello; >>> >> >>> >>> >> >>> You need to set log_statement GUC to appropriate value. Default >>> is >>> >> >>> “none”. “all” prints all statements accepted. Also, it will >>> be a >>> >> >>> good >>> >> >>> idea to include session ID in the log_line_prefix GUC. Your >>> >> >>> postgresql.conf has some information how to set this. I tested >>> >> >>> REL1_1_STABLE with four coordinators, four datanodes and four >>> >> >>> gtm_proxy and >>> >> >>> did not have any additional log for any coordinators/datanodes. >>> It >>> >> >>> will >>> >> >>> be helpful what statements you issued. >>> >> >>> >>> >> >>> Here’s what I got in REL1_1_STABLE (I configured the cluster using >>> >> >>> pgxc_ctl). >>> >> >>> >>> >> >>> Coordinator: >>> >> >>> LOG: database system was shut down at 2014-02-14 10:2sd >>> >> >>> LOG: autovacuum launcher started >>> >> >>> >>> >> >>> Datanode: >>> >> >>> LOG: database system was shut down at 2014-02-14 10:21:08 JST >>> >> >>> LOG: database system is ready to accept connections >>> >> >>> LOG: autovacuum launcher started >>> >> >>> >>> >> >>> I’d like to have your configuration, each postgresql.conf file and >>> >> >>> what >>> >> >>> you’ve done to initialize, start and run your application. >>> >> >>> >>> >> >>> Best; >>> >> >>> --- >>> >> >>> Koichi Suzuki >>> >> >>> >>> >> >>> 2014/02/14 0:38、Rishi Ramraj <the...@gm...> のメール: >>> >> >>> >>> >> >>> I just tried with REL1_1_STABLE. 
Here are my logs from the >>> >> >>> coordinator: >>> >> >>> >>> >> >>> LOG: database system was shut down at 2014-02-13 10:14:16 EST >>> >> >>> LOG: database system is ready to accept connections >>> >> >>> LOG: autovacuum launcher started >>> >> >>> ERROR: syntax error at or near "asdf" at character 1 >>> >> >>> STATEMENT: asdf; >>> >> >>> LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray >>> >> >>> STATEMENT: create database test; >>> >> >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray >>> >> >>> ERROR: cannot drop the currently open database >>> >> >>> STATEMENT: drop database test; >>> >> >>> LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray >>> >> >>> STATEMENT: drop database test; >>> >> >>> ERROR: syntax error at or near "blah" at character 1 >>> >> >>> STATEMENT: blah blah blah; >>> >> >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray >>> >> >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray >>> >> >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray >>> >> >>> >>> >> >>> The logs from the datanode: >>> >> >>> >>> >> >>> LOG: database system was shut down at 2014-02-13 10:19:33 EST >>> >> >>> LOG: database system is ready to accept connections >>> >> >>> LOG: autovacuum launcher started >>> >> >>> LOG: failed to find proc 0x7f06bee5ea60 in ProcArray >>> >> >>> STATEMENT: COMMIT PREPARED 'T10014' >>> >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> >> >>> LOG: failed to find proc 0x7f06bee5ea60 in ProcArray >>> >> >>> STATEMENT: COMMIT PREPARED 'T10023' >>> >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> >> >>> >>> >> >>> The datanode seems to continuously produce logs but the >>> coordinator >>> >> >>> does >>> >> >>> not. I issued CREATE NODE commands to both services but they don't >>> >> >>> seem to >>> >> >>> show up in the log. >>> >> >>> >>> >> >>> >>> >> >>> On Thu, Feb 13, 2014 at 9:49 AM, Rishi Ramraj >>> >> >>> <the...@gm...> wrote: >>> >> >>>> >>> >> >>>> I haven't been issuing commits or aborts every 30 seconds. So >>> far, >>> >> >>>> I've >>> >> >>>> only issued five commands to the cluster using psql, but I have >>> over >>> >> >>>> 100 >>> >> >>>> logs. I will try 1.1 today and let you know. >>> >> >>>> >>> >> >>>> Do you use a specific branching methodology, like gitflow? Do you >>> >> >>>> have a >>> >> >>>> bug tracking system? If you need any help with release >>> engineering, >>> >> >>>> let me >>> >> >>>> know; I don't mind volunteering some time. >>> >> >>>> >>> >> >>>> >>> >> >>>> On Thu, Feb 13, 2014 at 1:41 AM, 鈴木 幸市 <ko...@in... >>> > >>> >> >>>> wrote: >>> >> >>>>> >>> >> >>>>> GTM message is just a report. When GTM starts, it tries to >>> read if >>> >> >>>>> there is any slave connected in previous run and it didn’t find >>> one. >>> >> >>>>> >>> >> >>>>> The first message looks not harmful but could be some potential >>> >> >>>>> issues. >>> >> >>>>> Please let me look into it. The chance to have this message >>> is at >>> >> >>>>> the >>> >> >>>>> initialization of each datanode/coordinator, COMMIT, ABORT, >>> COMMIT >>> >> >>>>> PREPARED >>> >> >>>>> or ABORT PREPARED. >>> >> >>>>> >>> >> >>>>> Do you have a chance to issue them every 30 seconds? 
>>> >> >>>>> >>> >> >>>>> If possible, could you try release 1.1 and see if you have the >>> same >>> >> >>>>> issue (first message)? I think release 1.1 is better because >>> master >>> >> >>>>> is >>> >> >>>>> anyway development branch. >>> >> >>>>> >>> >> >>>>> Best; >>> >> >>>>> --- >>> >> >>>>> Koichi Suzuki >>> >> >>>>> >>> >> >>>>> 2014/02/13 14:51、Rishi Ramraj <the...@gm...> >>> のメール: >>> >> >>>>> >>> >> >>>>> > Hello All, >>> >> >>>>> > >>> >> >>>>> > I just installed postgres-xc from the git master branch on a >>> test >>> >> >>>>> > machine. All processes are running on the same box. On both >>> the >>> >> >>>>> > coordinator >>> >> >>>>> > and data processes, I'm getting the following logs about >>> every 30 >>> >> >>>>> > seconds: >>> >> >>>>> > >>> >> >>>>> > LOG: failed to find proc 0x7fd9ee703f80 in ProcArray >>> >> >>>>> > >>> >> >>>>> > On the GTM process, I'm getting the following logs at about >>> the >>> >> >>>>> > same >>> >> >>>>> > frequency: >>> >> >>>>> > >>> >> >>>>> > LOG: Any GTM standby node not found in registered node(s). >>> >> >>>>> > LOCATION: gtm_standby_connect_to_standby_int, >>> gtm_standby.c:381 >>> >> >>>>> > >>> >> >>>>> > The cluster seems to be working properly. I was able to >>> create a >>> >> >>>>> > new >>> >> >>>>> > database and a table within that database without any >>> problem. I >>> >> >>>>> > restarted >>> >> >>>>> > all services and the data was persisted, but the logs >>> persist. Any >>> >> >>>>> > idea >>> >> >>>>> > what's causing these logs? >>> >> >>>>> > >>> >> >>>>> > Thanks, >>> >> >>>>> > - Rishi >>> >> >>>>> > >>> >> >>>>> > >>> >> >>>>> > >>> ------------------------------------------------------------------------------ >>> >> >>>>> > Android apps run on BlackBerry 10 >>> >> >>>>> > Introducing the new BlackBerry 10.2.1 Runtime for Android >>> apps. >>> >> >>>>> > Now with support for Jelly Bean, Bluetooth, Mapview and more. >>> >> >>>>> > Get your Android app in front of a whole new audience. Start >>> now. >>> >> >>>>> > >>> >> >>>>> > >>> >> >>>>> > >>> http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk_______________________________________________ >>> >> >>>>> > Postgres-xc-general mailing list >>> >> >>>>> > Pos...@li... >>> >> >>>>> > >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >> >>>>> >>> >> >>>> >>> >> >>>> >>> >> >>>> >>> >> >>>> -- >>> >> >>>> Cheers, >>> >> >>>> - Rishi >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> >>> >> >>> -- >>> >> >>> Cheers, >>> >> >>> - Rishi >>> >> >>> >>> >> >>> >>> >> >> >>> >> >> >>> >> >> >>> >> >> -- >>> >> >> Cheers, >>> >> >> - Rishi >>> >> > >>> >> > >>> >> > >>> >> > >>> >> > -- >>> >> > Cheers, >>> >> > - Rishi >>> >> > >>> >> > >>> >> > >>> ------------------------------------------------------------------------------ >>> >> > Android apps run on BlackBerry 10 >>> >> > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. >>> >> > Now with support for Jelly Bean, Bluetooth, Mapview and more. >>> >> > Get your Android app in front of a whole new audience. Start now. >>> >> > >>> >> > >>> http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk >>> >> > _______________________________________________ >>> >> > Postgres-xc-general mailing list >>> >> > Pos...@li... 
>>> >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >> > >>> > >>> > >>> > >>> > >>> > -- >>> > Cheers, >>> > - Rishi >>> >> >> >> >> -- >> Cheers, >> - Rishi >> > > > > -- > Cheers, > - Rishi > -- Cheers, - Rishi |
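A rough sketch of how a trace like the one above can be captured, for anyone reproducing this; <backend_pid> is a placeholder for the long-lived datanode backend serving the coordinator's pooled connection, and procarray.c:436 is the line number quoted earlier in this thread:

    # find the datanode backend that holds the coordinator's connection
    ps aux | grep '[p]ostgres'

    # attach gdb and break where the message is logged
    gdb -p <backend_pid>
    (gdb) break procarray.c:436
    (gdb) continue
    # ... issue a statement such as "drop database test;" from psql on the coordinator ...
    (gdb) backtrace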
From: Rishi R. <the...@gm...> - 2014-02-14 04:14:12
|
Will give it a shot. Thanks for the help! On Thu, Feb 13, 2014 at 11:05 PM, Koichi Suzuki <koi...@gm...>wrote: > I'm afraid something is wrong inside but sorry I couldn't locate what > it is. Series of error message is supposed to be from COMMIT, ABORT, > COMMIT PREPARED or ABORT PREPARED. > > I'd advise to recreate the cluster with pgxc_ctl. I hope this is > better to track what is going on. You will find materials for this > from PGXC wiki page. Try www.postgres-xc.org, > > Regards; > --- > Koichi Suzuki > > > 2014-02-14 12:57 GMT+09:00 Rishi Ramraj <the...@gm...>: > > Apparently I left the datanode running, and after a while the logs were > > filled with the following: > > > > FATAL: sorry, too many clients already > > > > Here are the logs with the increased log levels. First on the > coordinator: > > > > 2014-02-13 22:24:31 EST SID 52fd8c6f.386 XID 0LOG: database system was > shut > > down at 2014-02-13 22:07:11 EST > > 2014-02-13 22:24:31 EST SID 52fd8c6e.382 XID 0LOG: database system is > ready > > to accept connections > > 2014-02-13 22:24:31 EST SID 52fd8c6f.38b XID 0LOG: autovacuum launcher > > started > > 2014-02-13 22:45:18 EST SID 52fd914a.59f XID 0LOG: statement: drop > database > > test; > > 2014-02-13 22:45:18 EST SID 52fd914a.59f XID 11571ERROR: database "test" > > does not exist > > 2014-02-13 22:45:18 EST SID 52fd914a.59f XID 11571STATEMENT: drop > database > > test; > > 2014-02-13 22:45:32 EST SID 52fd915c.5a2 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:45:33 EST SID 52fd914a.59f XID 0LOG: statement: create > > database test; > > 2014-02-13 22:45:35 EST SID 52fd914a.59f XID 11575LOG: failed to find > proc > > 0x7f0297dbca60 in ProcArray > > 2014-02-13 22:45:35 EST SID 52fd914a.59f XID 11575STATEMENT: create > > database test; > > 2014-02-13 22:46:32 EST SID 52fd9198.5b1 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:47:02 EST SID 52fd91b6.5b6 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:47:32 EST SID 52fd91d4.5bb XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:48:02 EST SID 52fd91f2.5be XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:48:32 EST SID 52fd9210.5c7 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:49:02 EST SID 52fd922e.5d1 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:49:32 EST SID 52fd924c.5d6 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:50:02 EST SID 52fd926a.5da XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:50:32 EST SID 52fd9288.5e0 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:51:02 EST SID 52fd92a6.5e3 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:51:32 EST SID 52fd92c4.5ee XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > 2014-02-13 22:52:02 EST SID 52fd92e2.5f1 XID 0LOG: failed to find proc > > 0x7f0297db9c60 in ProcArray > > > > Next, on the datanode: > > > > 2014-02-13 22:24:21 EST SID 52fd8c65.33a XID 0LOG: database system was > shut > > down at 2014-02-13 22:23:35 EST > > 2014-02-13 22:24:21 EST SID 52fd8c64.336 XID 0LOG: database system is > ready > > to accept connections > > 2014-02-13 22:24:21 EST SID 52fd8c65.33e XID 0LOG: autovacuum launcher > > started > > 2014-02-13 22:45:34 EST SID 52fd915e.5a4 XID 0LOG: statement: START > > TRANSACTION ISOLATION LEVEL read 
committed READ WRITE > > 2014-02-13 22:45:34 EST SID 52fd915e.5a4 XID 0LOG: statement: create > > database test; > > 2014-02-13 22:45:35 EST SID 52fd915e.5a4 XID 11574LOG: statement: > PREPARE > > TRANSACTION 'T11574' > > 2014-02-13 22:45:35 EST SID 52fd915e.5a4 XID 0LOG: statement: COMMIT > > PREPARED 'T11574' > > 2014-02-13 22:45:35 EST SID 52fd915e.5a4 XID 11575LOG: failed to find > proc > > 0x7f1c3de46a60 in ProcArray > > 2014-02-13 22:45:35 EST SID 52fd915e.5a4 XID 11575STATEMENT: COMMIT > > PREPARED 'T11574' > > 2014-02-13 22:46:22 EST SID 52fd918e.5ae XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > 2014-02-13 22:47:22 EST SID 52fd91ca.5b8 XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > 2014-02-13 22:48:22 EST SID 52fd9206.5c2 XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > 2014-02-13 22:49:22 EST SID 52fd9242.5d3 XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > 2014-02-13 22:50:22 EST SID 52fd927e.5dc XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > 2014-02-13 22:51:22 EST SID 52fd92ba.5e5 XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > 2014-02-13 22:52:22 EST SID 52fd92f6.5f5 XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > 2014-02-13 22:53:22 EST SID 52fd9332.5fd XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > 2014-02-13 22:54:22 EST SID 52fd936e.60d XID 0LOG: failed to find proc > > 0x7f1c3de43c60 in ProcArray > > > > Would you like me to keep experimenting on this installation, or should I > > recreate the cluster? > > > > > > On Thu, Feb 13, 2014 at 9:49 PM, Koichi Suzuki <koi...@gm...> > > wrote: > >> > >> I don't see anything strange in the configuration file. > >> > >> I found regression test sets up gtm port explicitly (value is the > >> default, 6666, though). Could you try to configure your cluster > >> with pgxc_ctl, which is far simpler and you have much less chance to > >> encounter errors? This utility comes with built-in command sequences > >> needed to configure and operate your cluster. > >> > >> Or, could you re-run the same with different log_min_message level? > >> > >> Regards; > >> --- > >> Koichi Suzuki > >> > >> > >> 2014-02-14 11:16 GMT+09:00 Rishi Ramraj <the...@gm...>: > >> > Missed the coordinator.conf file. Find attached. > >> > > >> > > >> > On Thu, Feb 13, 2014 at 9:15 PM, Rishi Ramraj > >> > <the...@gm...> > >> > wrote: > >> >> > >> >> Find all of the files attached. I have the GTM, coordinator and > >> >> datanode > >> >> services all hosted on the same Linux Mint machine for testing. They > >> >> communicate using the loopback interface. > >> >> > >> >> To initialize the cluster, I ran the following: > >> >> > >> >> /var/lib/postgres-xc/gtm $ initgtm -Z > >> >> /var/lib/postgres-xc/data $ initdb -D . --nodename data > >> >> /var/lib/postgres-xc/coord $ initdb -D . --nodename coord > >> >> > >> >> To start the cluster, I use upstart: > >> >> > >> >> /usr/local/pgsql/bin/gtm -D /var/lib/postgres-xc/gtm > >> >> /usr/local/pgsql/bin/postgres --datanode -D /var/lib/postgres-xc/data > >> >> /usr/local/pgsql/bin/postgres --coordinator -D > >> >> /var/lib/postgres-xc/coord > >> >> > >> >> I will increase the log level and try to reproduce the errors. > >> >> > >> >> > >> >> On Thu, Feb 13, 2014 at 8:39 PM, 鈴木 幸市 <ko...@in...> > >> >> wrote: > >> >>> > >> >>> Hello; > >> >>> > >> >>> You need to set log_statement GUC to appropriate value. Default is > >> >>> “none”. “all” prints all statements accepted. 
Also, it will be > a > >> >>> good > >> >>> idea to include session ID in the log_line_prefix GUC. Your > >> >>> postgresql.conf has some information how to set this. I tested > >> >>> REL1_1_STABLE with four coordinators, four datanodes and four > >> >>> gtm_proxy and > >> >>> did not have any additional log for any coordinators/datanodes. > It > >> >>> will > >> >>> be helpful what statements you issued. > >> >>> > >> >>> Here’s what I got in REL1_1_STABLE (I configured the cluster using > >> >>> pgxc_ctl). > >> >>> > >> >>> Coordinator: > >> >>> LOG: database system was shut down at 2014-02-14 10:2sd > >> >>> LOG: autovacuum launcher started > >> >>> > >> >>> Datanode: > >> >>> LOG: database system was shut down at 2014-02-14 10:21:08 JST > >> >>> LOG: database system is ready to accept connections > >> >>> LOG: autovacuum launcher started > >> >>> > >> >>> I’d like to have your configuration, each postgresql.conf file and > >> >>> what > >> >>> you’ve done to initialize, start and run your application. > >> >>> > >> >>> Best; > >> >>> --- > >> >>> Koichi Suzuki > >> >>> > >> >>> 2014/02/14 0:38、Rishi Ramraj <the...@gm...> のメール: > >> >>> > >> >>> I just tried with REL1_1_STABLE. Here are my logs from the > >> >>> coordinator: > >> >>> > >> >>> LOG: database system was shut down at 2014-02-13 10:14:16 EST > >> >>> LOG: database system is ready to accept connections > >> >>> LOG: autovacuum launcher started > >> >>> ERROR: syntax error at or near "asdf" at character 1 > >> >>> STATEMENT: asdf; > >> >>> LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray > >> >>> STATEMENT: create database test; > >> >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray > >> >>> ERROR: cannot drop the currently open database > >> >>> STATEMENT: drop database test; > >> >>> LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray > >> >>> STATEMENT: drop database test; > >> >>> ERROR: syntax error at or near "blah" at character 1 > >> >>> STATEMENT: blah blah blah; > >> >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray > >> >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray > >> >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray > >> >>> > >> >>> The logs from the datanode: > >> >>> > >> >>> LOG: database system was shut down at 2014-02-13 10:19:33 EST > >> >>> LOG: database system is ready to accept connections > >> >>> LOG: autovacuum launcher started > >> >>> LOG: failed to find proc 0x7f06bee5ea60 in ProcArray > >> >>> STATEMENT: COMMIT PREPARED 'T10014' > >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray > >> >>> LOG: failed to find proc 0x7f06bee5ea60 in ProcArray > >> >>> STATEMENT: COMMIT PREPARED 'T10023' > >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray > >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray > >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray > >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray > >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray > >> >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray > >> >>> > >> >>> The datanode seems to continuously produce logs but the coordinator > >> >>> does > >> >>> not. I issued CREATE NODE commands to both services but they don't > >> >>> seem to > >> >>> show up in the log. > >> >>> > >> >>> > >> >>> On Thu, Feb 13, 2014 at 9:49 AM, Rishi Ramraj > >> >>> <the...@gm...> wrote: > >> >>>> > >> >>>> I haven't been issuing commits or aborts every 30 seconds. 
So far, > >> >>>> I've > >> >>>> only issued five commands to the cluster using psql, but I have > over > >> >>>> 100 > >> >>>> logs. I will try 1.1 today and let you know. > >> >>>> > >> >>>> Do you use a specific branching methodology, like gitflow? Do you > >> >>>> have a > >> >>>> bug tracking system? If you need any help with release engineering, > >> >>>> let me > >> >>>> know; I don't mind volunteering some time. > >> >>>> > >> >>>> > >> >>>> On Thu, Feb 13, 2014 at 1:41 AM, 鈴木 幸市 <ko...@in...> > >> >>>> wrote: > >> >>>>> > >> >>>>> GTM message is just a report. When GTM starts, it tries to read > if > >> >>>>> there is any slave connected in previous run and it didn’t find > one. > >> >>>>> > >> >>>>> The first message looks not harmful but could be some potential > >> >>>>> issues. > >> >>>>> Please let me look into it. The chance to have this message is > at > >> >>>>> the > >> >>>>> initialization of each datanode/coordinator, COMMIT, ABORT, COMMIT > >> >>>>> PREPARED > >> >>>>> or ABORT PREPARED. > >> >>>>> > >> >>>>> Do you have a chance to issue them every 30 seconds? > >> >>>>> > >> >>>>> If possible, could you try release 1.1 and see if you have the > same > >> >>>>> issue (first message)? I think release 1.1 is better because > master > >> >>>>> is > >> >>>>> anyway development branch. > >> >>>>> > >> >>>>> Best; > >> >>>>> --- > >> >>>>> Koichi Suzuki > >> >>>>> > >> >>>>> 2014/02/13 14:51、Rishi Ramraj <the...@gm...> のメール: > >> >>>>> > >> >>>>> > Hello All, > >> >>>>> > > >> >>>>> > I just installed postgres-xc from the git master branch on a > test > >> >>>>> > machine. All processes are running on the same box. On both the > >> >>>>> > coordinator > >> >>>>> > and data processes, I'm getting the following logs about every > 30 > >> >>>>> > seconds: > >> >>>>> > > >> >>>>> > LOG: failed to find proc 0x7fd9ee703f80 in ProcArray > >> >>>>> > > >> >>>>> > On the GTM process, I'm getting the following logs at about the > >> >>>>> > same > >> >>>>> > frequency: > >> >>>>> > > >> >>>>> > LOG: Any GTM standby node not found in registered node(s). > >> >>>>> > LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381 > >> >>>>> > > >> >>>>> > The cluster seems to be working properly. I was able to create a > >> >>>>> > new > >> >>>>> > database and a table within that database without any problem. I > >> >>>>> > restarted > >> >>>>> > all services and the data was persisted, but the logs persist. > Any > >> >>>>> > idea > >> >>>>> > what's causing these logs? > >> >>>>> > > >> >>>>> > Thanks, > >> >>>>> > - Rishi > >> >>>>> > > >> >>>>> > > >> >>>>> > > ------------------------------------------------------------------------------ > >> >>>>> > Android apps run on BlackBerry 10 > >> >>>>> > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. > >> >>>>> > Now with support for Jelly Bean, Bluetooth, Mapview and more. > >> >>>>> > Get your Android app in front of a whole new audience. Start > now. > >> >>>>> > > >> >>>>> > > >> >>>>> > > http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk_______________________________________________ > >> >>>>> > Postgres-xc-general mailing list > >> >>>>> > Pos...@li... 
> >> >>>>> > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > >> >>>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> -- > >> >>>> Cheers, > >> >>>> - Rishi > >> >>> > >> >>> > >> >>> > >> >>> > >> >>> -- > >> >>> Cheers, > >> >>> - Rishi > >> >>> > >> >>> > >> >> > >> >> > >> >> > >> >> -- > >> >> Cheers, > >> >> - Rishi > >> > > >> > > >> > > >> > > >> > -- > >> > Cheers, > >> > - Rishi > >> > > >> > > >> > > ------------------------------------------------------------------------------ > >> > Android apps run on BlackBerry 10 > >> > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. > >> > Now with support for Jelly Bean, Bluetooth, Mapview and more. > >> > Get your Android app in front of a whole new audience. Start now. > >> > > >> > > http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk > >> > _______________________________________________ > >> > Postgres-xc-general mailing list > >> > Pos...@li... > >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > >> > > > > > > > > > > > -- > > Cheers, > > - Rishi > -- Cheers, - Rishi |
From: Koichi S. <koi...@gm...> - 2014-02-14 02:49:19
|
I don't see anything strange in the configuration file. I found regression test sets up gtm port explicitly (value is the default, 6666, though). Could you try to configure your cluster with pgxc_ctl, which is far simpler and you have much less chance to encounter errors? This utility comes with built-in command sequences needed to configure and operate your cluster. Or, could you re-run the same with different log_min_message level? Regards; --- Koichi Suzuki 2014-02-14 11:16 GMT+09:00 Rishi Ramraj <the...@gm...>: > Missed the coordinator.conf file. Find attached. > > > On Thu, Feb 13, 2014 at 9:15 PM, Rishi Ramraj <the...@gm...> > wrote: >> >> Find all of the files attached. I have the GTM, coordinator and datanode >> services all hosted on the same Linux Mint machine for testing. They >> communicate using the loopback interface. >> >> To initialize the cluster, I ran the following: >> >> /var/lib/postgres-xc/gtm $ initgtm -Z >> /var/lib/postgres-xc/data $ initdb -D . --nodename data >> /var/lib/postgres-xc/coord $ initdb -D . --nodename coord >> >> To start the cluster, I use upstart: >> >> /usr/local/pgsql/bin/gtm -D /var/lib/postgres-xc/gtm >> /usr/local/pgsql/bin/postgres --datanode -D /var/lib/postgres-xc/data >> /usr/local/pgsql/bin/postgres --coordinator -D /var/lib/postgres-xc/coord >> >> I will increase the log level and try to reproduce the errors. >> >> >> On Thu, Feb 13, 2014 at 8:39 PM, 鈴木 幸市 <ko...@in...> wrote: >>> >>> Hello; >>> >>> You need to set log_statement GUC to appropriate value. Default is >>> “none”. “all” prints all statements accepted. Also, it will be a good >>> idea to include session ID in the log_line_prefix GUC. Your >>> postgresql.conf has some information how to set this. I tested >>> REL1_1_STABLE with four coordinators, four datanodes and four gtm_proxy and >>> did not have any additional log for any coordinators/datanodes. It will >>> be helpful what statements you issued. >>> >>> Here’s what I got in REL1_1_STABLE (I configured the cluster using >>> pgxc_ctl). >>> >>> Coordinator: >>> LOG: database system was shut down at 2014-02-14 10:2sd >>> LOG: autovacuum launcher started >>> >>> Datanode: >>> LOG: database system was shut down at 2014-02-14 10:21:08 JST >>> LOG: database system is ready to accept connections >>> LOG: autovacuum launcher started >>> >>> I’d like to have your configuration, each postgresql.conf file and what >>> you’ve done to initialize, start and run your application. >>> >>> Best; >>> --- >>> Koichi Suzuki >>> >>> 2014/02/14 0:38、Rishi Ramraj <the...@gm...> のメール: >>> >>> I just tried with REL1_1_STABLE. 
Here are my logs from the coordinator: >>> >>> LOG: database system was shut down at 2014-02-13 10:14:16 EST >>> LOG: database system is ready to accept connections >>> LOG: autovacuum launcher started >>> ERROR: syntax error at or near "asdf" at character 1 >>> STATEMENT: asdf; >>> LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray >>> STATEMENT: create database test; >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray >>> ERROR: cannot drop the currently open database >>> STATEMENT: drop database test; >>> LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray >>> STATEMENT: drop database test; >>> ERROR: syntax error at or near "blah" at character 1 >>> STATEMENT: blah blah blah; >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray >>> LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray >>> >>> The logs from the datanode: >>> >>> LOG: database system was shut down at 2014-02-13 10:19:33 EST >>> LOG: database system is ready to accept connections >>> LOG: autovacuum launcher started >>> LOG: failed to find proc 0x7f06bee5ea60 in ProcArray >>> STATEMENT: COMMIT PREPARED 'T10014' >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> LOG: failed to find proc 0x7f06bee5ea60 in ProcArray >>> STATEMENT: COMMIT PREPARED 'T10023' >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> LOG: failed to find proc 0x7f06bee5bc60 in ProcArray >>> >>> The datanode seems to continuously produce logs but the coordinator does >>> not. I issued CREATE NODE commands to both services but they don't seem to >>> show up in the log. >>> >>> >>> On Thu, Feb 13, 2014 at 9:49 AM, Rishi Ramraj >>> <the...@gm...> wrote: >>>> >>>> I haven't been issuing commits or aborts every 30 seconds. So far, I've >>>> only issued five commands to the cluster using psql, but I have over 100 >>>> logs. I will try 1.1 today and let you know. >>>> >>>> Do you use a specific branching methodology, like gitflow? Do you have a >>>> bug tracking system? If you need any help with release engineering, let me >>>> know; I don't mind volunteering some time. >>>> >>>> >>>> On Thu, Feb 13, 2014 at 1:41 AM, 鈴木 幸市 <ko...@in...> wrote: >>>>> >>>>> GTM message is just a report. When GTM starts, it tries to read if >>>>> there is any slave connected in previous run and it didn’t find one. >>>>> >>>>> The first message looks not harmful but could be some potential issues. >>>>> Please let me look into it. The chance to have this message is at the >>>>> initialization of each datanode/coordinator, COMMIT, ABORT, COMMIT PREPARED >>>>> or ABORT PREPARED. >>>>> >>>>> Do you have a chance to issue them every 30 seconds? >>>>> >>>>> If possible, could you try release 1.1 and see if you have the same >>>>> issue (first message)? I think release 1.1 is better because master is >>>>> anyway development branch. >>>>> >>>>> Best; >>>>> --- >>>>> Koichi Suzuki >>>>> >>>>> 2014/02/13 14:51、Rishi Ramraj <the...@gm...> のメール: >>>>> >>>>> > Hello All, >>>>> > >>>>> > I just installed postgres-xc from the git master branch on a test >>>>> > machine. All processes are running on the same box. 
On both the coordinator >>>>> > and data processes, I'm getting the following logs about every 30 seconds: >>>>> > >>>>> > LOG: failed to find proc 0x7fd9ee703f80 in ProcArray >>>>> > >>>>> > On the GTM process, I'm getting the following logs at about the same >>>>> > frequency: >>>>> > >>>>> > LOG: Any GTM standby node not found in registered node(s). >>>>> > LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381 >>>>> > >>>>> > The cluster seems to be working properly. I was able to create a new >>>>> > database and a table within that database without any problem. I restarted >>>>> > all services and the data was persisted, but the logs persist. Any idea >>>>> > what's causing these logs? >>>>> > >>>>> > Thanks, >>>>> > - Rishi >>>>> > >>>>> > ------------------------------------------------------------------------------ >>>>> > Android apps run on BlackBerry 10 >>>>> > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. >>>>> > Now with support for Jelly Bean, Bluetooth, Mapview and more. >>>>> > Get your Android app in front of a whole new audience. Start now. >>>>> > >>>>> > http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk_______________________________________________ >>>>> > Postgres-xc-general mailing list >>>>> > Pos...@li... >>>>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> >>>> >>>> >>>> >>>> -- >>>> Cheers, >>>> - Rishi >>> >>> >>> >>> >>> -- >>> Cheers, >>> - Rishi >>> >>> >> >> >> >> -- >> Cheers, >> - Rishi > > > > > -- > Cheers, > - Rishi > > ------------------------------------------------------------------------------ > Android apps run on BlackBerry 10 > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. > Now with support for Jelly Bean, Bluetooth, Mapview and more. > Get your Android app in front of a whole new audience. Start now. > http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
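For reference, driving a test cluster with pgxc_ctl typically comes down to a handful of commands. The sketch below assumes the default configuration file location ($HOME/pgxc_ctl/pgxc_ctl.conf); the exact sub-command names can vary slightly between releases, so check the pgxc_ctl documentation for the version in use:

    $ pgxc_ctl prepare                   # write a template pgxc_ctl.conf to edit
    $ vi $HOME/pgxc_ctl/pgxc_ctl.conf    # describe GTM, gtm_proxies, coordinators and datanodes here
    $ pgxc_ctl init all                  # runs initgtm/initdb and registers the nodes for you
    $ pgxc_ctl start all
    $ pgxc_ctl monitor all               # confirm every component is actually running

Stopping is the mirror image: pgxc_ctl stop all shuts the components down in the right order.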
From: 鈴木 幸市 <ko...@in...> - 2014-02-14 01:39:29
|
Hello; You need to set log_statement GUC to appropriate value. Default is “none”. “all” prints all statements accepted. Also, it will be a good idea to include session ID in the log_line_prefix GUC. Your postgresql.conf has some information how to set this. I tested REL1_1_STABLE with four coordinators, four datanodes and four gtm_proxy and did not have any additional log for any coordinators/datanodes. It will be helpful what statements you issued. Here’s what I got in REL1_1_STABLE (I configured the cluster using pgxc_ctl). Coordinator: LOG: database system was shut down at 2014-02-14 10:2sd LOG: autovacuum launcher started Datanode: LOG: database system was shut down at 2014-02-14 10:21:08 JST LOG: database system is ready to accept connections LOG: autovacuum launcher started I’d like to have your configuration, each postgresql.conf file and what you’ve done to initialize, start and run your application. Best; --- Koichi Suzuki 2014/02/14 0:38、Rishi Ramraj <the...@gm...<mailto:the...@gm...>> のメール: I just tried with REL1_1_STABLE. Here are my logs from the coordinator: LOG: database system was shut down at 2014-02-13 10:14:16 EST LOG: database system is ready to accept connections LOG: autovacuum launcher started ERROR: syntax error at or near "asdf" at character 1 STATEMENT: asdf; LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray STATEMENT: create database test; LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray ERROR: cannot drop the currently open database STATEMENT: drop database test; LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray STATEMENT: drop database test; ERROR: syntax error at or near "blah" at character 1 STATEMENT: blah blah blah; LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray The logs from the datanode: LOG: database system was shut down at 2014-02-13 10:19:33 EST LOG: database system is ready to accept connections LOG: autovacuum launcher started LOG: failed to find proc 0x7f06bee5ea60 in ProcArray STATEMENT: COMMIT PREPARED 'T10014' LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5ea60 in ProcArray STATEMENT: COMMIT PREPARED 'T10023' LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray The datanode seems to continuously produce logs but the coordinator does not. I issued CREATE NODE commands to both services but they don't seem to show up in the log. On Thu, Feb 13, 2014 at 9:49 AM, Rishi Ramraj <the...@gm...<mailto:the...@gm...>> wrote: I haven't been issuing commits or aborts every 30 seconds. So far, I've only issued five commands to the cluster using psql, but I have over 100 logs. I will try 1.1 today and let you know. Do you use a specific branching methodology, like gitflow<http://nvie.com/posts/a-successful-git-branching-model/>? Do you have a bug tracking system? If you need any help with release engineering, let me know; I don't mind volunteering some time. On Thu, Feb 13, 2014 at 1:41 AM, 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> wrote: GTM message is just a report. When GTM starts, it tries to read if there is any slave connected in previous run and it didn’t find one. 
The first message looks not harmful but could be some potential issues. Please let me look into it. The chance to have this message is at the initialization of each datanode/coordinator, COMMIT, ABORT, COMMIT PREPARED or ABORT PREPARED. Do you have a chance to issue them every 30 seconds? If possible, could you try release 1.1 and see if you have the same issue (first message)? I think release 1.1 is better because master is anyway development branch. Best; --- Koichi Suzuki 2014/02/13 14:51、Rishi Ramraj <the...@gm...<mailto:the...@gm...>> のメール: > Hello All, > > I just installed postgres-xc from the git master branch on a test machine. All processes are running on the same box. On both the coordinator and data processes, I'm getting the following logs about every 30 seconds: > > LOG: failed to find proc 0x7fd9ee703f80 in ProcArray > > On the GTM process, I'm getting the following logs at about the same frequency: > > LOG: Any GTM standby node not found in registered node(s). > LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381 > > The cluster seems to be working properly. I was able to create a new database and a table within that database without any problem. I restarted all services and the data was persisted, but the logs persist. Any idea what's causing these logs? > > Thanks, > - Rishi > ------------------------------------------------------------------------------ > Android apps run on BlackBerry 10 > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. > Now with support for Jelly Bean, Bluetooth, Mapview and more. > Get your Android app in front of a whole new audience. Start now. > http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk_______________________________________________ > Postgres-xc-general mailing list > Pos...@li...<mailto:Pos...@li...> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- Cheers, - Rishi -- Cheers, - Rishi |
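As a concrete example of the logging settings discussed above, the relevant lines in each coordinator's and datanode's postgresql.conf could look like this (the values are only illustrations):

    log_statement = 'all'              # default is 'none'; 'all' records every accepted statement
    log_line_prefix = '%t [%p] %c '    # timestamp, backend PID and session ID on every log line
    log_min_messages = warning         # temporarily set to a debug level for more detail

All three can be picked up with a reload (pg_ctl reload), no restart needed.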
From: Rishi R. <the...@gm...> - 2014-02-13 15:38:28
|
I just tried with REL1_1_STABLE. Here are my logs from the coordinator: LOG: database system was shut down at 2014-02-13 10:14:16 EST LOG: database system is ready to accept connections LOG: autovacuum launcher started ERROR: syntax error at or near "asdf" at character 1 STATEMENT: asdf; LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray STATEMENT: create database test; LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray ERROR: cannot drop the currently open database STATEMENT: drop database test; LOG: failed to find proc 0x7f7d4a79ea60 in ProcArray STATEMENT: drop database test; ERROR: syntax error at or near "blah" at character 1 STATEMENT: blah blah blah; LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray LOG: failed to find proc 0x7f7d4a79bc60 in ProcArray The logs from the datanode: LOG: database system was shut down at 2014-02-13 10:19:33 EST LOG: database system is ready to accept connections LOG: autovacuum launcher started LOG: failed to find proc 0x7f06bee5ea60 in ProcArray STATEMENT: COMMIT PREPARED 'T10014' LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5ea60 in ProcArray STATEMENT: COMMIT PREPARED 'T10023' LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray LOG: failed to find proc 0x7f06bee5bc60 in ProcArray The datanode seems to continuously produce logs but the coordinator does not. I issued CREATE NODE commands to both services but they don't seem to show up in the log. On Thu, Feb 13, 2014 at 9:49 AM, Rishi Ramraj <the...@gm...>wrote: > I haven't been issuing commits or aborts every 30 seconds. So far, I've > only issued five commands to the cluster using psql, but I have over 100 > logs. I will try 1.1 today and let you know. > > Do you use a specific branching methodology, like gitflow<http://nvie.com/posts/a-successful-git-branching-model/>? > Do you have a bug tracking system? If you need any help with release > engineering, let me know; I don't mind volunteering some time. > > > On Thu, Feb 13, 2014 at 1:41 AM, 鈴木 幸市 <ko...@in...> wrote: > >> GTM message is just a report. When GTM starts, it tries to read if >> there is any slave connected in previous run and it didn’t find one. >> >> The first message looks not harmful but could be some potential issues. >> Please let me look into it. The chance to have this message is at the >> initialization of each datanode/coordinator, COMMIT, ABORT, COMMIT PREPARED >> or ABORT PREPARED. >> >> Do you have a chance to issue them every 30 seconds? >> >> If possible, could you try release 1.1 and see if you have the same issue >> (first message)? I think release 1.1 is better because master is anyway >> development branch. >> >> Best; >> --- >> Koichi Suzuki >> >> 2014/02/13 14:51、Rishi Ramraj <the...@gm...> のメール: >> >> > Hello All, >> > >> > I just installed postgres-xc from the git master branch on a test >> machine. All processes are running on the same box. On both the coordinator >> and data processes, I'm getting the following logs about every 30 seconds: >> > >> > LOG: failed to find proc 0x7fd9ee703f80 in ProcArray >> > >> > On the GTM process, I'm getting the following logs at about the same >> frequency: >> > >> > LOG: Any GTM standby node not found in registered node(s). 
>> > LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381 >> > >> > The cluster seems to be working properly. I was able to create a new >> database and a table within that database without any problem. I restarted >> all services and the data was persisted, but the logs persist. Any idea >> what's causing these logs? >> > >> > Thanks, >> > - Rishi >> > >> ------------------------------------------------------------------------------ >> > Android apps run on BlackBerry 10 >> > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. >> > Now with support for Jelly Bean, Bluetooth, Mapview and more. >> > Get your Android app in front of a whole new audience. Start now. >> > >> http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk_______________________________________________ >> > Postgres-xc-general mailing list >> > Pos...@li... >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > Cheers, > - Rishi > -- Cheers, - Rishi |
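For completeness, registering nodes on a freshly initialized coordinator generally looks like the following; the node name matches the --nodename given to initdb, while the host and port here are placeholders for this particular setup:

    -- run through psql on the coordinator
    CREATE NODE data WITH (TYPE = 'datanode', HOST = 'localhost', PORT = 15432);
    SELECT pgxc_pool_reload();                                        -- let the pooler pick up the new definition
    SELECT node_name, node_type, node_host, node_port FROM pgxc_node; -- verify what is registered

With log_statement = 'all' on the coordinator, the CREATE NODE statement should then also appear in its log.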
From: Rishi R. <the...@gm...> - 2014-02-13 14:49:30
|
I haven't been issuing commits or aborts every 30 seconds. So far, I've only issued five commands to the cluster using psql, but I have over 100 logs. I will try 1.1 today and let you know. Do you use a specific branching methodology, like gitflow<http://nvie.com/posts/a-successful-git-branching-model/>? Do you have a bug tracking system? If you need any help with release engineering, let me know; I don't mind volunteering some time. On Thu, Feb 13, 2014 at 1:41 AM, 鈴木 幸市 <ko...@in...> wrote: > GTM message is just a report. When GTM starts, it tries to read if there > is any slave connected in previous run and it didn’t find one. > > The first message looks not harmful but could be some potential issues. > Please let me look into it. The chance to have this message is at the > initialization of each datanode/coordinator, COMMIT, ABORT, COMMIT PREPARED > or ABORT PREPARED. > > Do you have a chance to issue them every 30 seconds? > > If possible, could you try release 1.1 and see if you have the same issue > (first message)? I think release 1.1 is better because master is anyway > development branch. > > Best; > --- > Koichi Suzuki > > 2014/02/13 14:51、Rishi Ramraj <the...@gm...> のメール: > > > Hello All, > > > > I just installed postgres-xc from the git master branch on a test > machine. All processes are running on the same box. On both the coordinator > and data processes, I'm getting the following logs about every 30 seconds: > > > > LOG: failed to find proc 0x7fd9ee703f80 in ProcArray > > > > On the GTM process, I'm getting the following logs at about the same > frequency: > > > > LOG: Any GTM standby node not found in registered node(s). > > LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381 > > > > The cluster seems to be working properly. I was able to create a new > database and a table within that database without any problem. I restarted > all services and the data was persisted, but the logs persist. Any idea > what's causing these logs? > > > > Thanks, > > - Rishi > > > ------------------------------------------------------------------------------ > > Android apps run on BlackBerry 10 > > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. > > Now with support for Jelly Bean, Bluetooth, Mapview and more. > > Get your Android app in front of a whole new audience. Start now. > > > http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk_______________________________________________ > > Postgres-xc-general mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Cheers, - Rishi |
From: 鈴木 幸市 <ko...@in...> - 2014-02-13 06:41:53
|
GTM message is just a report. When GTM starts, it tries to read if there is any slave connected in previous run and it didn’t find one. The first message looks not harmful but could be some potential issues. Please let me look into it. The chance to have this message is at the initialization of each datanode/coordinator, COMMIT, ABORT, COMMIT PREPARED or ABORT PREPARED. Do you have a chance to issue them every 30 seconds? If possible, could you try release 1.1 and see if you have the same issue (first message)? I think release 1.1 is better because master is anyway development branch. Best; --- Koichi Suzuki 2014/02/13 14:51、Rishi Ramraj <the...@gm...> のメール: > Hello All, > > I just installed postgres-xc from the git master branch on a test machine. All processes are running on the same box. On both the coordinator and data processes, I'm getting the following logs about every 30 seconds: > > LOG: failed to find proc 0x7fd9ee703f80 in ProcArray > > On the GTM process, I'm getting the following logs at about the same frequency: > > LOG: Any GTM standby node not found in registered node(s). > LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381 > > The cluster seems to be working properly. I was able to create a new database and a table within that database without any problem. I restarted all services and the data was persisted, but the logs persist. Any idea what's causing these logs? > > Thanks, > - Rishi > ------------------------------------------------------------------------------ > Android apps run on BlackBerry 10 > Introducing the new BlackBerry 10.2.1 Runtime for Android apps. > Now with support for Jelly Bean, Bluetooth, Mapview and more. > Get your Android app in front of a whole new audience. Start now. > http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk_______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Rishi R. <the...@gm...> - 2014-02-13 06:13:14
|
I also noticed that if I kill the GTM, my coordinator remains polling for the GTM and doesn't respond to SIGTERM. The datanode in contrast, responds properly. Once I restart the GTM and the coordinator reconnects, it then responds to the SIGTERM message and stops the service. If the GTM doesn't come back online, I am forced to SIGKILL the service. Is this normal behavior? On Thu, Feb 13, 2014 at 12:51 AM, Rishi Ramraj <the...@gm...>wrote: > Hello All, > > I just installed postgres-xc from the git master branch on a test machine. > All processes are running on the same box. On both the coordinator and data > processes, I'm getting the following logs about every 30 seconds: > > LOG: failed to find proc 0x7fd9ee703f80 in ProcArray > > On the GTM process, I'm getting the following logs at about the same > frequency: > > LOG: Any GTM standby node not found in registered node(s). > LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381 > > The cluster seems to be working properly. I was able to create a new > database and a table within that database without any problem. I restarted > all services and the data was persisted, but the logs persist. Any idea > what's causing these logs? > > Thanks, > - Rishi > -- Cheers, - Rishi |
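For what it's worth, the components can also be started and stopped through the bundled control programs instead of raw signals; a rough sketch using the data directories from this setup (the option spellings, in particular -Z, should be checked against the gtm_ctl and pg_ctl reference pages of the release in use):

    $ gtm_ctl start -Z gtm -D /var/lib/postgres-xc/gtm
    $ pg_ctl start -Z datanode    -D /var/lib/postgres-xc/data
    $ pg_ctl start -Z coordinator -D /var/lib/postgres-xc/coord

    $ pg_ctl stop  -Z coordinator -D /var/lib/postgres-xc/coord -m fast
    $ pg_ctl stop  -Z datanode    -D /var/lib/postgres-xc/data  -m fast
    $ gtm_ctl stop -Z gtm -D /var/lib/postgres-xc/gtm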
From: Rishi R. <the...@gm...> - 2014-02-13 05:51:53
|
Hello All, I just installed postgres-xc from the git master branch on a test machine. All processes are running on the same box. On both the coordinator and data processes, I'm getting the following logs about every 30 seconds: LOG: failed to find proc 0x7fd9ee703f80 in ProcArray On the GTM process, I'm getting the following logs at about the same frequency: LOG: Any GTM standby node not found in registered node(s). LOCATION: gtm_standby_connect_to_standby_int, gtm_standby.c:381 The cluster seems to be working properly. I was able to create a new database and a table within that database without any problem. I restarted all services and the data was persisted, but the logs persist. Any idea what's causing these logs? Thanks, - Rishi |
From: Thomas P. <vip...@gm...> - 2014-02-11 14:27:22
|
Thank you for both of your responses, I think I was replying improperly. I will take a look at the table distributions. Warm regards, On Mon, Feb 10, 2014 at 11:44 PM, Ashutosh Bapat < ash...@en...> wrote: > Hi Thomas, > > Mason has already provided you with a blog entry, that should be useful in > setting up Django. I will explain the particular error you mentioned in > your mail, > > On Tue, Feb 11, 2014 at 3:49 AM, Mason Sharp <ms...@tr...>wrote: > >> >> >> >> On Mon, Feb 10, 2014 at 4:12 PM, Thomas Perry <vip...@gm...> wrote: >> >>> Good afternoon all, >>> >>> I was curious if there was anything special, possibly simple, that one >>> would need to do in order to get Django to work with postgres-xc? >>> >>> I have successfully used Django with Postgresql, and other databases, >>> but the nature of postgres-xc points to a fairly different approach when >>> 'syncdb' is used. >>> >>> Specifically I received this error when utilizing Django's 'syncdb' >>> >>> "django.db.utils.NotSupportedError: Cannot create index whose evaluation >>> cannot be enforced to remote nodes" >>> >>> > This error comes when you are trying to create an index which can be > enforced on a single datanode. An example is unique index on a distributed > table whose distribution column is not in the unique key. For enforcing > such a constraint, we have to check for data from multiple nodes, which is > currently not supported in XC. So, I think, you will need to modify the > table distributions accordingly. > > >> Obviously Django is, erroneously, trying to write to the Coordinator as >>> though it were a database that should be written to. >>> >>> Has anyone had any experience with communicating with postgres-xc from >>> Django? I suspect there isn't much that needs to be done, but I cannot find >>> much information on the topic. >>> >> >> Nikhil Sontakke blogged about this here: >> >> >> http://www.stormdb.com/content/getting-django-work-postgres-xcstormdb?destination=node%2F949 >> >> Regards, >> >> >> -- >> Mason Sharp >> >> TransLattice - http://www.translattice.com >> Distributed and Clustered Database Solutions >> >> >> >> >> ------------------------------------------------------------------------------ >> Android apps run on BlackBerry 10 >> Introducing the new BlackBerry 10.2.1 Runtime for Android apps. >> Now with support for Jelly Bean, Bluetooth, Mapview and more. >> Get your Android app in front of a whole new audience. Start now. >> >> http://pubads.g.doubleclick.net/gampad/clk?id=124407151&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EnterpriseDB Corporation > The Postgres Database Company > |
From: Ashutosh B. <ash...@en...> - 2014-02-11 04:44:41
|
Hi Thomas,

Mason has already provided you with a blog entry that should be useful in setting up Django. I will explain the particular error you mentioned in your mail.

On Tue, Feb 11, 2014 at 3:49 AM, Mason Sharp <ms...@tr...> wrote:
> On Mon, Feb 10, 2014 at 4:12 PM, Thomas Perry <vip...@gm...> wrote:
>> Good afternoon all,
>>
>> I was curious if there was anything special, possibly simple, that one would need to do in order to get Django to work with postgres-xc?
>>
>> I have successfully used Django with Postgresql, and other databases, but the nature of postgres-xc points to a fairly different approach when 'syncdb' is used.
>>
>> Specifically I received this error when utilizing Django's 'syncdb'
>>
>> "django.db.utils.NotSupportedError: Cannot create index whose evaluation cannot be enforced to remote nodes"
>>

This error comes when you are trying to create an index which cannot be enforced on a single datanode. An example is a unique index on a distributed table whose distribution column is not in the unique key. For enforcing such a constraint, we would have to check data from multiple nodes, which is currently not supported in XC. So, I think, you will need to modify the table distributions accordingly.

>> Obviously Django is, erroneously, trying to write to the Coordinator as though it were a database that should be written to.
>>
>> Has anyone had any experience with communicating with postgres-xc from Django? I suspect there isn't much that needs to be done, but I cannot find much information on the topic.
>>
> Nikhil Sontakke blogged about this here:
>
> http://www.stormdb.com/content/getting-django-work-postgres-xcstormdb?destination=node%2F949
>
> Regards,
>
> --
> Mason Sharp
>
> TransLattice - http://www.translattice.com
> Distributed and Clustered Database Solutions

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company |
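To make the condition concrete, a minimal illustration with made-up tables (not Thomas's actual schema):

    -- Rejected by XC: rows are hashed on "id", so uniqueness of "email"
    -- cannot be checked inside any single datanode. This is the kind of
    -- index the error above complains about.
    CREATE TABLE account_bad (id int, email text UNIQUE) DISTRIBUTE BY HASH(id);

    -- Fine: the unique key is the distribution column, so every datanode
    -- can enforce it locally.
    CREATE TABLE account_hash (id int PRIMARY KEY, email text) DISTRIBUTE BY HASH(id);

    -- Also fine: a replicated table holds a full copy on each datanode,
    -- so any unique constraint can be checked locally.
    CREATE TABLE account_repl (id int, email text UNIQUE) DISTRIBUTE BY REPLICATION;

For Django this usually means choosing the distribution column so that it is part of every unique constraint, or replicating the affected tables.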
From: Mason S. <ms...@tr...> - 2014-02-10 22:25:30
|
On Mon, Feb 10, 2014 at 4:12 PM, Thomas Perry <vip...@gm...> wrote: > Good afternoon all, > > I was curious if there was anything special, possibly simple, that one > would need to do in order to get Django to work with postgres-xc? > > I have successfully used Django with Postgresql, and other databases, but > the nature of postgres-xc points to a fairly different approach when > 'syncdb' is used. > > Specifically I received this error when utilizing Django's 'syncdb' > > "django.db.utils.NotSupportedError: Cannot create index whose evaluation > cannot be enforced to remote nodes" > > Obviously Django is, erroneously, trying to write to the Coordinator as > though it were a database that should be written to. > > Has anyone had any experience with communicating with postgres-xc from > Django? I suspect there isn't much that needs to be done, but I cannot find > much information on the topic. > Nikhil Sontakke blogged about this here: http://www.stormdb.com/content/getting-django-work-postgres-xcstormdb?destination=node%2F949 Regards, -- Mason Sharp TransLattice - http://www.translattice.com Distributed and Clustered Database Solutions |
From: Thomas P. <vip...@gm...> - 2014-02-10 21:13:02
|
Good afternoon all, I was curious if there was anything special, possibly simple, that one would need to do in order to get Django to work with postgres-xc? I have successfully used Django with Postgresql, and other databases, but the nature of postgres-xc points to a fairly different approach when 'syncdb' is used. Specifically I received this error when utilizing Django's 'syncdb' "django.db.utils.NotSupportedError: Cannot create index whose evaluation cannot be enforced to remote nodes" Obviously Django is, erroneously, trying to write to the Coordinator as though it were a database that should be written to. Has anyone had any experience with communicating with postgres-xc from Django? I suspect there isn't much that needs to be done, but I cannot find much information on the topic. Warm regards, Thomas |
From: 鈴木 幸市 <ko...@in...> - 2014-02-05 01:32:38
|
Autovacuum is separate from these checkpoint setups. Because checkpoint is launched almost every five seconds, it is very likely that active WAL file goes full very easily with very heavy updates (including CREATE INDEX). Unless you change this parameter, checkpoint_timeout setting will not work. When WAL file is full, checkpoint will be launched and no database update will be allowed unless you have a room to write to WAL files. Regards; --- Koichi Suzuki 2014/02/05 9:20、Sandeep Gupta <gup...@gm...<mailto:gup...@gm...>> のメール: Hi Koichi, Thank you suggesting these parameters. Initially we did play around these. However, we used significantly higher values such checkpoint_timeout=30mins etc. Essentially we were trying parameters so as to avoid interference from autovaccum in the first place. The reason was using low values was to recreate the problem in the test setup. I did the regression tests with the new setting it is certainly better. It does crash but not so often. I will try to use in the application and see if runs in the main application. Also, I am running the same tests over a standalone PG (9.3 version I believe) and so far it has crashed. I haven't been too careful to make sure to use the exact same values for checkpoint parameters. Next email I will attach log files for review. Thanks. Sandeep On Tue, Feb 4, 2014 at 8:34 AM, Koichi Suzuki <koi...@gm...<mailto:koi...@gm...>> wrote: I looked at the log at datanode and found checkpoint is running too frequently. Default checkpoint timeout is 5min. In your case, checkpoint runs almost every five seconds (not minutes) in each datanode. It is extraordinary. Could you try to tweak each datanode's postgresql.conf as follows? 1. Longer period for checkpoint_timeout. Default is 5min. 15min. will be okay. 2. Larger value for checkpoint_completion_target. Default is 0.5. It should be okay. Larger value, such as 0.7, will make make checkpoint work more smoothly. 3. Larger value of checkpoint_segment. Default is 3. Because your application updates the database very frequently, this number of checkpoint segment will be exhausted very easily. Increase this to, say, 30 or more. Each checkpoint_segment (in fact, WAL file) consumes 16MB of your file space. I hope this is no problem to you at all. I'm afraid too frequent checkpoint causes this kind of error (even with vanilla PostgreSQL) and this situation is what you should avoid both in PG and XC. Would like to know if things are improved. Best; --- Koichi Suzuki 2014-02-04 Sandeep Gupta <gup...@gm...<mailto:gup...@gm...>>: > Hi Koichi, > > Just wanted to add that I have send across the datanode and coordinator log > files in my previous email. My hope is that it may give some insights into > what could be amiss and any ideas for workaround. > > > Thanks. > Sandeep > ------------------------------------------------------------------------------ Managing the Performance of Cloud-Based Applications Take advantage of what the Cloud has to offer - Avoid Common Pitfalls. Read the Whitepaper. http://pubads.g.doubleclick.net/gampad/clk?id=121051231&iu=/4140/ostg.clktrk_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Sandeep G. <gup...@gm...> - 2014-02-05 00:20:59
|
LOG: 00000: database system was shut down at 2014-02-04 13:31:47 EST LOCATION: StartupXLOG, xlog.c:6348 LOG: 00000: database system is ready to accept connections LOCATION: reaper, postmaster.c:2560 LOG: 00000: autovacuum launcher started LOCATION: AutoVacLauncherMain, autovacuum.c:407 WARNING: 25P01: there is no transaction in progress LOCATION: EndTransactionBlock, xact.c:4086 ERROR: 42P07: relation "la_directednetwork" already exists LOCATION: heap_create_with_catalog, heap.c:1408 STATEMENT: create table LA_directednetwork (head int, tail int, tail_type int, duration int) DISTRIBUTE BY HASH(head) WARNING: 01000: unexpected EOF on datanode connection LOCATION: pgxc_node_receive, pgxcnode.c:463 ERROR: XX000: failed to send PREPARE TRANSACTION command to the node 16384 LOCATION: pgxc_node_remote_prepare, execRemote.c:1629 STATEMENT: create INDEX mdn on la_directednetwork(head) WARNING: 01000: unexpected EOF on datanode connection LOCATION: pgxc_node_receive, pgxcnode.c:463 LOG: 00000: Failed to ABORT at node 16384 Detail: unexpected EOF on datanode connection LOCATION: pgxc_node_remote_abort, execRemote.c:2039 LOG: 00000: Failed to ABORT an implicitly PREPARED transaction status - 7 LOCATION: pgxc_node_remote_abort, execRemote.c:2070 ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn WARNING: 01000: Unexpected data on connection, cleaning. LOCATION: acquire_connection, poolmgr.c:2141 LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, 
pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: create INDEX mdn on la_directednetwork(head) ERROR: 42704: index "mdn" does not exist LOCATION: DropErrorMsgNonExistent, tablecmds.c:746 STATEMENT: drop index mdn WARNING: 25P01: there is no transaction in progress LOCATION: EndTransactionBlock, xact.c:4086 LOG: 08006: failed to connect to Datanode LOCATION: grow_pool, poolmgr.c:2259 WARNING: 01000: can not connect to node 16384 LOCATION: acquire_connection, poolmgr.c:2153 LOG: 53000: failed to acquire connections LOCATION: pool_recvfds, poolcomm.c:623 STATEMENT: CHECKPOINT ERROR: 53000: Failed to get pooled connections LOCATION: get_handles, pgxcnode.c:1969 STATEMENT: CHECKPOINT |
From: Sandeep G. <gup...@gm...> - 2014-02-05 00:20:18
|
Hi Koichi, Thank you suggesting these parameters. Initially we did play around these. However, we used significantly higher values such checkpoint_timeout=30mins etc. Essentially we were trying parameters so as to avoid interference from autovaccum in the first place. The reason was using low values was to recreate the problem in the test setup. I did the regression tests with the new setting it is certainly better. It does crash but not so often. I will try to use in the application and see if runs in the main application. Also, I am running the same tests over a standalone PG (9.3 version I believe) and so far it has crashed. I haven't been too careful to make sure to use the exact same values for checkpoint parameters. Next email I will attach log files for review. Thanks. Sandeep On Tue, Feb 4, 2014 at 8:34 AM, Koichi Suzuki <koi...@gm...> wrote: > I looked at the log at datanode and found checkpoint is running too > frequently. Default checkpoint timeout is 5min. In your case, > checkpoint runs almost every five seconds (not minutes) in each > datanode. It is extraordinary. > > Could you try to tweak each datanode's postgresql.conf as follows? > > 1. Longer period for checkpoint_timeout. Default is 5min. 15min. > will be okay. > 2. Larger value for checkpoint_completion_target. Default is 0.5. > It should be okay. Larger value, such as 0.7, will make make > checkpoint work more smoothly. > 3. Larger value of checkpoint_segment. Default is 3. Because your > application updates the database very frequently, this number of > checkpoint segment will be exhausted very easily. Increase this to, > say, 30 or more. Each checkpoint_segment (in fact, WAL file) > consumes 16MB of your file space. I hope this is no problem to you at > all. > > I'm afraid too frequent checkpoint causes this kind of error (even > with vanilla PostgreSQL) and this situation is what you should avoid > both in PG and XC. > > Would like to know if things are improved. > > Best; > --- > Koichi Suzuki > > > 2014-02-04 Sandeep Gupta <gup...@gm...>: > > Hi Koichi, > > > > Just wanted to add that I have send across the datanode and coordinator > log > > files in my previous email. My hope is that it may give some insights > into > > what could be amiss and any ideas for workaround. > > > > > > Thanks. > > Sandeep > > > |
From: Koichi S. <koi...@gm...> - 2014-02-04 13:34:34
|
I looked at the log at datanode and found checkpoint is running too frequently. Default checkpoint timeout is 5min. In your case, checkpoint runs almost every five seconds (not minutes) in each datanode. It is extraordinary.

Could you try to tweak each datanode's postgresql.conf as follows?

1. Longer period for checkpoint_timeout. Default is 5min. 15min will be okay.
2. Larger value for checkpoint_completion_target. Default is 0.5, which should be okay, but a larger value, such as 0.7, will make checkpoint work more smoothly.
3. Larger value of checkpoint_segments. Default is 3. Because your application updates the database very frequently, this number of checkpoint segments will be exhausted very easily. Increase this to, say, 30 or more. Each checkpoint segment (in fact, a WAL file) consumes 16MB of your file space. I hope this is no problem to you at all.

I'm afraid too frequent checkpoints cause this kind of error (even with vanilla PostgreSQL) and this situation is what you should avoid both in PG and XC.

Would like to know if things are improved.

Best;
---
Koichi Suzuki


2014-02-04 Sandeep Gupta <gup...@gm...>:
> Hi Koichi,
>
> Just wanted to add that I have send across the datanode and coordinator log
> files in my previous email. My hope is that it may give some insights into
> what could be amiss and any ideas for workaround.
>
>
> Thanks.
> Sandeep
>
|
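Spelled out, the three suggestions above would land in each datanode's postgresql.conf roughly as follows (the numbers are just the examples given above; 30 segments reserve about 30 x 16MB = 480MB of disk for WAL):

    checkpoint_timeout = 15min            # default 5min
    checkpoint_completion_target = 0.7    # default 0.5
    checkpoint_segments = 30              # default 3; each segment is a 16MB WAL file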
From: Sandeep G. <gup...@gm...> - 2014-02-04 05:33:51
|
Hi Koichi,

Just wanted to add that I have sent across the datanode and coordinator log files in my previous email. My hope is that it may give some insights into what could be amiss and any ideas for a workaround.

Thanks.
Sandeep |
From: Sandeep G. <gup...@gm...> - 2014-02-04 01:49:06
|
Hi Koichi, We are not invoking the statements concurrently. On second thoughts my question on how "how to know if a command failed...etc" doesn't make sense. What is happening right now is that index creation fails with datanode shutting down. We did try a workaround by turning off the autovacuum but then the memory usage will hit 100% and would result in tuple not found error. We don't have to create index on the fly. We would just like to get the index build somehow. -Sandeep On Mon, Feb 3, 2014 at 4:46 PM, Koichi Suzuki <koi...@gm...> wrote: > Unless you invoke "CONCURRENTLY" statement, having psql to request > next command means the command completed. At present, XC does not > support "CONCURRENTLY" command. > > Please let us find a time to test "CREATE INDEX" in parallel with > autovacuum. > > Do you need to create index on the fly, I mean as a part of usual > database operation? If not, there could be some more workaround on > this. > > Regards; > --- > Koichi Suzuki > > > 2014-02-04 Sandeep Gupta <gup...@gm...>: > > Hi Ashutosh, > > > > For us the app+ autovaccum is quite harmful. We are not able to run the > > application because the index creation gets aborted in middle. The > datanodes > > crashes. > > > > We could somehow restart the datanodes and start the index creation but > my > > feeling it will happen quite often. > > > > > > I have a related question: is there anyway to know if a command has > failed. > > Usually we fire a command using psql. > > And move to the next command. Is there any way to know if the previous > > command failed or was a success? > > > > Thanks. > > Sandeep > > > > > > > > > > On Mon, Feb 3, 2014 at 1:13 PM, Sandeep Gupta <gup...@gm...> > > wrote: > >> > >> Hi Ashutosh, Koichi, > >> > >> Initially my feeling was that this was a postgres bug. That is why I > >> posted it in the postgres community. However, I now feel that it is due > to > >> the changes made in XC. > >> > >> I have started the same test on standalone postgres. So far it hasn't > >> crashed. My feeling is that it won't. If in case it does I will report > >> accordingly. > >> > >> As requested, I started the test with verbose log on. Attached are the > log > >> files for the coordinator and the datanodes. > >> There are several redundant messages that gets printed such as > "checkpoint > >> too often". Please use some filters etc. to view the log file. I > thought it > >> was best to send across the whole file. > >> > >> To clarify, I create a very large table (using copy) and then repeatedly > >> create and drop index. I understand this is not the actual workload but > that > >> was the only way to reproduce the error. > >> > >> The other complication is that in real system we get two kinds of > errors > >> "tuple on found" and this deadlock. I feel that they connected. > >> > >> Let me know if the log file help or is any other suggestions that you > have > >> may. > >> > >> -Sandeep > >> > >> > >> > >> > >> > >> > >> On Sun, Feb 2, 2014 at 11:49 PM, Ashutosh Bapat > >> <ash...@en...> wrote: > >>> > >>> Hi Sandeep, > >>> Can you please check if similar error happens on vanilla PG. It may be > an > >>> application + auto-vacuum error, which can happen in PG as well and > might be > >>> harmless. It's auto-vacuum being cancelled. Auto-vacuum will run again > >>> during the next iteration. 
> >>> > >>> > >>> On Fri, Jan 31, 2014 at 11:21 PM, Sandeep Gupta < > gup...@gm...> > >>> wrote: > >>>> > >>>> Hi, > >>>> > >>>> I was debugging an outstanding issue with pgxc > >>>> ( > http://sourceforge.net/mailarchive/forum.php?thread_name=CABEZHFtr_YoWb22UAnPGQz8M5KqpwzbviYiAgq_%3DY...@ma...&forum_name=postgres-xc-general > ). > >>>> > >>>> I couldn't reproduce that error. But I do get this error. > >>>> > >>>> > >>>> LOG: database system is ready to accept connections > >>>> LOG: autovacuum launcher started > >>>> LOG: sending cancel to blocking autovacuum PID 17222 > >>>> DETAIL: Process 13896 waits for AccessExclusiveLock on relation 16388 > >>>> of database 12626. > >>>> STATEMENT: drop index mdn > >>>> ERROR: canceling autovacuum task > >>>> CONTEXT: automatic analyze of table > >>>> "postgres.public.la_directednetwork" > >>>> PreAbort Remote > >>>> > >>>> > >>>> It seems to be a deadlock issue and may be related to the earlier > >>>> problem as well. > >>>> Please let me know your comments. > >>>> > >>>> -Sandeep > >>>> > >>>> > >>>> > >>>> > ------------------------------------------------------------------------------ > >>>> WatchGuard Dimension instantly turns raw network data into actionable > >>>> security intelligence. It gives you real-time visual feedback on key > >>>> security issues and trends. Skip the complicated setup - simply > import > >>>> a virtual appliance and go from zero to informed in seconds. > >>>> > >>>> > http://pubads.g.doubleclick.net/gampad/clk?id=123612991&iu=/4140/ostg.clktrk > >>>> _______________________________________________ > >>>> Postgres-xc-general mailing list > >>>> Pos...@li... > >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > >>>> > >>> > >>> > >>> > >>> -- > >>> Best Wishes, > >>> Ashutosh Bapat > >>> EnterpriseDB Corporation > >>> The Postgres Database Company > >> > >> > > > > > > > ------------------------------------------------------------------------------ > > Managing the Performance of Cloud-Based Applications > > Take advantage of what the Cloud has to offer - Avoid Common Pitfalls. > > Read the Whitepaper. > > > http://pubads.g.doubleclick.net/gampad/clk?id=121051231&iu=/4140/ostg.clktrk > > _______________________________________________ > > Postgres-xc-general mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > |
From: Koichi S. <koi...@gm...> - 2014-02-04 00:46:45
|
Unless you invoke "CONCURRENTLY" statement, having psql to request next command means the command completed. At present, XC does not support "CONCURRENTLY" command. Please let us find a time to test "CREATE INDEX" in parallel with autovacuum. Do you need to create index on the fly, I mean as a part of usual database operation? If not, there could be some more workaround on this. Regards; --- Koichi Suzuki 2014-02-04 Sandeep Gupta <gup...@gm...>: > Hi Ashutosh, > > For us the app+ autovaccum is quite harmful. We are not able to run the > application because the index creation gets aborted in middle. The datanodes > crashes. > > We could somehow restart the datanodes and start the index creation but my > feeling it will happen quite often. > > > I have a related question: is there anyway to know if a command has failed. > Usually we fire a command using psql. > And move to the next command. Is there any way to know if the previous > command failed or was a success? > > Thanks. > Sandeep > > > > > On Mon, Feb 3, 2014 at 1:13 PM, Sandeep Gupta <gup...@gm...> > wrote: >> >> Hi Ashutosh, Koichi, >> >> Initially my feeling was that this was a postgres bug. That is why I >> posted it in the postgres community. However, I now feel that it is due to >> the changes made in XC. >> >> I have started the same test on standalone postgres. So far it hasn't >> crashed. My feeling is that it won't. If in case it does I will report >> accordingly. >> >> As requested, I started the test with verbose log on. Attached are the log >> files for the coordinator and the datanodes. >> There are several redundant messages that gets printed such as "checkpoint >> too often". Please use some filters etc. to view the log file. I thought it >> was best to send across the whole file. >> >> To clarify, I create a very large table (using copy) and then repeatedly >> create and drop index. I understand this is not the actual workload but that >> was the only way to reproduce the error. >> >> The other complication is that in real system we get two kinds of errors >> "tuple on found" and this deadlock. I feel that they connected. >> >> Let me know if the log file help or is any other suggestions that you have >> may. >> >> -Sandeep >> >> >> >> >> >> >> On Sun, Feb 2, 2014 at 11:49 PM, Ashutosh Bapat >> <ash...@en...> wrote: >>> >>> Hi Sandeep, >>> Can you please check if similar error happens on vanilla PG. It may be an >>> application + auto-vacuum error, which can happen in PG as well and might be >>> harmless. It's auto-vacuum being cancelled. Auto-vacuum will run again >>> during the next iteration. >>> >>> >>> On Fri, Jan 31, 2014 at 11:21 PM, Sandeep Gupta <gup...@gm...> >>> wrote: >>>> >>>> Hi, >>>> >>>> I was debugging an outstanding issue with pgxc >>>> (http://sourceforge.net/mailarchive/forum.php?thread_name=CABEZHFtr_YoWb22UAnPGQz8M5KqpwzbviYiAgq_%3DY...@ma...&forum_name=postgres-xc-general). >>>> >>>> I couldn't reproduce that error. But I do get this error. >>>> >>>> >>>> LOG: database system is ready to accept connections >>>> LOG: autovacuum launcher started >>>> LOG: sending cancel to blocking autovacuum PID 17222 >>>> DETAIL: Process 13896 waits for AccessExclusiveLock on relation 16388 >>>> of database 12626. >>>> STATEMENT: drop index mdn >>>> ERROR: canceling autovacuum task >>>> CONTEXT: automatic analyze of table >>>> "postgres.public.la_directednetwork" >>>> PreAbort Remote >>>> >>>> >>>> It seems to be a deadlock issue and may be related to the earlier >>>> problem as well. 
>>>> Please let me know your comments. >>>> >>>> -Sandeep >>>> >>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> WatchGuard Dimension instantly turns raw network data into actionable >>>> security intelligence. It gives you real-time visual feedback on key >>>> security issues and trends. Skip the complicated setup - simply import >>>> a virtual appliance and go from zero to informed in seconds. >>>> >>>> http://pubads.g.doubleclick.net/gampad/clk?id=123612991&iu=/4140/ostg.clktrk >>>> _______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>> >>> >>> >>> -- >>> Best Wishes, >>> Ashutosh Bapat >>> EnterpriseDB Corporation >>> The Postgres Database Company >> >> > > > ------------------------------------------------------------------------------ > Managing the Performance of Cloud-Based Applications > Take advantage of what the Cloud has to offer - Avoid Common Pitfalls. > Read the Whitepaper. > http://pubads.g.doubleclick.net/gampad/clk?id=121051231&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
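On the side question of detecting failures when statements are fed to psql from a script, the exit status can be made reliable with ON_ERROR_STOP (the file name below is just an example):

    $ psql -v ON_ERROR_STOP=1 -f build_index.sql postgres
    $ echo $?     # 0 if every statement succeeded; non-zero if one failed and the script stopped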