From: Lionel F. <lio...@gm...> - 2011-05-31 07:26:11
Hi,

Yes, persistent_datanode_connections is now set to off - it may not be
related to the issues I have.

What amount of memory do you have on your datanodes & coordinator?
Here are my settings:
  datanode:    shared_buffers = 512MB
  coordinator: shared_buffers = 256MB (now; was 96MB)

I still get, for some tables distributed by hash:
  "ERROR: Could not commit prepared transaction implicitely"

For the distribution syntax, yes, I found your webpage talking about the
regression tests.

> You also have to know that it is important to set a limit of connections on
> datanodes equal to the sum of max connections on all coordinators.
> For example, if your cluster is using 2 coordinators with 20 max connections
> each, you may have a maximum of 40 connections to datanodes.

Ok, tweaking this today and launching the tests again...

Lionel F.

2011/5/31 Michael Paquier <mic...@gm...>:
>
> On Mon, May 30, 2011 at 7:34 PM, Lionel Frachon <lio...@gm...> wrote:
>>
>> Hi again,
>>
>> I turned off connection pooling on the coordinator (dunno why it stayed
>> on), raised the shared_buffers of the coordinator, allowed 1000
>> connections and the error disappeared.
>
> I am not really sure I get the meaning of this, but how did you turn off
> the pooler on the coordinator?
> Did you use the parameter persistent_connections?
> Connection pooling from the coordinator is an automatic feature and you have
> to use it if you want to connect from a remote coordinator to backend XC nodes.
>
> You also have to know that it is important to set a limit of connections on
> datanodes equal to the sum of max connections on all coordinators.
> For example, if your cluster is using 2 coordinators with 20 max connections
> each, you may have a maximum of 40 connections to datanodes.
> This uses a lot of shared buffers on a node, but typically this maximum
> number of connections is never reached thanks to the connection pooling.
> Please note also that the number of Coordinator <-> Coordinator connections
> may also increase if DDL is used from several coordinators.
>
>> However, all data is still going to one node (whatever I choose
>> as primary datanode), with 40 warehouses... any specific syntax
>> to load balance warehouses over nodes?
>
> CREATE TABLE foo (column_key type, other_column int) DISTRIBUTE BY
> HASH(column_key);
> --
> Michael Paquier
> http://michael.otacoo.com
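As a concrete illustration of the sizing rule quoted above, the postgresql.conf fragments would look roughly like this (the values are from Michael's 2-coordinator example, not from Lionel's actual cluster):

```ini
# coordinator 1, postgresql.conf
max_connections = 20

# coordinator 2, postgresql.conf
max_connections = 20

# every datanode, postgresql.conf:
# at least the sum of max_connections over all coordinators (20 + 20)
max_connections = 40
```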
From: Michael P. <mic...@gm...> - 2011-05-30 23:54:33
On Mon, May 30, 2011 at 7:34 PM, Lionel Frachon <lio...@gm...> wrote:

> Hi again,
>
> I turned off connection pooling on the coordinator (dunno why it stayed
> on), raised the shared_buffers of the coordinator, allowed 1000
> connections and the error disappeared.

I am not really sure I get the meaning of this, but how did you turn off
the pooler on the coordinator? Did you use the parameter
persistent_connections?

Connection pooling from the coordinator is an automatic feature and you
have to use it if you want to connect from a remote coordinator to backend
XC nodes.

You also have to know that it is important to set a limit of connections on
datanodes equal to the sum of max connections on all coordinators. For
example, if your cluster is using 2 coordinators with 20 max connections
each, you may have a maximum of 40 connections to datanodes. This uses a
lot of shared buffers on a node, but typically this maximum number of
connections is never reached thanks to the connection pooling.

Please note also that the number of Coordinator <-> Coordinator connections
may also increase if DDL is used from several coordinators.

> However, all data is still going to one node (whatever I choose
> as primary datanode), with 40 warehouses... any specific syntax
> to load balance warehouses over nodes?

CREATE TABLE foo (column_key type, other_column int) DISTRIBUTE BY
HASH(column_key);
--
Michael Paquier
http://michael.otacoo.com
From: Lionel F. <lio...@gm...> - 2011-05-30 10:34:31
Hi again,

I turned off connection pooling on the coordinator (dunno why it stayed
on), raised the shared_buffers of the coordinator, allowed 1000 connections
and the error disappeared.

However, all data is still going to one node (whatever I choose as primary
datanode), with 40 warehouses... any specific syntax to load balance
warehouses over nodes?

Lionel F.

2011/5/30 Lionel Frachon <lio...@gm...>:
> Hello,
>
> Tried again with your compilation options; the datanodes connect
> correctly to the gtm, and the coordinator as well, but on loadData.sh
> only one node is loaded, whereas the 2 other ones complain:
>
> "STATEMENT: COMMIT PREPARED 'T709'
> ERROR: prepared transaction with identifier "T711" does not exist
> STATEMENT: COMMIT PREPARED 'T711'
> ERROR: prepared transaction with identifier "T713" does not exist
> STATEMENT: COMMIT PREPARED 'T713'"
>
> and the loader itself complains as well:
>
> ERROR: Could not commit prepared transaction implicitely
> Elasped Time(ms): 0.86 Writing record 215000 of 500000
>
> Looks like a lack of communication between them?
>
> Lionel F.
>
> 2011/5/30 Lionel Frachon <lio...@gm...>:
>> Ok, testing your compile flags.
>>
>> On install, I'm adding an /etc/ld.so.conf.d/pgxc.conf file pointing to
>> /usr/local/pgsql/lib, then doing an ldconfig to ensure the lib is
>> enabled system-wide. Will add LD_LIBRARY_PATH in the user env just to be
>> sure (but I think it already works properly; if I miss it, the
>> server does not start...)
>>
>> Lionel F.
>>
>> 2011/5/30 Michael Paquier <mic...@gm...>:
>>> Compilation looks to be correct, I myself use this one:
>>> ./configure CFLAGS="-DPGXC -O2" --enable-depend --enable-debug
>>> --disable-rpath --enable-cassert
>>> but even if you define the flag before configure it works correctly.
>>>
>>> On Mon, May 30, 2011 at 4:52 PM, Lionel Frachon <lio...@gm...> wrote:
>>>>
>>>> Hi Michael,
>>>>
>>>> Thanks for your tests and involvement. The error on my side may be a
>>>> compilation, installation or environment problem (at this point, I
>>>> have no clues); here are the CFLAGS I use:
>>>>
>>>> CFLAGS="-O2"
>>>> and configure:
>>>> "./configure --enable-debug --disable-rpath --enable-depend"
>>>>
>>>> I'm then packing everything in /usr/local/pgsql with rpmbuild. Is that
>>>> a good method (apart from compiling directly on the host)?
>>>>
>>>> Are there any env variables (like /etc/security/limits.conf tweaking,
>>>> semaphores or whatever) I should be aware of?
>>>
>>> LD_LIBRARY_PATH is an environment variable you should set to point to the
>>> correct XC libraries.
>>> It is the only thing that may mess up your settings I think.
>>> --
>>> Michael Paquier
>>> http://michael.otacoo.com
From: Lionel F. <lio...@gm...> - 2011-05-30 09:40:23
Hello,

Tried again with your compilation options; the datanodes connect correctly
to the gtm, and the coordinator as well, but on loadData.sh only one node
is loaded, whereas the 2 other ones complain:

"STATEMENT: COMMIT PREPARED 'T709'
ERROR: prepared transaction with identifier "T711" does not exist
STATEMENT: COMMIT PREPARED 'T711'
ERROR: prepared transaction with identifier "T713" does not exist
STATEMENT: COMMIT PREPARED 'T713'"

and the loader itself complains as well:

ERROR: Could not commit prepared transaction implicitely
Elasped Time(ms): 0.86 Writing record 215000 of 500000

Looks like a lack of communication between them?

Lionel F.

2011/5/30 Lionel Frachon <lio...@gm...>:
> Ok, testing your compile flags.
>
> On install, I'm adding an /etc/ld.so.conf.d/pgxc.conf file pointing to
> /usr/local/pgsql/lib, then doing an ldconfig to ensure the lib is
> enabled system-wide. Will add LD_LIBRARY_PATH in the user env just to be
> sure (but I think it already works properly; if I miss it, the
> server does not start...)
>
> Lionel F.
>
> 2011/5/30 Michael Paquier <mic...@gm...>:
>> Compilation looks to be correct, I myself use this one:
>> ./configure CFLAGS="-DPGXC -O2" --enable-depend --enable-debug
>> --disable-rpath --enable-cassert
>> but even if you define the flag before configure it works correctly.
>>
>> On Mon, May 30, 2011 at 4:52 PM, Lionel Frachon <lio...@gm...> wrote:
>>>
>>> Hi Michael,
>>>
>>> Thanks for your tests and involvement. The error on my side may be a
>>> compilation, installation or environment problem (at this point, I
>>> have no clues); here are the CFLAGS I use:
>>>
>>> CFLAGS="-O2"
>>> and configure:
>>> "./configure --enable-debug --disable-rpath --enable-depend"
>>>
>>> I'm then packing everything in /usr/local/pgsql with rpmbuild. Is that
>>> a good method (apart from compiling directly on the host)?
>>>
>>> Are there any env variables (like /etc/security/limits.conf tweaking,
>>> semaphores or whatever) I should be aware of?
>>
>> LD_LIBRARY_PATH is an environment variable you should set to point to the
>> correct XC libraries.
>> It is the only thing that may mess up your settings I think.
>> --
>> Michael Paquier
>> http://michael.otacoo.com
From: Lionel F. <lio...@gm...> - 2011-05-30 08:11:53
Ok, testing your compile flags.

On install, I'm adding an /etc/ld.so.conf.d/pgxc.conf file pointing to
/usr/local/pgsql/lib, then doing an ldconfig to ensure the lib is enabled
system-wide. Will add LD_LIBRARY_PATH in the user env just to be sure (but
I think it already works properly; if I miss it, the server does not
start...)

Lionel F.

2011/5/30 Michael Paquier <mic...@gm...>:
> Compilation looks to be correct, I myself use this one:
> ./configure CFLAGS="-DPGXC -O2" --enable-depend --enable-debug
> --disable-rpath --enable-cassert
> but even if you define the flag before configure it works correctly.
>
> On Mon, May 30, 2011 at 4:52 PM, Lionel Frachon <lio...@gm...> wrote:
>>
>> Hi Michael,
>>
>> Thanks for your tests and involvement. The error on my side may be a
>> compilation, installation or environment problem (at this point, I
>> have no clues); here are the CFLAGS I use:
>>
>> CFLAGS="-O2"
>> and configure:
>> "./configure --enable-debug --disable-rpath --enable-depend"
>>
>> I'm then packing everything in /usr/local/pgsql with rpmbuild. Is that
>> a good method (apart from compiling directly on the host)?
>>
>> Are there any env variables (like /etc/security/limits.conf tweaking,
>> semaphores or whatever) I should be aware of?
>
> LD_LIBRARY_PATH is an environment variable you should set to point to the
> correct XC libraries.
> It is the only thing that may mess up your settings I think.
> --
> Michael Paquier
> http://michael.otacoo.com
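For reference, the library setup described above amounts to the following sketch (assuming the /usr/local/pgsql prefix used in this thread; note the command is ldconfig, not "lddconfig", and the system-wide part needs root):

```shell
# Make the XC libraries visible system-wide (run as root):
echo '/usr/local/pgsql/lib' > /etc/ld.so.conf.d/pgxc.conf
ldconfig

# And/or per user, e.g. in ~/.bash_profile:
export LD_LIBRARY_PATH=/usr/local/pgsql/lib:$LD_LIBRARY_PATH
```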
From: Michael P. <mic...@gm...> - 2011-05-30 08:08:21
Compilation looks to be correct, I myself use this one:

./configure CFLAGS="-DPGXC -O2" --enable-depend --enable-debug
--disable-rpath --enable-cassert

but even if you define the flag before configure it works correctly.

On Mon, May 30, 2011 at 4:52 PM, Lionel Frachon <lio...@gm...> wrote:

> Hi Michael,
>
> Thanks for your tests and involvement. The error on my side may be a
> compilation, installation or environment problem (at this point, I
> have no clues); here are the CFLAGS I use:
>
> CFLAGS="-O2"
> and configure:
> "./configure --enable-debug --disable-rpath --enable-depend"
>
> I'm then packing everything in /usr/local/pgsql with rpmbuild. Is that
> a good method (apart from compiling directly on the host)?
>
> Are there any env variables (like /etc/security/limits.conf tweaking,
> semaphores or whatever) I should be aware of?

LD_LIBRARY_PATH is an environment variable you should set to point to the
correct XC libraries. It is the only thing that may mess up your settings,
I think.
--
Michael Paquier
http://michael.otacoo.com
From: Lionel F. <lio...@gm...> - 2011-05-30 07:52:36
Hi Michael,

Thanks for your tests and involvement. The error on my side may be a
compilation, installation or environment problem (at this point, I have no
clues); here are the CFLAGS I use:

CFLAGS="-O2"
and configure:
"./configure --enable-debug --disable-rpath --enable-depend"

I'm then packing everything in /usr/local/pgsql with rpmbuild. Is that a
good method (apart from compiling directly on the host)?

Are there any env variables (like /etc/security/limits.conf tweaking,
semaphores or whatever) I should be aware of?

Thanks. Kind regards

Lionel F.

2011/5/30 Michael Paquier <mic...@gm...>:
> Hi,
>
> I tested the benchmark with the latest head code, and it looks to work on
> my side without crashing, respecting what you said in your previous email
> and what is written in the README:
>
> # Create database
> createdb test
> # Create tables
> ./runSQL.sh postgres.properties sqlTableCreates
> # Data generation
> ./loadData.sh postgres.properties numWarehouses 1
> # Index creation
> ./runSQL.sh postgres.properties sqlIndexCreates
> # To launch the test terminal
> ./runBenchmark.sh postgres.properties
>
> With the following properties:
> driver=org.postgresql.Driver
> conn=jdbc:postgresql://localhost:5432/test
> user=******
> password=*****
>
> I found in the logs some errors such as:
> LOG: execute <unnamed>: SELECT c_discount, c_last, c_credit, w_tax FROM
> customer, warehouse WHERE w_id = $1 AND w_id = c_w_id AND c_d_id = $2 AND
> c_id = $3
> DETAIL: parameters: $1 = '1', $2 = '10', $3 = '1549'
> ERROR: bind message supplies 3 parameters, but prepared statement ""
> requires 0
>
> But this is expected as XC does not yet manage multi-prepared queries and
> BenchmarkSQL looks to use it.

Yes, I've seen it and saw your warning; as you said, it should however be
sufficient to get some correct results regarding perfs/scalability.

> Btw, this is enough to get some results.
>
> Also, I am wondering about this error you got:
> LOG: could not create IPv6 socket: Address family not supported by protocol
> I am using here only IPv4 protocol.
> --
> Michael Paquier
> http://michael.otacoo.com
From: Michael P. <mic...@gm...> - 2011-05-30 01:58:14
There is also something I noticed with the way data is distributed among
nodes.

Now all the tables are using the warehouse ID as a distribution key (this
looks exactly like the TPC-C benchmark test). If you let the benchmark run
like that by default with only 1 warehouse, you will finish with all your
data located on the same node, resulting in bad performance.

If you are looking for performance tests, you may need to use far more
warehouses (don't know, 20~100) or a different distribution key, but
considering all the tables, warehouse ID is definitely the best
distribution key.

So, I would recommend using a high number of warehouses and distributing
all the tables with warehouse ID as the distribution key. The item table
does not use warehouse ID, so it is better to replicate this table among
nodes.

On Mon, May 30, 2011 at 10:50 AM, Michael Paquier <mic...@gm...> wrote:
> Hi,
>
> I tested the benchmark with the latest head code, and it looks to work on
> my side without crashing, respecting what you said in your previous email
> and what is written in the README:
>
> # Create database
> createdb test
> # Create tables
> ./runSQL.sh postgres.properties sqlTableCreates
> # Data generation
> ./loadData.sh postgres.properties numWarehouses 1
> # Index creation
> ./runSQL.sh postgres.properties sqlIndexCreates
> # To launch the test terminal
> ./runBenchmark.sh postgres.properties
>
> With the following properties:
> driver=org.postgresql.Driver
> conn=jdbc:postgresql://localhost:5432/test
> user=******
> password=*****
>
> I found in the logs some errors such as:
> LOG: execute <unnamed>: SELECT c_discount, c_last, c_credit, w_tax FROM
> customer, warehouse WHERE w_id = $1 AND w_id = c_w_id AND c_d_id = $2 AND
> c_id = $3
> DETAIL: parameters: $1 = '1', $2 = '10', $3 = '1549'
> ERROR: bind message supplies 3 parameters, but prepared statement ""
> requires 0
>
> But this is expected as XC does not yet manage multi-prepared queries and
> BenchmarkSQL looks to use it.
> Btw, this is enough to get some results.
>
> Also, I am wondering about this error you got:
> LOG: could not create IPv6 socket: Address family not supported by
> protocol
> I am using here only IPv4 protocol.
> --
> Michael Paquier
> http://michael.otacoo.com

Regards,
--
Michael Paquier
http://michael.otacoo.com
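Following that advice, the DDL would look something like this (a sketch using TPC-C style table and column names, not the exact BenchmarkSQL schema):

```sql
-- Warehouse-keyed tables: hash on the warehouse ID so rows spread across nodes.
CREATE TABLE warehouse (w_id INT, w_name VARCHAR(10), w_tax NUMERIC(4,4))
    DISTRIBUTE BY HASH(w_id);
CREATE TABLE customer (c_w_id INT, c_d_id INT, c_id INT, c_last VARCHAR(16))
    DISTRIBUTE BY HASH(c_w_id);
-- item has no warehouse column, so keep a full copy on every node instead.
CREATE TABLE item (i_id INT, i_name VARCHAR(24), i_price NUMERIC(5,2))
    DISTRIBUTE BY REPLICATION;
```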
From: Michael P. <mic...@gm...> - 2011-05-30 01:50:36
Hi,

I tested the benchmark with the latest head code, and it looks to work on
my side without crashing, respecting what you said in your previous email
and what is written in the README:

# Create database
createdb test
# Create tables
./runSQL.sh postgres.properties sqlTableCreates
# Data generation
./loadData.sh postgres.properties numWarehouses 1
# Index creation
./runSQL.sh postgres.properties sqlIndexCreates
# To launch the test terminal
./runBenchmark.sh postgres.properties

With the following properties:
driver=org.postgresql.Driver
conn=jdbc:postgresql://localhost:5432/test
user=******
password=*****

I found in the logs some errors such as:

LOG: execute <unnamed>: SELECT c_discount, c_last, c_credit, w_tax FROM
customer, warehouse WHERE w_id = $1 AND w_id = c_w_id AND c_d_id = $2 AND
c_id = $3
DETAIL: parameters: $1 = '1', $2 = '10', $3 = '1549'
ERROR: bind message supplies 3 parameters, but prepared statement ""
requires 0

But this is expected as XC does not yet manage multi-prepared queries and
BenchmarkSQL looks to use it. Btw, this is enough to get some results.

Also, I am wondering about this error you got:
LOG: could not create IPv6 socket: Address family not supported by protocol
I am using here only IPv4 protocol.
--
Michael Paquier
http://michael.otacoo.com
From: Lionel F. <lio...@gm...> - 2011-05-27 10:15:45
Hi,

coord=1, data1=1, data2=2 is the conf I use.

Actually, I'm using:
- BenchmarkSQL (http://sourceforge.net/projects/benchmarksql/),
- the postgresql-9.0-801.jdbc3.jar driver (to be changed in runSQL.sh and
  loadData.sh),
- the postgres.properties file (in the run subdirectory), which contains
  "driver=org.postgresql.Driver
  conn=jdbc:postgresql://10.114.12.26:26001/testperfs
  user=pgxc
  password=pgxc"
- the testperfs database is created on initialization through the
  coordinator, and I verified it is present on both nodes,
- then running './runSQL.sh postgres.properties sqlCreateTables'.

For the initial problem, I was running:
loadData.sh postgres.properties numWarehouses 1

Hope you can reproduce the problem...

Lionel F.

2011/5/27 Michael Paquier <mic...@gm...>:
>
> On Fri, May 27, 2011 at 5:55 PM, Lionel Frachon <lio...@gm...> wrote:
>>
>> Hi,
>>
>>> Have you setup correctly data_node_hosts and data_node_ports on
>>> Coordinator?
>>
>> Coordinator shows:
>> num_data_nodes = 2
>> num_coordinators = 1
>> data_node_hosts = '10.114.12.26,10.114.12.14'
>> data_node_ports = '25001,25001'
>>
>> (However, the coordinator does not start with these options
>>
>> #data_node_users = pgxc
>> #coordinator_users = pgxc
>>
>> generated through pgxc_config, so I removed them. Any importance?)
>
> No, they are not.
> *_users and *_passwds options generated by the configurator have been
> removed in the current head to allow multiple users to use the pooler at
> the same time with different connection pooling.
>
>> I see them connected through 'netstat -apn' on both sides, and node 2
>> connected with the gtm.
>>
>>> Have you used the -i option on the datanode to be sure that it can
>>> accept TCP/IP remote connections and not only local ones?
>>
>> Yes, the launch option is -i -p 25001
>
> So no problem.
>
>>> Perhaps it is a problem with pg_hba.conf.
>>
>> It's set to trust on the local segment network, and I had errors with it
>> that I corrected, so assuming it's working now :) psql is working
>> properly when creating a database remotely.
>
> You got the basics.
>
>>> gtm_host and gtm_port are correctly set for both Coordinator and
>>> datanodes.
>>
>> They are:
>> Node 2:
>> datanode/postgresql.conf:gtm_host = '10.114.12.26'
>> datanode/postgresql.conf:gtm_port = 16680
>>
>> Node 1:
>> coord/postgresql.conf:gtm_host = '10.114.12.26'
>> coord/postgresql.conf:gtm_port = 16680
>> datanode/postgresql.conf:gtm_host = '10.114.12.26'
>> datanode/postgresql.conf:gtm_port = 16680
>>
>> Of course, the gtm is running on 16680. And I confirmed they are
>> TCP-connected through netstat.
>
> So, this is also OK.
>
>>> Is pgxc_node_id set differently for your datanodes?
>>
>> Yes, but the doc is not crystal clear on that; are pgxc_node ids to be
>> all different between coord & data nodes? Meaning
>
> IDs have to be different for Coordinators and for datanodes.
> This is used when registering nodes on the GTM. However, registration is
> also made by node type: coordinator and datanode. So it doesn't matter to
> have a datanode 1 and a coordinator 1 at the same time.
>
>> coord=1, data1=2, data2=3
>> or
>> coord=1, data1=1, data2=2 ?
>
> The second configuration is OK.
>
>> On the first JDBC query, the coordinator shows an "EOF on client
>> connection".
>
> My JDBC is running smoothly... We may have different settings.
> What kind of queries are you launching so that I can have a try?
> --
> Michael Paquier
> http://michael.otacoo.com
From: Michael P. <mic...@gm...> - 2011-05-27 09:28:43
On Fri, May 27, 2011 at 5:55 PM, Lionel Frachon <lio...@gm...> wrote:

> Hi,
>
>> Have you setup correctly data_node_hosts and data_node_ports on
>> Coordinator?
>
> Coordinator shows:
> num_data_nodes = 2
> num_coordinators = 1
> data_node_hosts = '10.114.12.26,10.114.12.14'
> data_node_ports = '25001,25001'
>
> (However, the coordinator does not start with these options
>
> #data_node_users = pgxc
> #coordinator_users = pgxc
>
> generated through pgxc_config, so I removed them. Any importance?)

No, they are not. *_users and *_passwds options generated by the
configurator have been removed in the current head to allow multiple users
to use the pooler at the same time with different connection pooling.

> I see them connected through 'netstat -apn' on both sides, and node 2
> connected with the gtm.
>
>> Have you used the -i option on the datanode to be sure that it can accept
>> TCP/IP remote connections and not only local ones?
>
> Yes, the launch option is -i -p 25001

So no problem.

>> Perhaps it is a problem with pg_hba.conf.
>
> It's set to trust on the local segment network, and I had errors with it
> that I corrected, so assuming it's working now :) psql is working
> properly when creating a database remotely.

You got the basics.

>> gtm_host and gtm_port are correctly set for both Coordinator and
>> datanodes.
>
> They are:
> Node 2:
> datanode/postgresql.conf:gtm_host = '10.114.12.26'
> datanode/postgresql.conf:gtm_port = 16680
>
> Node 1:
> coord/postgresql.conf:gtm_host = '10.114.12.26'
> coord/postgresql.conf:gtm_port = 16680
> datanode/postgresql.conf:gtm_host = '10.114.12.26'
> datanode/postgresql.conf:gtm_port = 16680
>
> Of course, the gtm is running on 16680. And I confirmed they are
> TCP-connected through netstat.

So, this is also OK.

>> Is pgxc_node_id set differently for your datanodes?
>
> Yes, but the doc is not crystal clear on that; are pgxc_node ids to be all
> different between coord & data nodes? Meaning

IDs have to be different for Coordinators and for datanodes. This is used
when registering nodes on the GTM. However, registration is also made by
node type: coordinator and datanode. So it doesn't matter to have a
datanode 1 and a coordinator 1 at the same time.

> coord=1, data1=2, data2=3
> or
> coord=1, data1=1, data2=2 ?

The second configuration is OK.

> On the first JDBC query, the coordinator shows an "EOF on client
> connection".

My JDBC is running smoothly... We may have different settings. What kind of
queries are you launching so that I can have a try?
--
Michael Paquier
http://michael.otacoo.com
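Spelled out, the second configuration corresponds to these postgresql.conf settings (a sketch; one file per node process, and IDs only need to be unique within a node type):

```ini
# node 1, coordinator postgresql.conf
pgxc_node_id = 1

# node 1, datanode postgresql.conf (same ID is fine: different node type)
pgxc_node_id = 1

# node 2, datanode postgresql.conf
pgxc_node_id = 2
```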
From: Lionel F. <lio...@gm...> - 2011-05-27 08:58:44
To have the exact trace:

==> /var/log/pgxc/coordinator.log <==
LOG: could not create IPv6 socket: Address family not supported by protocol
LOG: database system was shut down at 2011-05-27 10:57:27 CEST
LOG: database system is ready to accept connections
LOG: unexpected EOF on client connection

Lionel F.

2011/5/27 Lionel Frachon <lio...@gm...>:
> Hi,
>
>> Have you setup correctly data_node_hosts and data_node_ports on
>> Coordinator?
>
> Coordinator shows:
> num_data_nodes = 2
> num_coordinators = 1
> data_node_hosts = '10.114.12.26,10.114.12.14'
> data_node_ports = '25001,25001'
>
> (However, the coordinator does not start with these options
>
> #data_node_users = pgxc
> #coordinator_users = pgxc
>
> generated through pgxc_config, so I removed them. Any importance?)
>
> I see them connected through 'netstat -apn' on both sides, and node 2
> connected with the gtm.
>
>> Have you used the -i option on the datanode to be sure that it can accept
>> TCP/IP remote connections and not only local ones?
>
> Yes, the launch option is -i -p 25001
>
>> Perhaps it is a problem with pg_hba.conf.
>
> It's set to trust on the local segment network, and I had errors with it
> that I corrected, so assuming it's working now :) psql is working
> properly when creating a database remotely.
>
>> gtm_host and gtm_port are correctly set for both Coordinator and
>> datanodes.
>
> They are:
> Node 2:
> datanode/postgresql.conf:gtm_host = '10.114.12.26'
> datanode/postgresql.conf:gtm_port = 16680
>
> Node 1:
> coord/postgresql.conf:gtm_host = '10.114.12.26'
> coord/postgresql.conf:gtm_port = 16680
> datanode/postgresql.conf:gtm_host = '10.114.12.26'
> datanode/postgresql.conf:gtm_port = 16680
>
> Of course, the gtm is running on 16680. And I confirmed they are
> TCP-connected through netstat.
>
>> Is pgxc_node_id set differently for your datanodes?
>
> Yes, but the doc is not crystal clear on that; are pgxc_node ids to be all
> different between coord & data nodes? Meaning
>
> coord=1, data1=2, data2=3
> or
> coord=1, data1=1, data2=2 ?
>
> (already tried some combinations...)
>
> On the first JDBC query, the coordinator shows an "EOF on client
> connection".
>
> Thanks (and thanks to Benny for the fix; will compile the newest build in
> parallel)
>
> Lionel Frachon
>
> 2011/5/27 Michael Paquier <mic...@gm...>:
>> This may be a setting problem.
>> Have you setup correctly data_node_hosts and data_node_ports on the
>> Coordinator? If your coordinator cannot connect to a backend datanode it
>> may be this problem.
>>
>> Have you used the -i option on the datanode to be sure that it can accept
>> TCP/IP remote connections and not only local ones?
>> Perhaps it is a problem with pg_hba.conf.
>> Have you set up this file for both nodes correctly?
>>
>> If a node complains about the GTM not being accessible, you may also need
>> to check that gtm_host and gtm_port are correctly set for both
>> Coordinator and datanodes.
>>
>> You also have to take care about the node IDs used in each configuration
>> file.
>> Is pgxc_node_id set differently for your datanodes?
>>
>> On Fri, May 27, 2011 at 2:47 PM, Lionel Frachon <lio...@gm...> wrote:
>>>
>>> Hi Michael,
>>>
>>> I believe this is the kind of query generated by BenchmarkSQL in the
>>> "load data" stage; trying to load a 2-node cluster yesterday
>>> (1 gtm+coord+datanode, 1 datanode), only one is loaded, and the other
>>> one complains, saying "No GTM snapshot available", and thus does not
>>> load anything. Table content is not accessible either from the second
>>> node. I tried to switch the preferred_node and master node settings, to
>>> try to force the writes on the second node first, but without positive
>>> result. However, a std psql query distributes the work properly when
>>> issued through the coordinator.
>>>
>>> I'll try to revert today to the previous build (05/18) to see if
>>> there's any behaviour difference.
>>>
>>> Any clue appreciated ^^
>>>
>>> Lionel Frachon
>>>
>>> 2011/5/26 Michael Paquier <mic...@gm...>:
>>>> Btw, if I were you I'd avoid using multiple-value INSERT with JDBC.
>>>> Those are queries like:
>>>> INSERT INTO table_ex VALUES (1),(2),(3),(4);
>>>> I noticed today during tests that it wasn't working properly.
>>>> However, single-value INSERT and normal queries work fine.
>>>> --
>>>> Michael Paquier
>>>> http://michael.otacoo.com
>>
>> --
>> Michael Paquier
>> http://michael.otacoo.com
From: Lionel F. <lio...@gm...> - 2011-05-27 08:55:49
Hi,

> Have you setup correctly data_node_hosts and data_node_ports on
> Coordinator?

Coordinator shows:
num_data_nodes = 2
num_coordinators = 1
data_node_hosts = '10.114.12.26,10.114.12.14'
data_node_ports = '25001,25001'

(However, the coordinator does not start with these options

#data_node_users = pgxc
#coordinator_users = pgxc

generated through pgxc_config, so I removed them. Any importance?)

I see them connected through 'netstat -apn' on both sides, and node 2
connected with the gtm.

> Have you used the -i option on the datanode to be sure that it can accept
> TCP/IP remote connections and not only local ones?

Yes, the launch option is -i -p 25001

> Perhaps it is a problem with pg_hba.conf.

It's set to trust on the local segment network, and I had errors with it
that I corrected, so assuming it's working now :) psql is working properly
when creating a database remotely.

> gtm_host and gtm_port are correctly set for both Coordinator and
> datanodes.

They are:
Node 2:
datanode/postgresql.conf:gtm_host = '10.114.12.26'
datanode/postgresql.conf:gtm_port = 16680

Node 1:
coord/postgresql.conf:gtm_host = '10.114.12.26'
coord/postgresql.conf:gtm_port = 16680
datanode/postgresql.conf:gtm_host = '10.114.12.26'
datanode/postgresql.conf:gtm_port = 16680

Of course, the gtm is running on 16680. And I confirmed they are
TCP-connected through netstat.

> Is pgxc_node_id set differently for your datanodes?

Yes, but the doc is not crystal clear on that; are pgxc_node ids to be all
different between coord & data nodes? Meaning

coord=1, data1=2, data2=3
or
coord=1, data1=1, data2=2 ?

(already tried some combinations...)

On the first JDBC query, the coordinator shows an "EOF on client
connection".

Thanks (and thanks to Benny for the fix; will compile the newest build in
parallel)

Lionel Frachon

2011/5/27 Michael Paquier <mic...@gm...>:
> This may be a setting problem.
> Have you setup correctly data_node_hosts and data_node_ports on the
> Coordinator? If your coordinator cannot connect to a backend datanode it
> may be this problem.
>
> Have you used the -i option on the datanode to be sure that it can accept
> TCP/IP remote connections and not only local ones?
> Perhaps it is a problem with pg_hba.conf.
> Have you set up this file for both nodes correctly?
>
> If a node complains about the GTM not being accessible, you may also need
> to check that gtm_host and gtm_port are correctly set for both Coordinator
> and datanodes.
>
> You also have to take care about the node IDs used in each configuration
> file.
> Is pgxc_node_id set differently for your datanodes?
>
> On Fri, May 27, 2011 at 2:47 PM, Lionel Frachon <lio...@gm...> wrote:
>>
>> Hi Michael,
>>
>> I believe this is the kind of query generated by BenchmarkSQL in the
>> "load data" stage; trying to load a 2-node cluster yesterday
>> (1 gtm+coord+datanode, 1 datanode), only one is loaded, and the other
>> one complains, saying "No GTM snapshot available", and thus does not load
>> anything. Table content is not accessible either from the second node.
>> I tried to switch the preferred_node and master node settings, to try
>> to force the writes on the second node first, but without positive
>> result. However, a std psql query distributes the work properly when
>> issued through the coordinator.
>>
>> I'll try to revert today to the previous build (05/18) to see if
>> there's any behaviour difference.
>>
>> Any clue appreciated ^^
>>
>> Lionel Frachon
>>
>> 2011/5/26 Michael Paquier <mic...@gm...>:
>>> Btw, if I were you I'd avoid using multiple-value INSERT with JDBC.
>>> Those are queries like:
>>> INSERT INTO table_ex VALUES (1),(2),(3),(4);
>>> I noticed today during tests that it wasn't working properly.
>>> However, single-value INSERT and normal queries work fine.
>>> --
>>> Michael Paquier
>>> http://michael.otacoo.com
>
> --
> Michael Paquier
> http://michael.otacoo.com
From: Michael P. <mic...@gm...> - 2011-05-27 08:19:31
|
Just to update you about JDBC status. Now you can use multiple INSERT queries with JDBC. Benny Wang has found a simple fix. -- Michael Paquier http://michael.otacoo.com |
From: Michael P. <mic...@gm...> - 2011-05-27 06:20:42
|
This may be a setting problem. Have you set up data_node_hosts and data_node_ports correctly on the Coordinator? If your coordinator cannot connect to a backend datanode, it may be this problem. Have you used the -i option on the datanode to be sure that it can accept remote TCP/IP connections and not only local ones? Perhaps it is a problem with pg_hba.conf. Have you set up this file correctly for both nodes? If a node complains about GTM not being accessible, you may also need to check that gtm_host and gtm_port are correctly set for both Coordinator and datanodes. You also have to take care of the node IDs used in each configuration file. Is pgxc_node_id set differently for your datanodes? On Fri, May 27, 2011 at 2:47 PM, Lionel Frachon <lio...@gm...> wrote: > Hi Michael, > > I believe this is the kind of query generated by BenchmarkSQL on the > "load data" stage; trying to load a 2-node cluster yesterday > (1gtm+coord+datanode, 1 datanode), only one is loaded, and the other > one complains saying "No GTM snapshot available" and thus not loading > anything. Table content is not accessible either from the second node. > I tried to switch the preferred_node and master node settings, to try > to force the writes on the second node first, but without positive result. > However, a standard psql query distributes the work properly when issued > through the coordinator. > > I'll try to revert today to the previous build (05/18) to see if > there's any behaviour difference. > > Any clue appreciated ^^ > > Lionel Frachon > > 2011/5/26 Michael Paquier <mic...@gm...>: > > Btw, if I were you I'd avoid using multiple-value INSERT with JDBC. > Those > > are queries like: > > INSERT INTO table_ex VALUES (1),(2),(3),(4); > > I noticed today during tests that it wasn't working properly. > > However, single-value INSERT and normal queries work fine. > > -- > > Michael Paquier > > http://michael.otacoo.com > > > -- Michael Paquier http://michael.otacoo.com |
From: Lionel F. <lio...@gm...> - 2011-05-27 05:47:44
|
Hi Michael, I believe this is the kind of query generated by BenchmarkSQL on the "load data" stage; trying to load a 2-node cluster yesterday (1gtm+coord+datanode, 1 datanode), only one is loaded, and the other one complains saying "No GTM snapshot available" and thus does not load anything. Table content is not accessible either from the second node. I tried to switch the preferred_node and master node settings, to try to force the writes on the second node first, but without positive result. However, a standard psql query distributes the work properly when issued through the coordinator. I'll try to revert today to the previous build (05/18) to see if there's any behaviour difference. Any clue appreciated ^^ Lionel Frachon 2011/5/26 Michael Paquier <mic...@gm...>: > Btw, if I were you I'd avoid using multiple-value INSERT with JDBC. Those > are queries like: > INSERT INTO table_ex VALUES (1),(2),(3),(4); > I noticed today during tests that it wasn't working properly. > However, single-value INSERT and normal queries work fine. > -- > Michael Paquier > http://michael.otacoo.com > |
From: Michael P. <mic...@gm...> - 2011-05-26 11:30:25
|
Btw, if I were you I'd avoid using multiple-value INSERT with JDBC. Those are queries like: INSERT INTO table_ex VALUES (1),(2),(3),(4); I noticed today during tests that it wasn't working properly. However, single-value INSERT and normal queries work fine. -- Michael Paquier http://michael.otacoo.com |
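As a concrete workaround for the limitation described above, a multi-value INSERT can simply be unrolled into single-value statements, which the message above says do work through JDBC:

```sql
-- Not reliable through JDBC at this point:
-- INSERT INTO table_ex VALUES (1),(2),(3),(4);

-- Workaround: issue one single-value INSERT per row instead.
INSERT INTO table_ex VALUES (1);
INSERT INTO table_ex VALUES (2);
INSERT INTO table_ex VALUES (3);
INSERT INTO table_ex VALUES (4);
```

For a bulk-load tool such as BenchmarkSQL this means configuring (or patching) the loader to emit per-row inserts or batched single-value statements rather than one multi-value statement.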
From: Lionel F. <lio...@gm...> - 2011-05-26 09:24:46
|
Hi Michael, You're fully right, I'm very happy with this ! Currently compiling to deploy today on test machines. Thanks for integrating this so quickly. Regards, Lionel F. 2011/5/26 Michael Paquier <mic...@gm...>: > Hi, > > I am sure you will be happy to know that JDBC problem has been fixed. > Creating a table or using DML with this driver should work correctly. > > Regards, > -- > Michael Paquier > http://michael.otacoo.com > |
From: Michael P. <mic...@gm...> - 2011-05-26 06:47:17
|
Hi, I am sure you will be happy to know that JDBC problem has been fixed. Creating a table or using DML with this driver should work correctly. Regards, -- Michael Paquier http://michael.otacoo.com |
From: Michael P. <mic...@gm...> - 2011-05-25 01:34:13
|
> > > Hi Folks, > > Is there already a Perl and Python module for Postgres-XC? If yes, where > could I download it? If not, how could I get Python/Perl functions to work? > Just one point of detail. Even though plpython and plperl functions have not been tested yet, I would suspect they work correctly as long as no more than one SQL query is used inside the function, as argument values are fetched the same way for plpgsql, plperl and plpython functions. -- Michael Paquier http://michael.otacoo.com |
From: Koichi S. <ko...@in...> - 2011-05-25 01:28:10
|
So far, we've not tested plpython and plperl. We're now working to support SQL functions and PL/pgSQL functions. In the current head, they work if they contain only one SQL statement. The development team is discussing how to support a multi-statement planner. Because XC's external interface is almost the same as original PG's, I expect they may work with the above restriction. It would be very helpful if you tried one of them. Best Regards; --- Koichi Suzuki On Tue, 24 May 2011 14:19:27 +0200 Eric Ndengang <eri...@af...> wrote: > Hi Folks, > Is there already a Perl and Python module for Postgres-XC? If yes, where > could I download it? If not, how could I get Python/Perl functions to work? > regards > > -- > Eric Ndengang > Datenbankadministrator > > Affinitas GmbH | Kohlfurter Straße 41/43 | 10999 Berlin | Germany > email: eri...@af... | tel: +49.(0)30. 991 949 5 0 | www.edarling.de > > Geschäftsführer: Lukas Brosseder, David Khalil, Kai Rieke, Christian Vollmann > Eingetragen beim Amtsgericht Berlin, HRB 115958 > > Real People: www.edarling.de/echte-paare > Real Love: www.youtube.de/edarling > Real Science: www.edarling.org > > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
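To illustrate the restriction described above, here is a minimal PL/pgSQL function containing exactly one SQL statement — the shape that should work on the current head according to this message. The table and column names come from the BenchmarkSQL schema discussed elsewhere in this thread; treat it as an untested sketch, not a verified XC example.

```sql
-- Single-statement PL/pgSQL function: the form expected to work on XC.
CREATE FUNCTION get_warehouse_name(wid integer) RETURNS varchar AS $$
BEGIN
    -- exactly one SQL statement inside the body
    RETURN (SELECT w_name FROM warehouse WHERE w_id = wid);
END;
$$ LANGUAGE plpgsql;
```

A body containing several SQL statements would run into the multi-statement planner limitation still under discussion; the same single-statement rule would presumably apply to a plpython or plperl function executing SQL via its SPI interface.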
From: Eric N. <eri...@af...> - 2011-05-24 12:19:37
|
Hi Folks, Is there already a Perl and Python module for Postgres-XC? If yes, where could I download it? If not, how could I get Python/Perl functions to work? Regards -- Eric Ndengang Datenbankadministrator Affinitas GmbH | Kohlfurter Straße 41/43 | 10999 Berlin | Germany email: eri...@af... | tel: +49.(0)30. 991 949 5 0 | www.edarling.de Geschäftsführer: Lukas Brosseder, David Khalil, Kai Rieke, Christian Vollmann Eingetragen beim Amtsgericht Berlin, HRB 115958 Real People: www.edarling.de/echte-paare Real Love: www.youtube.de/edarling Real Science: www.edarling.org |
From: Lionel F. <lio...@gm...> - 2011-05-19 17:36:37
|
Ok, thanks for the follow-up. Lionel F. Le 19 mai 2011 à 17:28, Michael Paquier <mic...@gm...> a écrit : I still need to review this patch. Once it is corrected and committed, feel free to use the head. On Thu, May 19, 2011 at 11:23 PM, Lionel Frachon <lio...@gm...>wrote: > Good news, as far as I see, the patch is working ok ! :) > > (Not reviewed the whole code to know if it meets the quality > criterias, but will keep it for now) > > Lionel F. > > > > 2011/5/19 Michael Paquier <mic...@gm...>: > > This is a PGXC-side problem for sure, JDBC may not be involved. > > The fact is that we are currently having problems with JDBC driver like > in > > bug 3299211 at table creation. > > The cause of that is we haven't done so many tests with it. > > > > Howver, we got a patch on the JDBC bug there. It has not yet been > reviewed: > > > http://sourceforge.net/tracker/?func=detail&aid=3299211&group_id=311227&atid=1310232 > > > > If you try to apply it, does your problem disappear? > > > > On Thu, May 19, 2011 at 5:53 PM, Lionel Frachon < > lio...@gm...> > > wrote: > >> > >> Hello, > >> > >> I'm trying to run BenchmarkSQL against a (for the moment) single node > >> on 0.9.4, however I face some connection issues. 
> >> > >> Running the create table script, here is what I get : > >> > >> ------------- ExecJDBC Start Date = Thu May 19 10:41:14 CEST > >> 2011------------- > >> driver=org.postgresql.Driver > >> conn=jdbc:postgresql://10.114.12.39:26001/testperfs > >> user=pgxc > >> password=****** > >> commandFile=sqlTableCreates > >> ------------------------------------------------- > >> > >> > >> > >> create table warehouse ( > >> w_id integer not null, > >> w_ytd decimal(12,2), > >> w_tax decimal(4,4), > >> w_name varchar(10), > >> w_street_1 varchar(20), > >> w_street_2 varchar(20), > >> w_city varchar(20), > >> w_state char(2), > >> w_zip char(9) > >> ); > >> -- SQL Success: Runtime = 33 ms -- > >> > >> commit; > >> -- SQL Runtime Exception ----------------------------------------- > >> DBMS SqlCode=0 DBMS Msg= > >> Une erreur d'entrée/sortie a eu lieu lors d'envoi vers le serveur. > >> ------------------------------------------------------------------ > >> > >> > >> > >> create table district ( > >> d_w_id integer not null, > >> d_id integer not null, > >> d_ytd decimal(12,2), > >> d_tax decimal(4,4), > >> d_next_o_id integer, > >> d_name varchar(10), > >> d_street_1 varchar(20), > >> d_street_2 varchar(20), > >> d_city varchar(20), > >> d_state char(2), > >> d_zip char(9) > >> ); > >> -- SQL Runtime Exception ----------------------------------------- > >> DBMS SqlCode=0 DBMS Msg= > >> This connection has been closed. > >> ------------------------------------------------------------------ > >> > >> and all the lasting table are not created, while the first one is. > >> > >> I updated the provided BenchmarkSQL jdbc driver (from > >> postgresql-8.0.309.jdbc3.jar to the latest > >> postgresql-9.0-801.jdbc3.jar) but no luck in this. > >> > >> What looks strange to me is > >> " > >> -- SQL Runtime Exception ----------------------------------------- > >> DBMS SqlCode=0 DBMS Msg= > >> Une erreur d'entrée/sortie a eu lieu lors d'envoi vers le serveur." 
> >> > >> (it means An I/O error occured while sending data to the server") but > >> the SqlCode is 0. I tried to remove the commit line, same effect. > >> > >> Is there any possibility to get debug info on this to know what's > >> happening, or have hints to the proper method (java libs on client > >> side, ....) > >> > >> Thanks for your help > >> > >> Regards. > >> > >> Lionel F. > >> > > > > > > -- > > Michael Paquier > > http://michael.otacoo.com > > > -- Michael Paquier http://michael.otacoo.com |
From: Michael P. <mic...@gm...> - 2011-05-19 15:28:25
|
I still need to review this patch. Once it is corrected and committed, feel free to use the head. On Thu, May 19, 2011 at 11:23 PM, Lionel Frachon <lio...@gm...>wrote: > Good news, as far as I see, the patch is working ok ! :) > > (Not reviewed the whole code to know if it meets the quality > criterias, but will keep it for now) > > Lionel F. > > > > 2011/5/19 Michael Paquier <mic...@gm...>: > > This is a PGXC-side problem for sure, JDBC may not be involved. > > The fact is that we are currently having problems with JDBC driver like > in > > bug 3299211 at table creation. > > The cause of that is we haven't done so many tests with it. > > > > Howver, we got a patch on the JDBC bug there. It has not yet been > reviewed: > > > http://sourceforge.net/tracker/?func=detail&aid=3299211&group_id=311227&atid=1310232 > > > > If you try to apply it, does your problem disappear? > > > > On Thu, May 19, 2011 at 5:53 PM, Lionel Frachon < > lio...@gm...> > > wrote: > >> > >> Hello, > >> > >> I'm trying to run BenchmarkSQL against a (for the moment) single node > >> on 0.9.4, however I face some connection issues. 
> >> > >> Running the create table script, here is what I get : > >> > >> ------------- ExecJDBC Start Date = Thu May 19 10:41:14 CEST > >> 2011------------- > >> driver=org.postgresql.Driver > >> conn=jdbc:postgresql://10.114.12.39:26001/testperfs > >> user=pgxc > >> password=****** > >> commandFile=sqlTableCreates > >> ------------------------------------------------- > >> > >> > >> > >> create table warehouse ( > >> w_id integer not null, > >> w_ytd decimal(12,2), > >> w_tax decimal(4,4), > >> w_name varchar(10), > >> w_street_1 varchar(20), > >> w_street_2 varchar(20), > >> w_city varchar(20), > >> w_state char(2), > >> w_zip char(9) > >> ); > >> -- SQL Success: Runtime = 33 ms -- > >> > >> commit; > >> -- SQL Runtime Exception ----------------------------------------- > >> DBMS SqlCode=0 DBMS Msg= > >> Une erreur d'entrée/sortie a eu lieu lors d'envoi vers le serveur. > >> ------------------------------------------------------------------ > >> > >> > >> > >> create table district ( > >> d_w_id integer not null, > >> d_id integer not null, > >> d_ytd decimal(12,2), > >> d_tax decimal(4,4), > >> d_next_o_id integer, > >> d_name varchar(10), > >> d_street_1 varchar(20), > >> d_street_2 varchar(20), > >> d_city varchar(20), > >> d_state char(2), > >> d_zip char(9) > >> ); > >> -- SQL Runtime Exception ----------------------------------------- > >> DBMS SqlCode=0 DBMS Msg= > >> This connection has been closed. > >> ------------------------------------------------------------------ > >> > >> and all the lasting table are not created, while the first one is. > >> > >> I updated the provided BenchmarkSQL jdbc driver (from > >> postgresql-8.0.309.jdbc3.jar to the latest > >> postgresql-9.0-801.jdbc3.jar) but no luck in this. > >> > >> What looks strange to me is > >> " > >> -- SQL Runtime Exception ----------------------------------------- > >> DBMS SqlCode=0 DBMS Msg= > >> Une erreur d'entrée/sortie a eu lieu lors d'envoi vers le serveur." 
> >> > >> (it means An I/O error occured while sending data to the server") but > >> the SqlCode is 0. I tried to remove the commit line, same effect. > >> > >> Is there any possibility to get debug info on this to know what's > >> happening, or have hints to the proper method (java libs on client > >> side, ....) > >> > >> Thanks for your help > >> > >> Regards. > >> > >> Lionel F. > >> > > > > > > -- > > Michael Paquier > > http://michael.otacoo.com > > > -- Michael Paquier http://michael.otacoo.com |
From: Lionel F. <lio...@gm...> - 2011-05-19 14:23:56
|
Good news, as far as I see, the patch is working ok ! :) (Not reviewed the whole code to know if it meets the quality criterias, but will keep it for now) Lionel F. 2011/5/19 Michael Paquier <mic...@gm...>: > This is a PGXC-side problem for sure, JDBC may not be involved. > The fact is that we are currently having problems with JDBC driver like in > bug 3299211 at table creation. > The cause of that is we haven't done so many tests with it. > > Howver, we got a patch on the JDBC bug there. It has not yet been reviewed: > http://sourceforge.net/tracker/?func=detail&aid=3299211&group_id=311227&atid=1310232 > > If you try to apply it, does your problem disappear? > > On Thu, May 19, 2011 at 5:53 PM, Lionel Frachon <lio...@gm...> > wrote: >> >> Hello, >> >> I'm trying to run BenchmarkSQL against a (for the moment) single node >> on 0.9.4, however I face some connection issues. >> >> Running the create table script, here is what I get : >> >> ------------- ExecJDBC Start Date = Thu May 19 10:41:14 CEST >> 2011------------- >> driver=org.postgresql.Driver >> conn=jdbc:postgresql://10.114.12.39:26001/testperfs >> user=pgxc >> password=****** >> commandFile=sqlTableCreates >> ------------------------------------------------- >> >> >> >> create table warehouse ( >> w_id integer not null, >> w_ytd decimal(12,2), >> w_tax decimal(4,4), >> w_name varchar(10), >> w_street_1 varchar(20), >> w_street_2 varchar(20), >> w_city varchar(20), >> w_state char(2), >> w_zip char(9) >> ); >> -- SQL Success: Runtime = 33 ms -- >> >> commit; >> -- SQL Runtime Exception ----------------------------------------- >> DBMS SqlCode=0 DBMS Msg= >> Une erreur d'entrée/sortie a eu lieu lors d'envoi vers le serveur. 
>> ------------------------------------------------------------------ >> >> >> >> create table district ( >> d_w_id integer not null, >> d_id integer not null, >> d_ytd decimal(12,2), >> d_tax decimal(4,4), >> d_next_o_id integer, >> d_name varchar(10), >> d_street_1 varchar(20), >> d_street_2 varchar(20), >> d_city varchar(20), >> d_state char(2), >> d_zip char(9) >> ); >> -- SQL Runtime Exception ----------------------------------------- >> DBMS SqlCode=0 DBMS Msg= >> This connection has been closed. >> ------------------------------------------------------------------ >> >> and all the lasting table are not created, while the first one is. >> >> I updated the provided BenchmarkSQL jdbc driver (from >> postgresql-8.0.309.jdbc3.jar to the latest >> postgresql-9.0-801.jdbc3.jar) but no luck in this. >> >> What looks strange to me is >> " >> -- SQL Runtime Exception ----------------------------------------- >> DBMS SqlCode=0 DBMS Msg= >> Une erreur d'entrée/sortie a eu lieu lors d'envoi vers le serveur." >> >> (it means An I/O error occured while sending data to the server") but >> the SqlCode is 0. I tried to remove the commit line, same effect. >> >> Is there any possibility to get debug info on this to know what's >> happening, or have hints to the proper method (java libs on client >> side, ....) >> >> Thanks for your help >> >> Regards. >> >> Lionel F. >> > > > -- > Michael Paquier > http://michael.otacoo.com > |
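Regarding the request above for debug information on the server side: Postgres-XC inherits standard PostgreSQL logging, so the usual settings in the coordinator's postgresql.conf should apply. This is a generic-PostgreSQL sketch, not XC-specific advice:

```conf
# coordinator postgresql.conf -- raise logging to trace the failing JDBC session
log_min_messages   = debug1   # more verbose server-side messages
log_statement      = 'all'    # log every statement received from the client
log_connections    = on
log_disconnections = on       # shows when (and with what result) the session drops
```

With `log_statement = 'all'` and disconnection logging enabled, the coordinator log should show which statement was in flight when the JDBC connection reported "This connection has been closed."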