From: Sandeep G. <gup...@gm...> - 2013-10-03 20:34:08
|
Hi,

The setup has two computers. The configuration for gtm, gtm_proxy, datanode, and coordinator all have listen_addresses = '*'. Here is the info about the setup:

postgres=# select * from pgxc_node;
   node_name    | node_type | node_port | node_host | nodeis_primary | nodeis_preferred |   node_id
----------------+-----------+-----------+-----------+----------------+------------------+-------------
 coord1         | C         |      5432 | localhost | f              | f                |  1885696643
 datanode_c1_d1 | D         |     45421 | sfx057    | f              | f                | -1199687708
 datanode_c2_d1 | D         |     45421 | sfx050    | f              | f                |  -294121722
(3 rows)

postgres=# select pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

However, when I create a table I get:

ERROR: Failed to get pooled connections

Is there anything else apart from listen_addresses that I am missing? Please let me know.

-Sandeep |
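[Editorial note, not part of the original message: "Failed to get pooled connections" is raised when the coordinator's pooler cannot open backend connections to the datanodes, and listen_addresses alone does not grant access — each datanode's pg_hba.conf must also permit connections from the coordinator's host. A minimal sketch; the subnet and user below are placeholder assumptions, not values from the thread:]

```conf
# pg_hba.conf on each datanode (sfx057, sfx050); the ADDRESS below is a
# hypothetical subnet -- substitute the coordinator host's actual address.
# TYPE  DATABASE  USER      ADDRESS          METHOD
host    all       postgres  192.168.0.0/24   trust
```

[A quick way to check reachability from the coordinator's machine is a direct connection attempt, e.g. `psql -h sfx057 -p 45421 -U postgres -c 'select 1'`, using the host/port values from the pgxc_node table above.]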
From: Anson A. <ans...@gm...> - 2013-10-03 19:48:38
|
So I migrated a PostgreSQL 9.1 database to Postgres-XC 1.1, which essentially required applying DISTRIBUTE BY REPLICATION to most of the tables. After doing so, the app threw this error:

Unable to upgrade schema to latest version.
org.hibernate.exception.GenericJDBCException: ResultSet not positioned properly, perhaps you need to call next.
	at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
	at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
	at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
	at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:108)
	at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
	at $Proxy10.getInt(Unknown Source)
	at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:212)
	at com.cloudera.enterprise.dbutil.DbUtil$1SchemaVersionWork.execute(DbUtil.java:159)
	at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:54)
	at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1937)
	at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1934)
	at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:211)
	at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1955)
	at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1941)
	at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:171)
	at com.cloudera.enterprise.dbutil.DbUtil.upgradeSchema(DbUtil.java:333)
	at com.cloudera.cmon.FhDatabaseManager.initialize(FhDatabaseManager.java:68)
	at com.cloudera.cmon.firehose.Main.main(Main.java:339)
Caused by: org.postgresql.util.PSQLException: ResultSet not positioned properly, perhaps you need to call next.
	at org.postgresql.jdbc2.AbstractJdbc2ResultSet.checkResultSet(AbstractJdbc2ResultSet.java:2695)
	at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:1992)
	at com.mchange.v2.c3p0.impl.NewProxyResultSet.getInt(NewProxyResultSet.java:2547)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:104)
	...

So I'm assuming Hibernate does not support DISTRIBUTE BY REPLICATION? Is that so, or did I not need to apply it, even though the table has a PK that is also referenced as an FK from another table? If not, is there a workaround that does not require recompiling the Hibernate objects? |
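[Editorial note, not part of the original message: for readers unfamiliar with the two distribution options being weighed here, a hedged sketch follows. The table and column names are invented for illustration, and whether XC 1.1 accepts any particular PK/FK combination under each distribution is a question for the list, not something this sketch settles:]

```sql
-- Hypothetical tables illustrating the two distribution choices.
CREATE TABLE schema_version (
    id      integer PRIMARY KEY,
    version integer NOT NULL
) DISTRIBUTE BY REPLICATION;   -- full copy of the table on every datanode

CREATE TABLE config_item (
    id   integer PRIMARY KEY,
    name text
) DISTRIBUTE BY HASH (id);     -- rows spread across datanodes by id
```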
From: Hector M. J. <hec...@et...> - 2013-10-03 16:21:42
|
Hi all,

First of all, I would like to thank Mr. Koichi Suzuki for his comments on my previous post. He was right about the max_connections parameter: in the new configuration each datanode has a maximum of 800 connections (three coordinators with 200 concurrent connections each, plus 200 extra).

Regarding the suggested stress tool (dbt-1), I am studying its characteristics and the details of its installation and use. When I get my first results with it, I will publish them in this forum. I still have details to resolve, such as using the gtmPxyExtraConfig parameter of pgxc_ctl.conf to include settings (worker_threads = xx, for example) in the gtm_proxy configuration. This was one of the problems detected during the initial deployment with the pgxc_ctl tool.

This is my second post about my impressions of pgxc-1.1. In the previous post I described the scenario we want to build, as well as the configuration needed to reach an acceptable state of functionality and operability. Once we reached that point, we designed and executed a set of tests (based on pgbench) to measure the performance of our installation and to know when we had reached our goals: 300 (or more) concurrent connections and an increased number of transactional operations. The modifications were made mainly to the datanode configuration.
These changes were implemented through the datanodeExtraConfig parameter (pgxc_ctl.conf file) and were as follows:

# ================================================
# Added to all the DataNode postgresql.conf
# Original: datanodeExtraConfig
log_destination = 'syslog'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 800
work_mem = 100MB
fsync = off
shared_buffers = 5GB
wal_buffers = 1MB
checkpoint_timeout = 5min
effective_cache_size = 12GB
checkpoint_segments = 64
checkpoint_completion_target = 0.9
maintenance_work_mem = 4GB
max_prepared_transactions = 800
synchronous_commit = off

These modifications gave us results two to three times better (in some cases, more than three times) than the initial result set, in both number of transactions and tps. Our other goal, more than 300 concurrent connections, was also reached: we measured up to 355 connections. During the tests we also measured CPU and memory consumption on each component of the cluster (dn01, dn02, dn03, gtm).
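[Editorial note, not part of the original message: a sketch of how the extra-config mechanism mentioned above is wired up in pgxc_ctl.conf. The file paths are placeholder assumptions; the parameter names (datanodeExtraConfig, gtmPxyExtraConfig) are the ones named in this post:]

```conf
# pgxc_ctl.conf sketch: point datanodeExtraConfig at a file whose
# contents are appended to every datanode's postgresql.conf.
datanodeExtraConfig=/home/postgres/pgxc/datanodeExtra.conf
# Similarly, gtmPxyExtraConfig can carry gtm_proxy settings such as
# worker_threads, mentioned above as an open issue.
gtmPxyExtraConfig=/home/postgres/pgxc/gtmPxyExtra.conf
```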
For these measurements we used the sar command with the parameters:

-u  Report CPU utilization
-r  Report memory utilization statistics

200 iterations at an interval of 5 seconds (14 tests of 60 seconds each, sampled every 5 seconds, is roughly 168 samples). This command was executed on each server before launching the tests:

sar -u -r 5 200

The averages obtained per server:

dn01:
Average: CPU     %user  %nice  %system  %iowait  %steal  %idle
Average: all     15.79   0.00    18.20     0.44    0.00  65.57
Average: kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:  12003982    4327934     26.50      44163   1766752   7616567    37.35

dn02:
Average: CPU     %user  %nice  %system  %iowait  %steal  %idle
Average: all     14.89   0.00    17.37     0.11    0.00  67.62
Average: kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:  12097661    4234255     25.93      42716   1725394   7609960    37.31

dn03:
Average: CPU     %user  %nice  %system  %iowait  %steal  %idle
Average: all     16.67   0.00    19.59     0.57    0.00  63.17
Average: kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:  11603908    4728008     28.95      42955   1708146   7609769    37.31

gtm:
Average: CPU     %user  %nice  %system  %iowait  %steal  %idle
Average: all      8.54   0.00    24.80     0.12    0.00  66.54
Average: kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:   3553938     370626      9.44      42358    120419    723856     9.06

The results obtained in each of the tests:

[root@rhelclient ~]# pgbench -c 16 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 23680
tps = 394.458636 (including connections establishing)
tps = 394.637063 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 16 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 108929
tps = 1815.247714 (including connections establishing)
tps = 1815.947505 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 16 -j 16 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 23953
tps = 399.034541 (including connections establishing)
tps = 399.120451 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 16 -j 16 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 127142
tps = 2118.825088 (including connections establishing)
tps = 2119.318006 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 96 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 95644
tps = 1592.722011 (including connections establishing)
tps = 1595.906611 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 96 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 580728
tps = 9675.754717 (including connections establishing)
tps = 9695.954649 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 64 -j 32 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 72239
tps = 1183.511659 (including connections establishing)
tps = 1184.529232 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 64 -j 32 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 388861
tps = 6479.326642 (including connections establishing)
tps = 6482.532350 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 64 -j 64 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 61663
tps = 1026.636406 (including connections establishing)
tps = 1027.679280 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 64 -j 64 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 369321
tps = 6151.931064 (including connections establishing)
tps = 6155.611035 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 104 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 80479
tps = 1337.396423 (including connections establishing)
tps = 1347.248687 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 104 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 587109
tps = 9782.401960 (including connections establishing)
tps = 9805.111450 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 300 -j 10 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 300
number of threads: 10
duration: 60 s
number of transactions actually processed: 171351
tps = 2849.021939 (including connections establishing)
tps = 2869.345032 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 300 -j 10 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 300
number of threads: 10
duration: 60 s
number of transactions actually processed: 1177464
tps = 19613.592584 (including connections establishing)
tps = 19716.537285 (excluding connections establishing)

[root@rhelclient ~]#

Our new goal is to add high-availability characteristics to our pgxc cluster by adding slave servers for the datanodes (and perhaps for the coordinator servers). Once we have that environment, we will run the same set of tests again, plus new ones, such as fault-recovery tests simulating the loss of cluster components.

Ok, that's all for now. Thank you very much. This is a great project and I'm very glad I found it.

Thanks again,
Hector M. Jacas |
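[Editorial note, not part of the original message: when comparing many pgbench runs like the ones above, it can help to pull the tps figures out of saved output mechanically. A minimal sketch; the function name is invented, and it assumes each run's output was captured to a file in the format shown above:]

```shell
# extract_tps: print the tps value from a saved pgbench output file,
# taken from the "tps = N (excluding connections establishing)" line.
extract_tps() {
  grep 'excluding connections' "$1" | awk '{ print $3 }'
}
```

[For example, `extract_tps run1.log` on a file holding the first result above would print 394.637063.]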
From: Koichi S. <koi...@gm...> - 2013-10-03 01:31:26
|
You cannot add a coordinator in that way; there are many issues to be resolved internally. You can configure and operate the whole cluster with pgxc_ctl, which gives you a handy way to add a coordinator/datanode. I understand you have your cluster configured without pgxc_ctl. In this case, adding a coordinator manually can be a bit complicated. Sorry, I have not yet uploaded the detailed steps for doing it. The whole procedure can be found in the add_coordinatorMaster() function defined in coord_cmd.c of the pgxc_ctl source code, located at contrib/pgxc_ctl in the release material. Please allow me some time to upload this information to the XC wiki.

Alternatively, you can back up the whole database with pg_dumpall, reconfigure a new XC cluster with the additional coordinator, and then restore the backup.

Regards;
---
Koichi Suzuki

2013/10/3 Julian <jul...@gm...>
> Dear Sir,
>
> I have a cluster with 3 coordinators and 3 datanodes on 3 VMs. Today I
> tried to add a new coordinator to the cluster. When I used the command
> "psql postgres -f coordinator.sql -p 5455" to restore the backup file to
> the new coordinator, I got this message:
>
> psql:coordinator-dump.sql:105: connection to server was lost
>
> In the log file:
>
> ----------------------------------------------------------------------
> 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,2,,2013-10-02
> 22:59:42 CST,,0,LOG,00000,"server process (PID 29094) was terminated by
> signal 11: Segmentation fault","Failed process was running: CREATE TABLE user_info_hash (
>     id integer NOT NULL,
>     firstname text,
>     lastname text,
>     info text
> )
> DISTRIBUTE BY HASH (id)
> TO NODE (dn2,dn1,dn3);",,,,,,,,""
> 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,3,,2013-10-02
> 22:59:42 CST,,0,LOG,00000,"terminating any other active server
> processes",,,,,,,,,""
> ----------------------------------------------------------------------
>
> I was referring to
> http://postgres-xc.sourceforge.net/docs/1_1/add-node-coordinator.html
>
> Is there something I am doing wrong?
>
> Thanks for your kind reply, and sorry for my poor English.
>
> Best regards.
>
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
> |