From: Mason S. <ma...@st...> - 2013-10-04 00:49:09
On Thu, Oct 3, 2013 at 1:34 PM, Sandeep Gupta <gup...@gm...> wrote:
> However, when I create a table I get
>
> ERROR: Failed to get pooled connections
>
> Is there anything else apart from listen_addresses that I am missing?

Please check if you have a firewall running, and open up the necessary
ports. Also, edit pg_hba.conf to allow connections from the coordinator
to the data nodes.

--
Mason Sharp
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Services
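As an illustration of Mason's two suggestions, run on each datanode host; this is a sketch, the iptables rule and the subnet are assumptions to adapt (the 45421 port comes from the pgxc_node listing in the next message), and "trust" is for testing only:

# open the datanode port through a running iptables firewall
iptables -I INPUT -p tcp --dport 45421 -j ACCEPT
# let the coordinator's network through pg_hba.conf (subnet is a placeholder)
echo "host all all 192.168.0.0/16 trust" >> $PGDATA/pg_hba.conf
pg_ctl reload -D $PGDATA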
From: Sandeep G. <gup...@gm...> - 2013-10-03 20:34:08
Hi,

The setup has two computers. The configurations for gtm, gtm_proxy, datanode,
and coordinator all have listen_addresses='*'.

Here is the info about the setup:

postgres=# select * from pgxc_node;
   node_name    | node_type | node_port | node_host | nodeis_primary | nodeis_preferred |   node_id
----------------+-----------+-----------+-----------+----------------+------------------+-------------
 coord1         | C         |      5432 | localhost | f              | f                |  1885696643
 datanode_c1_d1 | D         |     45421 | sfx057    | f              | f                | -1199687708
 datanode_c2_d1 | D         |     45421 | sfx050    | f              | f                |  -294121722
(3 rows)

select pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

However, when I create a table I get:

ERROR: Failed to get pooled connections

Is there anything else apart from listen_addresses that I am missing?
Please let me know.

-Sandeep
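A quick way to narrow this down is to connect to each datanode directly from the coordinator host; a sketch using the hosts and port from the pgxc_node listing above (the database name is assumed):

psql -h sfx057 -p 45421 -U postgres -c "SELECT 1" postgres
psql -h sfx050 -p 45421 -U postgres -c "SELECT 1" postgres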
From: Anson A. <ans...@gm...> - 2013-10-03 19:48:38
So I migrated a pg database (9.1) to postgres-xc 1.1, which required me to
apply DISTRIBUTE BY REPLICATION to most of the tables. But after doing so,
the app threw this error:

Unable to upgrade schema to latest version.
org.hibernate.exception.GenericJDBCException: ResultSet not positioned properly, perhaps you need to call next.
        at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
        at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
        at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
        at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:108)
        at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
        at $Proxy10.getInt(Unknown Source)
        at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:212)
        at com.cloudera.enterprise.dbutil.DbUtil$1SchemaVersionWork.execute(DbUtil.java:159)
        at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:54)
        at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1937)
        at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1934)
        at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:211)
        at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1955)
        at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1941)
        at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:171)
        at com.cloudera.enterprise.dbutil.DbUtil.upgradeSchema(DbUtil.java:333)
        at com.cloudera.cmon.FhDatabaseManager.initialize(FhDatabaseManager.java:68)
        at com.cloudera.cmon.firehose.Main.main(Main.java:339)
Caused by: org.postgresql.util.PSQLException: ResultSet not positioned properly, perhaps you need to call next.
        at org.postgresql.jdbc2.AbstractJdbc2ResultSet.checkResultSet(AbstractJdbc2ResultSet.java:2695)
        at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:1992)
        at com.mchange.v2.c3p0.impl.NewProxyResultSet.getInt(NewProxyResultSet.java:2547)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:104)
        ...

So I'm assuming Hibernate does not support DISTRIBUTE BY REPLICATION? Is
that so, or did I not need to apply DISTRIBUTE BY REPLICATION at all, even
though the table has a PK that is also referenced as an FK from another
table? If not, is there a hack to get around this without having to
recompile the Hibernate objects?
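For reference, a sketch of the two distribution options being weighed here; the table name and connection options are hypothetical, and the note about unique constraints reflects my understanding of XC's restriction rather than anything verified in this thread:

# replicated: every datanode holds a full copy, so the PK/FK pair keeps working
psql -c "CREATE TABLE parent (id integer PRIMARY KEY) DISTRIBUTE BY REPLICATION;"
# hash-distributed alternative: XC can enforce the PK only when it contains the distribution column
psql -c "CREATE TABLE parent (id integer PRIMARY KEY) DISTRIBUTE BY HASH (id);"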
From: Hector M. J. <hec...@et...> - 2013-10-03 16:21:42
Hi all,
First of all, I would like to thank Mr. Koichi Suzuki for his comments
on my previous post. He was right about the max_connections parameter:
in the new configuration each datanode allows a maximum of 800
connections (three coordinators with 200 concurrent connections each,
plus 200 extra).
Regarding the suggested stress tool (DBT-1), I am studying its
characteristics and the particulars of its installation and use. When I
get my first results with it, I will publish them in this forum.
I still have details to resolve, such as the use of the
gtmPxyExtraConfig parameter of pgxc_ctl.conf to include settings (such
as worker_threads = xx) in the gtm_proxy configuration. This was one of
the problems detected during the initial deployment with the pgxc_ctl
tool.
This is my second post about my impressions of pgxc-1.1.
In the previous post I described the scenario we want to build, as well
as the configuration needed to reach an acceptable state of
functionality and operability.
Once we reached that point, we designed and executed a set of tests
(based on pgbench) to measure the performance of our installation and
to know when we had reached our goals: 300 (or more) concurrent
connections and an increased number of transactional operations.
The modifications were made mainly to the datanode configuration. These
changes were applied through the datanodeExtraConfig parameter
(pgxc_ctl.conf file) and were as follows:
# ================================================
# Added to all the DataNode postgresql.conf
# Original: datanodeExtraConfig
log_destination = 'syslog'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 800
work_mem = 100MB
fsync = off
shared_buffers = 5GB
wal_buffers = 1MB
checkpoint_timeout = 5min
effective_cache_size = 12GB
checkpoint_segments = 64
checkpoint_completion_target = 0.9
maintenance_work_mem = 4GB
max_prepared_transactions = 800
synchronous_commit = off
These modifications allowed us to obtain results between two and three
times better (in some cases, more than three times) than the initial
result set (number of transactions and tps).
Our other goal, more than 300 concurrent connections, was also reached:
we measured up to 355 connections.
During the course of the tests, we measured the consumption of CPU and
memory resources on each of the components of the cluster (dn01, dn02,
dn03, gtm).
For these measurements we used the sar command with these parameters:
-r  Report memory utilization statistics
-u  Report CPU utilization
200 iterations at an interval of 5 seconds (14 tests with a duration of
60 seconds each, divided by 5 seconds, is roughly 170 iterations).
This command was executed on each server before launching the tests:
sar -u -r 5 200
The averages obtained per server:
dn01:
Average:        CPU     %user   %nice  %system  %iowait   %steal    %idle
Average:        all     15.79    0.00    18.20     0.44     0.00    65.57
Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:   12003982    4327934     26.50      44163   1766752   7616567    37.35
dn02:
Average:        CPU     %user   %nice  %system  %iowait   %steal    %idle
Average:        all     14.89    0.00    17.37     0.11     0.00    67.62
Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:   12097661    4234255     25.93      42716   1725394   7609960    37.31
dn03:
Average:        CPU     %user   %nice  %system  %iowait   %steal    %idle
Average:        all     16.67    0.00    19.59     0.57     0.00    63.17
Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:   11603908    4728008     28.95      42955   1708146   7609769    37.31
gtm:
Average:        CPU     %user   %nice  %system  %iowait   %steal    %idle
Average:        all      8.54    0.00    24.80     0.12     0.00    66.54
Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:    3553938     370626      9.44      42358    120419    723856     9.06
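As an aside, the same sampling can be started on every node at once; a sketch assuming passwordless ssh as the postgres user, with the host names used above:

for h in dn01 dn02 dn03 gtm; do
    ssh $h "sar -u -r 5 200 > /tmp/sar_$h.log" &   # one sampler per node, in the background
done
wait   # block until all 200 iterations finish everywhere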
The results obtained in each of the tests:
[root@rhelclient ~]# pgbench -c 16 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 23680
tps = 394.458636 (including connections establishing)
tps = 394.637063 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 16 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 108929
tps = 1815.247714 (including connections establishing)
tps = 1815.947505 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 16 -j 16 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 23953
tps = 399.034541 (including connections establishing)
tps = 399.120451 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 16 -j 16 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 127142
tps = 2118.825088 (including connections establishing)
tps = 2119.318006 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 96 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 95644
tps = 1592.722011 (including connections establishing)
tps = 1595.906611 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 96 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 580728
tps = 9675.754717 (including connections establishing)
tps = 9695.954649 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 64 -j 32 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 72239
tps = 1183.511659 (including connections establishing)
tps = 1184.529232 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 64 -j 32 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 388861
tps = 6479.326642 (including connections establishing)
tps = 6482.532350 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 64 -j 64 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 61663
tps = 1026.636406 (including connections establishing)
tps = 1027.679280 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 64 -j 64 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 369321
tps = 6151.931064 (including connections establishing)
tps = 6155.611035 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 104 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 80479
tps = 1337.396423 (including connections establishing)
tps = 1347.248687 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 104 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 587109
tps = 9782.401960 (including connections establishing)
tps = 9805.111450 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 300 -j 10 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 300
number of threads: 10
duration: 60 s
number of transactions actually processed: 171351
tps = 2849.021939 (including connections establishing)
tps = 2869.345032 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 300 -j 10 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 300
number of threads: 10
duration: 60 s
number of transactions actually processed: 1177464
tps = 19613.592584 (including connections establishing)
tps = 19716.537285 (excluding connections establishing)
[root@rhelclient ~]#
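For completeness, the whole matrix above can be driven by a small loop; this is a sketch, with the host, user, and -c/-j pairs taken from the runs shown:

for args in "-c 16 -j 8" "-c 16 -j 16" "-c 96 -j 8" "-c 64 -j 32" "-c 64 -j 64" "-c 104 -j 8" "-c 300 -j 10"; do
    # TPC-B (sort of) run, then the SELECT-only (-S) variant
    pgbench $args -T 60 -h 192.168.97.44 -U postgres pgbench
    pgbench -S $args -T 60 -h 192.168.97.44 -U postgres pgbench
done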
Our new goal is to give our pgxc cluster high-availability
characteristics by adding slave servers for the datanodes (and perhaps
for the coordinator servers as well).
When we have that environment, we will run the same set of tests again,
plus new ones, such as fault-recovery tests simulating the loss of
cluster components, etc.
Ok, that's all for now.
Thank you very much. This is a great project and I'm very glad I found it.
Thanks again,
Hector M. Jacas
From: Koichi S. <koi...@gm...> - 2013-10-03 01:31:26
You cannot add a coordinator in such a way. There are many issues to be
resolved internally. You can configure and operate the whole cluster
with pgxc_ctl to get a handy way to add a coordinator/datanode.

I understand you have your cluster configured without pgxc_ctl. In this
case, adding a coordinator manually could be a bit complicated. Sorry,
I've not yet uploaded the detailed steps for doing it. The whole
procedure can be found in the add_coordinatorMaster() function defined
in coord_cmd.c of the pgxc_ctl source code, which is located at
contrib/pgxc_ctl in the release material. Please allow a bit of time for
me to find the time to upload this information to the XC wiki.

Or, you can back up the whole database with pg_dumpall, then reconfigure
a new XC cluster with the additional coordinator, and then restore the
backup.

Regards;
---
Koichi Suzuki

2013/10/3 Julian <jul...@gm...>
> I have a cluster with 3 coordinators and 3 datanodes on 3 VM, today i
> was try to added a new coordinator to the cluster, when i using command
> "psql postgres -f coordinator.sql -p 5455" to restore the backup file to
> the new coordinator. [...]
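A sketch of the pg_dumpall route Koichi describes; the host names and dump file name are placeholders:

# dump everything through one of the existing coordinators
pg_dumpall -h old_coord -p 5432 -U postgres > full_backup.sql
# reconfigure a new cluster that includes the extra coordinator (e.g. pgxc_ctl "init all"),
# then restore through any coordinator
psql -h new_coord -p 5432 -U postgres -f full_backup.sql postgres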
From: Julian <jul...@gm...> - 2013-10-02 15:53:22
Dear Sir,

I have a cluster with 3 coordinators and 3 datanodes on 3 VMs. Today I
tried to add a new coordinator to the cluster. When I used the command
"psql postgres -f coordinator.sql -p 5455" to restore the backup file to
the new coordinator, I got this message:

psql:coordinator-dump.sql:105: connection to server was lost

In the log file:
----------------------------------------------------------------------------------------------------------------------------------------------------------------
2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,2,,2013-10-02
22:59:42 CST,,0,LOG,00000,"server process (PID 29094) was terminated by
signal 11: Segmentation fault","Failed process was r
unning: CREATE TABLE user_info_hash (
id integer NOT NULL,
firstname text,
lastname text,
info text
)
DISTRIBUTE BY HASH (id)
TO NODE (dn2,dn1,dn3);",,,,,,,,""
2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,3,,2013-10-02
22:59:42 CST,,0,LOG,00000,"terminating any other active server
processes",,,,,,,,,""
------------------------------------------------------------------------------------------------------------------------------------------------------------------
I was referring to
http://postgres-xc.sourceforge.net/docs/1_1/add-node-coordinator.html

Is there something I am doing wrong?

Thanks for your kind reply, and sorry for my poor English.

Best regards.
From: Koichi S. <koi...@gm...> - 2013-10-02 03:01:33
This is a quick comment on your configuration.

1. It's better to configure a gtm_proxy for each VM. gtm_proxy groups
requests/responses between the coordinators/datanodes and the gtm to
reduce the network workload.

2. You may not need such a large max_connections on each datanode. Your
max connections to each coordinator is 200, so the max connections to
each datanode will be 200 x 4 = 800. Just in case, I suppose 1000 is
sufficient.

3. Unfortunately, pgbench is not fully tuned for XC. If you have a
specific application in mind, you may want to run it. Or, you can try
the DBT-1 benchmark, which is better tuned for XC, though its
installation will be a bit complicated. It is available from the
Postgres-XC development page, https://sourceforge.net/projects/postgres-xc/ ;
you will find the dbt-1 repo at the git tag of the page.

Good luck.
---
Koichi Suzuki

2013/10/2 Hector M. Jacas <hec...@et...>
> Below I discuss the first results of the tests performed with the
> official version 1.1 of pgxc. [...]
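To make the sizing in point 2 concrete, a sketch of the resulting datanode settings; the 200 x 4 = 800 figure is Koichi's, and keeping max_prepared_transactions in step with it is an assumption:

# 4 coordinators x 200 pooled connections = 800; 1000 leaves headroom
max_connections = 1000
max_prepared_transactions = 1000   # assumption: sized together with max_connections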
From: Koichi S. <koi...@gm...> - 2013-10-02 01:56:12
The PostgreSQL community adopted the commit fest to keep their release
cycle as predictable as possible, as a lesson learned from the release
8.3 work. Yes, as Michael mentioned, I think it may take a bit more time
before a commit fest is necessary for XC.

Anybody can submit a patch to the XC community through the developers
mailing list, where it will be reviewed by some of the community
members. The process is now very flexible. Please visit the XC web page,
postgres-xc.sourceforge.net, for information.

Regards;
---
Koichi Suzuki

2013/10/2 Michael Paquier <mic...@gm...>
> I honestly don't think that XC, which has a more limited community
> than Postgres, is at a state where it would need a commit fest
> application. [...]
From: Michael P. <mic...@gm...> - 2013-10-01 23:53:58
On Wed, Oct 2, 2013 at 1:19 AM, Aris Setyawan <ari...@gm...> wrote:
> 1. Does XC need a commit-fest app?
I honestly don't think that XC, which has a more limited community
than Postgres, is at a state where it would need a commit fest
application. We also do not have a website provider that could host it.

> 2. Is the PostgreSQL commit-fest app available as open source?
Not sure about that. The code of postgresql.org is for example released here:
https://github.com/postgres/pgweb
I am pretty sure that if it is available it would be under the
PostgreSQL license. If you cannot google it, why not contact those
guys directly?

> 3. How does the commit fest work? E.g.: how to submit a patch, reviewing, committing.
Have a look here:
https://wiki.postgresql.org/wiki/Submitting_a_Patch
https://wiki.postgresql.org/wiki/CommitFest

Regards,
--
Michael
From: Hector M. J. <hec...@et...> - 2013-10-01 20:41:58
Dear Sir,

Below I discuss the first results of the tests performed with the
official version 1.1 of pgxc.

During the rest of the week I will be making adjustments to the settings
in order to get better performance and up to 300 concurrent connections
(right now I can only reach up to 150 concurrent connections).

If someone with experience in this solution can offer advice, it will be
welcome.

The scenario I have is as follows:

A VMware ESXi 5.1 Update 1 cluster with 3 ESX hosts (HP BL685c G7, 427.9
GB of RAM) and one vApp with 4 virtual machines.

3 of these virtual machines have 16 vCPUs and 1 GB of RAM per vCPU. Each
of these virtual machines acts as gtm_proxy / coordinator / datanode.

The fourth and last virtual machine (4 vCPUs and 1 GB of RAM per vCPU)
acts as GTM.

Operating System: RHEL v6.4 (kernel 2.6.32-358.18.1.el6.x86_64)
Installation Type: Basic
Additional Packages: Development Kit
Filesystem type for the data area: XFS without additional adjustments

This postgres-xc cluster is balanced with Piranha (LVS) behind a virtual
IP (192.168.97.44) on port 5432, using the LC (least connections) method.

dn01/192.168.97.40: (gtm_proxy p:6666 / coordinator p:5432 / datanode p:5433)
dn02/192.168.97.41: (gtm_proxy p:6666 / coordinator p:5432 / datanode p:5433)
dn03/192.168.97.42: (gtm_proxy p:6666 / coordinator p:5432 / datanode p:5433)
gtm /192.168.97.43: (gtm p:6666)

The deployment method used is pgxc_ctl, with some additional
configuration parameters supplied through coordExtraConfig and
datanodeExtraConfig, whose contents I describe below:

coordExtraConfig:

# ================================================
# Added to all the coordinator postgresql.conf
# Original: coordExtraConfig
log_destination = 'syslog'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 200

datanodeExtraConfig:

# ================================================
# Added to all the datanode postgresql.conf
# Original: datanodeExtraConfig
log_destination = 'syslog'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 4096
fsync = off
shared_buffers = 4GB
wal_buffers = 1MB
checkpoint_timeout = 5min
checkpoint_segments = 16
maintenance_work_mem = 4GB
max_prepared_transactions = 4096
synchronous_commit = off

NOTE: I should point out that in the case of the gtm_proxy definition, I
was unable to add extra parameters, such as worker_threads = 15, through
the gtmPxyExtraConfig parameter. In this particular case, the final
configuration invariably shows worker_threads = 1.

Ok, the results:

[root@rhelclient ~]# createdb -h 192.168.97.44 -U postgres pgbench
[root@rhelclient ~]# pgbench -i -s 100 -h 192.168.97.44 -U postgres pgbench
NOTICE: table "pgbench_branches" does not exist, skipping
NOTICE: table "pgbench_tellers" does not exist, skipping
NOTICE: table "pgbench_accounts" does not exist, skipping
NOTICE: table "pgbench_history" does not exist, skipping
creating tables...
10000 tuples done.
20000 tuples done.
30000 tuples done.
40000 tuples done.
:
:
:
9950000 tuples done.
9960000 tuples done.
9970000 tuples done.
9980000 tuples done.
9990000 tuples done.
10000000 tuples done.
set primary key...
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_branches_pkey" for table "pgbench_branches"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_tellers_pkey" for table "pgbench_tellers"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_accounts_pkey" for table "pgbench_accounts"
vacuum...done.

[root@rhelclient ~]# pgbench -c 16 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 10217
tps = 170.186523 (including connections establishing)
tps = 170.279171 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 16 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 62484
tps = 1041.168271 (including connections establishing)
tps = 1041.592065 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 16 -j 16 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 14001
tps = 232.131191 (including connections establishing)
tps = 232.374844 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 16 -j 16 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 59981
tps = 998.525044 (including connections establishing)
tps = 998.773480 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 96 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 25577
tps = 423.132610 (including connections establishing)
tps = 424.137946 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 96 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 115994
tps = 1929.694756 (including connections establishing)
tps = 1939.253901 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 64 -j 32 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 14087
tps = 234.278526 (including connections establishing)
tps = 234.472208 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 64 -j 32 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 124367
tps = 2072.136619 (including connections establishing)
tps = 2073.475197 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 64 -j 64 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 14239
tps = 232.935903 (including connections establishing)
tps = 236.194708 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 64 -j 64 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 95342
tps = 1530.270930 (including connections establishing)
tps = 1531.320634 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 104 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 33543
tps = 542.183967 (including connections establishing)
tps = 543.542901 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 104 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 286291
tps = 4770.397054 (including connections establishing)
tps = 4790.285324 (excluding connections establishing)
[root@rhelclient ~]#
From: Aris S. <ari...@gm...> - 2013-10-01 16:19:56
Hi,

I'm a PHP web developer. I'm interested in contributing a commit-fest
system for postgres-xc, though at the moment I have no plan, time, or
resources to offer.

I have some questions about this topic.

1. Does XC need a commit-fest app?
2. Is the PostgreSQL commit-fest app available as open source?
3. How does the commit fest work? E.g.: how to submit a patch, reviewing, committing.

Aris.
From: Koichi S. <koi...@gm...> - 2013-10-01 02:13:36
Thanks for the info. I'm afraid this could be operating-system specific,
and I might have to run the debugger in the same environment to see
what's exactly going on. Because I don't have Fedora, it might take a
while. It would be appreciated if you could run the debugger against
gtm_proxy and see when it exits (you can start gtm_proxy as you're doing
and attach gdb to the gtm_proxy process).

BTW, monitor is a pgxc_ctl command. Please issue "monitor all" at the
pgxc_ctl prompt. You will see which components are running and which are
not.

Regards;
---
Koichi Suzuki

2013/10/1 Sandeep Gupta <gup...@gm...>
> Thanks for looking into this. I did some more debugging. The gtm_proxy
> runs as long as the datanodes/co-ordinator have not started. [...]
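A sketch of the attach Koichi suggests; pgrep matching a single gtm_proxy process is an assumption:

# attach to the running gtm_proxy, let it run, and grab a backtrace when it dies
gdb -p $(pgrep -u postgres gtm_proxy)
(gdb) continue
(gdb) backtrace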
From: Sandeep G. <gup...@gm...> - 2013-10-01 01:59:19
Hi Koichi,

Thanks for looking into this. I did some more debugging. The gtm_proxy
runs as long as the datanodes/coordinator have not started. Once I
launch the coordinator (or the datanodes; I'm not sure which, since they
all get invoked by the same script), the gtm_proxy exits. Unfortunately,
the logfile is empty as well, so I'm not sure what is going on. It is
something to do with my system only.

Is monitor a postgres/pgxc-related command? Otherwise, I can see via the
ps command that the proxy exited.

-Sandeep

On Mon, Sep 30, 2013 at 9:04 PM, 鈴木 幸市 <ko...@in...> wrote:
> I've not seen such phenomena yet. Does gtm_proxy exit silently just
> after it is launched? Could you issue the monitor command to see if it
> is running? [...]
From: 鈴木 幸市 <ko...@in...> - 2013-10-01 01:04:39
I've not seen such phenomena yet. Does gtm_proxy exit silently just
after it is launched? Could you issue the monitor command to see if it
is running? pgxc_ctl uses many system(3) and popen(3) calls, and there
might be a slight chance of hitting OS dependencies.

Regards;
---
Koichi Suzuki

On 2013/10/01, at 8:09, Sandeep Gupta <gup...@gm...> wrote:
> The OS is fedora on core i7. The problem I am facing is that the
> gtm_proxy exits silently. Nothing gets written in the log file as
> well. [...]
From: Sandeep G. <gup...@gm...> - 2013-09-30 23:10:06
Hi,

I have been working with pgxc for a couple of months on an old machine.
Today I installed pgxc (v1.1) on a new machine. All the ports (gtm_port,
gtm_proxy port, pooler port) are set to different values than the
defaults.

The OS is Fedora on a Core i7. The problem I am facing is that the
gtm_proxy exits silently. Nothing gets written in the log file either.
However, if I fire up the proxy within the gdb debugger, things work
fine (I didn't double-check this, but I think that is what is
happening).

The pgxc launch scripts work fine on the other machine. I am not sure if
I have messed up some other system parameter, shmem size, etc. Please
let me know. Any ideas/suggestions are welcome.

-Sandeep
From: Nikhil S. <ni...@st...> - 2013-09-26 16:39:31
Hi Hector,

Very interesting to see that you are trying different clustering
solutions. I would like to see your impressions and summary after you
are done with all your configurations, especially your thoughts on the
ease of use, architecture, and performance of pgxc vis-a-vis the other
products.

Getting back to pgxc_ctl, the link that you have mentioned is good
enough to get going. The most important step is to come up with a proper
configuration in the pgxc_ctl.conf file. Once you have that in place,
doing an:

init all

should get the cluster going. You can do a:

monitor all

to see the status of all the components. Likewise, "stop all",
"start all", etc. do things as expected.

HTH,
Nikhils

On Thu, Sep 26, 2013 at 8:59 PM, Hector M. Jacas <hec...@et...> wrote:
> Thank you very much for your quick response.
>
> Problem solved.
>
> Where could I find guides or tutorials on pgxc_ctl? [...]

--
StormDB - http://www.stormdb.com
The Database Cloud
From: Hector M. J. <hec...@et...> - 2013-09-26 15:30:13
Dear Nikhils,

Thank you very much for your quick response.

Problem solved.

Where could I find guides or tutorials on pgxc_ctl?

The
https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=Pgxc_ctl_tutorial
tutorial page is under construction.

http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html is just a
manual. It explains the meanings of the parameters but says nothing
about procedures, the order in which tasks should be performed, etc.

My project is to create clusters with the different database backends
used by our applications in the enterprise.

Could you tell me of any site or page that contains guides, tutorials,
or procedures to follow?

So far, I already have a solution for MySQL with Percona cluster (3
nodes). This solution is already in production.

Oracle RAC (2 nodes) is almost ready. Next (now) is postgresql, and
mongoDB last.

During the months of July and August I installed the 1.1beta version (3
nodes with coordinator/gtm_proxy/datanode and one gtm). I really liked
the solution (of all those reviewed, this is the most complete) and its
performance. For installation and deployment I followed the documents on
the StormDB site. Everything was OK.

Now, the official 1.1 version is out, and it is my intention to explore
the deployment and management of a postgres-xc cluster with similar
characteristics through the pgxc_ctl tool.

Again, thank you very much for your reply,

Hector M. Jacas
From: Nikhil S. <ni...@st...> - 2013-09-25 22:46:41
|
Hi Hector,

Try adding the path to your pgxc binaries to your .bashrc on all the nodes that are involved in the cluster.

HTH,
Nikhils

On Thu, Sep 26, 2013 at 1:49 AM, Hector M. Jacas <hec...@et...> wrote:
> Dear Sir,
>
> In recent months I installed and successfully tried the 1.1 beta
> version of postgres-xc. Now the official version 1.1 is out.
>
> One of the components included in the contrib folder, pgxc_ctl, is
> where my focus is right now, and I am running into some problems.
>
> The scenario I have is as follows:
>
> Component        | Component Name | Component Server
> gtm (no slave)   | gtm            | gtm
> gtm proxy (3)    | gtmprx1        | dn01
>                  | gtmprx2        | dn02
>                  | gtmprx3        | dn03
> coordinators (3) | coord1         | dn01
>                  | coord2         | dn02
>                  | coord3         | dn03
> datanodes (3)    | dnode1         | dn01
>                  | dnode2         | dn02
>                  | dnode3         | dn03
>
> I can perform the deployment (deploy gtm), but when I try to initialize
> the GTM component, for example:
>
> [postgres@rhelclient ~]$ pgxc_ctl
> Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
> Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
> Finished to read configuration.
> ******** PGXC_CTL START ***************
>
> Current directory: /home/postgres/pgxc_ctl
> PGXC deploy gtm
> Deploying Postgres-XC materials.
> Prepare tarball to deploy ...
> Deploying to the server gtm.
> Deployment done.
> PGXC init gtm
> Initialize GTM master
> GTM: no process killed
> bash: initgtm: command not found
> bash: gtm: command not found
> bash: gtm_ctl: command not found
> Done.
>
> I cannot understand this message, because the user (postgres in this
> case) has PATH correctly pointing to /usr/local/pgsql/bin, and "which
> gtm_ctl" (for example) properly returns the path of the command:
> /usr/local/pgsql/bin/gtm_ctl.
>
> In the ~/pgxc_ctl/pgxc_log/ directory, the last log file contains:
>
> pgxc_ctl(30849):1309251435_25 PGXC deploy gtm
> pgxc_ctl(30849):1309251435_25 Deploying Postgres-XC materials.
> pgxc_ctl(30849):1309251435_25 Prepare tarball to deploy ...
> pgxc_ctl(30849):1309251435_25 Actual command: ( tar czCf /home/postgres/pgsql /tmp/30849.tgz bin include lib share ) < /dev/null > /tmp/STDOUT_30849_0 2>&1
> pgxc_ctl(30849):1309251435_27 Deploying to the server gtm.
> pgxc_ctl(30849):1309251435_27 *** cmdList Dump *********************************
> allocated = 2, used = 1
> pgxc_ctl(30849):1309251435_27 === CMD: 0 ===
> pgxc_ctl(30849):1309251435_27 --- CMD-EL: 0:host="gtm", command="rm -rf /home/postgres/pgsql/bin /home/postgres/pgsql/include /home/postgres/pgsql/lib /home/postgres/pgsql/share; mkdir -p /home/postgres/pgsql", localStdin="NULL", localStdout="NULL"
> pgxc_ctl(30849):1309251435_27 --- CMD-EL: 1:host="NULL", command="scp /tmp/30849.tgz postgres@gtm:/tmp", localStdin="NULL", localStdout="NULL"
> pgxc_ctl(30849):1309251435_27 --- CMD-EL: 2:host="gtm", command="tar xzCf /home/postgres/pgsql /tmp/30849.tgz; rm /tmp/30849.tgz", localStdin="NULL", localStdout="NULL"
> pgxc_ctl(30866):1309251435_27 Remote command: "rm -rf /home/postgres/pgsql/bin /home/postgres/pgsql/include /home/postgres/pgsql/lib /home/postgres/pgsql/share; mkdir -p /home/postgres/pgsql", actual: "ssh postgres@gtm "( rm -rf /home/postgres/pgsql/bin /home/postgres/pgsql/include /home/postgres/pgsql/lib /home/postgres/pgsql/share; mkdir -p /home/postgres/pgsql ) > /tmp/rhelclient_STDOUT_30849_2 2>&1" < /dev/null > /dev/null 2>&1"
> pgxc_ctl(30866):1309251435_28 Local command: "scp /tmp/30849.tgz postgres@gtm:/tmp", actual: "( scp /tmp/30849.tgz postgres@gtm:/tmp ) > /tmp/STDOUT_30849_3 2>&1 < /dev/null"
> pgxc_ctl(30866):1309251435_29 Remote command: "tar xzCf /home/postgres/pgsql /tmp/30849.tgz; rm /tmp/30849.tgz", actual: "ssh postgres@gtm "( tar xzCf /home/postgres/pgsql /tmp/30849.tgz; rm /tmp/30849.tgz ) > /tmp/rhelclient_STDOUT_30849_5 2>&1" < /dev/null > /dev/null 2>&1"
> pgxc_ctl(30849):1309251435_30 Actual command: ( rm -f /tmp/30849.tgz ) < /dev/null > /tmp/STDOUT_30849_6 2>&1
> pgxc_ctl(30849):1309251435_30 Deployment done.
> pgxc_ctl(30849):1309251511_35 PGXC init gtm
> pgxc_ctl(30849):1309251511_35 Initialize GTM master
> pgxc_ctl(30849):1309251511_35 *** cmdList Dump *********************************
> allocated = 2, used = 1
> pgxc_ctl(30849):1309251511_35 === CMD: 0 ===
> pgxc_ctl(30849):1309251511_35 --- CMD-EL: 0:host="gtm", command="killall -u postgres -9 gtm; rm -rf /home/postgres/pgxc/nodes/gtm; mkdir -p /home/postgres/pgxc/nodes/gtm; initgtm -Z gtm -D /home/postgres/pgxc/nodes/gtm", localStdin="NULL", localStdout="NULL"
> pgxc_ctl(30849):1309251511_35 --- CMD-EL: 1:host="gtm", command="cat >> /home/postgres/pgxc/nodes/gtm/gtm.conf", localStdin="/tmp/STDIN_30849_21", localStdout="NULL"
> pgxc_ctl(30849):1309251511_35 #===============================================
> pgxc_ctl(30849):1309251511_35 # Added at initialization, 20130925_15:11:35
> pgxc_ctl(30849):1309251511_35 listen_addresses = '*'
> pgxc_ctl(30849):1309251511_35 port = 20001
> pgxc_ctl(30849):1309251511_35 nodename = 'gtm'
> pgxc_ctl(30849):1309251511_35 startup = ACT
> pgxc_ctl(30849):1309251511_35 # End of addition
> pgxc_ctl(30849):1309251511_35 ----------
> pgxc_ctl(30849):1309251511_35 --- CMD-EL: 2:host="gtm", command="(gtm -x 2000 -D /home/postgres/pgxc/nodes/gtm &); sleep 1; gtm_ctl stop -Z gtm -D /home/postgres/pgxc/nodes/gtm", localStdin="NULL", localStdout="NULL"
> pgxc_ctl(31553):1309251511_35 Remote command: "killall -u postgres -9 gtm; rm -rf /home/postgres/pgxc/nodes/gtm; mkdir -p /home/postgres/pgxc/nodes/gtm; initgtm -Z gtm -D /home/postgres/pgxc/nodes/gtm", actual: "ssh postgres@gtm "( killall -u postgres -9 gtm; rm -rf /home/postgres/pgxc/nodes/gtm; mkdir -p /home/postgres/pgxc/nodes/gtm; initgtm -Z gtm -D /home/postgres/pgxc/nodes/gtm ) > /tmp/rhelclient_STDOUT_30849_23 2>&1" < /dev/null > /dev/null 2>&1"
> pgxc_ctl(31553):1309251511_36 Remote command: "cat >> /home/postgres/pgxc/nodes/gtm/gtm.conf", actual: "ssh postgres@gtm "( cat >> /home/postgres/pgxc/nodes/gtm/gtm.conf ) > /tmp/rhelclient_STDOUT_30849_25 2>&1" < /tmp/STDIN_30849_21 > /dev/null 2>&1"
> pgxc_ctl(31553):1309251511_37 Remote command: "(gtm -x 2000 -D /home/postgres/pgxc/nodes/gtm &); sleep 1; gtm_ctl stop -Z gtm -D /home/postgres/pgxc/nodes/gtm", actual: "ssh postgres@gtm "( (gtm -x 2000 -D /home/postgres/pgxc/nodes/gtm &); sleep 1; gtm_ctl stop -Z gtm -D /home/postgres/pgxc/nodes/gtm ) > /tmp/rhelclient_STDOUT_30849_27 2>&1" < /dev/null > /dev/null 2>&1"
> pgxc_ctl(30849):1309251511_38 gtm: no process killed
> pgxc_ctl(30849):1309251511_38 bash: initgtm: command not found
> pgxc_ctl(30849):1309251511_38 bash: gtm: command not found
> pgxc_ctl(30849):1309251511_38 bash: gtm_ctl: command not found
> pgxc_ctl(30849):1309251511_38 Done.
>
> What could be happening? Could you help me?
>
> Thanks a lot,
>
> Hector M Jacas

--
StormDB - http://www.stormdb.com
The Database Cloud
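A sketch of the suggested fix, assuming the binaries were deployed under /home/postgres/pgsql as the deploy log above shows. pgxc_ctl runs its remote steps as "ssh host command"; in that non-interactive mode bash reads ~/.bashrc but not the login profile, which is why a PATH that works at an interactive prompt can still be missing here. If your .bashrc returns early for non-interactive shells (many stock ones do), put the export above that check instead of appending as below:

    # on every node of the cluster, as the postgres user
    echo 'export PATH=/home/postgres/pgsql/bin:$PATH' >> ~/.bashrc

    # sanity check from the pgxc_ctl host; this exercises the same
    # non-interactive ssh path that pgxc_ctl uses
    ssh postgres@gtm 'which initgtm gtm gtm_ctl'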
|
From: Hector M. J. <hec...@et...> - 2013-09-25 20:49:26
|
|
From: Koichi S. <koi...@gm...> - 2013-09-23 01:11:31
|
To be honest, DML in plpgsql is not fully reviewed, and we suspect there could be something that needs improvement or fixing. In the regression test, the test script issues "update" statements from plpgsql and it works fine.

Regards;
---
Koichi Suzuki

2013/9/22 Lucio Chiessi [VORio] <pos...@vo...>
> My best regards to the Postgres-XC developers.
>
> I am writing this e-mail in order to resolve a doubt about the use of
> DML in functions using pl/pgsql.
>
> In section "E.5.7. Restrictions" of the 1.1 Release Notes, I can see
> that DML is not allowed in pl/pgsql. But looking at the Postgres-XC
> manual, in the part that deals with the use of pl/pgsql, I see that
> instructions such as the EXECUTE and PERFORM commands can be used.
>
> My question is: even with this, will I not be able to perform DML in
> functions using pl/pgsql? If not, is there another way to do this in
> the database, or do you plan to include the use of DML in pl/pgsql in
> future versions?
>
> Thanks!
>
> Lucio Chiessi
> Rio de Janeiro - Brasil
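For anyone following the thread, the pattern under discussion looks roughly like the sketch below. The items table, the function name and the coordinator host are invented for illustration; whether a given statement is accepted in 1.1 is exactly what the release-note restriction is about:

    # run against a coordinator; host and database are examples
    psql -h coord1 -d postgres <<'SQL'
    -- a pl/pgsql function issuing plain DML, the case the regression test covers
    CREATE OR REPLACE FUNCTION raise_price(item_id int) RETURNS void AS $body$
    BEGIN
        UPDATE items SET price = price * 1.10 WHERE id = item_id;
        -- the dynamic-SQL variant shown in the manual would be:
        -- EXECUTE 'UPDATE items SET price = price * 1.10 WHERE id = $1'
        --     USING item_id;
    END;
    $body$ LANGUAGE plpgsql;

    SELECT raise_price(1);
    SQL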
|
From: Koichi S. <koi...@gm...> - 2013-09-23 01:06:17
|
The GTM master does not require a sync to the GTM standby for every command; it requires a sync only at the end of each group of commands coming from a GTM proxy. So there should not be a significant influence on performance.

Regards;
---
Koichi Suzuki

2013/9/22 Nikhil Sontakke <ni...@st...>
> Hi Prasad,
>
>> 1) I am trying to evaluate the High Availability aspects of PGXC, and
>> I notice that the GTM and GTM standby are configured to be in
>> continuous sync; that is, every status change in the GTM is made
>> synchronously at the GTM standby. In such a setup, what is the
>> performance drop because of the GTM standby? Are there any benchmark
>> tests run with and without a GTM standby? What are the numbers?
>
> We did not see any significant difference between the with- and
> without-GTM-standby numbers when we did the runs a while ago. I don't
> have more specifics right now, though.
>
>> 2) How is a GTM failure discovered? Vanilla PGXC doesn't integrate
>> with clusters like Corosync, right?
>
> You can come up with your own resource agents for Corosync/Pacemaker.
> That's what we did at StormDB. We have agents for GTM and datanode
> failover.
>
>> 3) During a GTM failover, I see that a bunch of manual steps are
>> needed to promote the GTM standby to master and make the GTM proxies
>> reconnect to the new GTM. What happens to the in-flight and new
>> transactions while this GTM failover is happening? I guess all active
>> transactions will have to hang during this period, won't they?
>
> Again, if you integrate properly with Corosync/Pacemaker or have your
> own HA infrastructure in place, then you won't need any manual steps.
> Transactions would fail or error out for a brief period while this is
> happening. If the application has logic to retry the transactions, it
> might help.
>
> Regards,
> Nikhils
>
>> thanks,
>> -Prasad
>
> --
> StormDB - http://www.stormdb.com
> The Database Cloud
|
From: Nikhil S. <ni...@st...> - 2013-09-22 02:01:14
|
Hi Prasad,

> 1) I am trying to evaluate the High Availability aspects of PGXC, and
> I notice that the GTM and GTM standby are configured to be in
> continuous sync. That means every status change in the GTM is made
> synchronously at the GTM standby. In such a setup, what is the
> performance drop because of the GTM standby?
>
> Are there any benchmark tests run with and without a GTM standby? What
> are the numbers?

We did not see any significant difference between the with- and without-GTM-standby numbers when we did the runs a while ago. I don't have more specifics right now, though.

> 2) How is GTM failure discovered? Vanilla PGXC doesn't integrate with
> clusters like Corosync, right?

You can come up with your own resource agents for Corosync/Pacemaker. That's what we did at StormDB. We have agents for GTM and datanode failover.

> 3) During GTM failover, I see that a bunch of manual steps are needed
> to promote the GTM standby to master and make the GTM proxies
> reconnect to the new GTM. What happens to the in-flight and new
> transactions while this GTM failover is happening? I guess all active
> transactions will have to hang during this period, won't they?

Again, if you integrate properly with Corosync/Pacemaker or have your own HA infrastructure in place, then you won't need any manual steps.

Transactions would fail or error out for a brief period when this is happening. If the application has logic to retry the transactions, it might help.

Regards,
Nikhils

> thanks,
> -Prasad

--
StormDB - http://www.stormdb.com
The Database Cloud
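A sketch of the client-side retry idea mentioned above, with made-up connection details and a made-up table; the point is only that the application re-attempts a failed transaction once the failover has settled:

    # retry a transaction a few times across a GTM failover window
    for attempt in 1 2 3 4 5; do
        if psql -h coord1 -d appdb \
                -c "UPDATE accounts SET balance = balance - 10 WHERE id = 42;"
        then
            echo "committed on attempt $attempt"
            break
        fi
        echo "attempt $attempt failed; waiting for failover to settle" >&2
        sleep 2
    done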
|
From: Lucio C. [VORio] <pos...@vo...> - 2013-09-21 19:49:56
|
My best regards to the Postgres-XC developers.

I am writing this e-mail in order to resolve a doubt about the use of DML in functions using pl/pgsql.

In section "E.5.7. Restrictions" of the 1.1 Release Notes, I can see that DML is not allowed in pl/pgsql. But looking at the Postgres-XC manual, in the part that deals with the use of pl/pgsql, I see that instructions such as the EXECUTE and PERFORM commands can be used.

My question is: even with this, will I not be able to perform DML in functions using pl/pgsql? If not, is there another way to do this in the database, or do you plan to include the use of DML in pl/pgsql in future versions?

Thanks!

Lucio Chiessi
Rio de Janeiro - Brasil
|
From: Prasad V. <va...@gm...> - 2013-09-19 05:49:18
|
Hi,

1) I am trying to evaluate the High Availability aspects of PGXC, and I notice that the GTM and GTM standby are configured to be in continuous sync. That means every status change in the GTM is made synchronously at the GTM standby. In such a setup, what is the performance drop because of the GTM standby?

Are there any benchmark tests run with and without a GTM standby? What are the numbers?

2) How is GTM failure discovered? Vanilla PGXC doesn't integrate with clusters like Corosync, right?

3) During GTM failover, I see that a bunch of manual steps are needed to promote the GTM standby to master and make the GTM proxies reconnect to the new GTM. What happens to the in-flight and new transactions while this GTM failover is happening? I guess all active transactions will have to hang during this period, won't they?

thanks,
-Prasad
|
From: Ashutosh B. <ash...@en...> - 2013-09-17 12:17:19
|
We have previously experimented with 10 coordinators and 10 datanodes. That configuration gave 6x scalability with respect to a single PostgreSQL server. There are reports in these archives of users trying 20 servers, so 16 coordinators + 16 datanodes should not be a problem. But adding coordinators increases performance only if there are enough clients to keep the coordinators busy; coordinators do not store any user data.

On Tue, Sep 17, 2013 at 5:39 PM, Bartłomiej Wójcik <bar...@tu...> wrote:
> Hello,
>
> But can I have more than 16 coordinators and 16 datanodes when scaling
> the cluster?
>
> Regards!
>
> On 2013-09-17 12:55, Abbas Butt wrote:
>> Hi,
>>
>> Nodes are stored in a table in shared memory, which cannot be resized
>> dynamically. Hence limits are required at startup time to determine
>> the shared memory size. You can have a cluster with any number of
>> nodes less than the max values defined in the configuration file, and
>> later on add more nodes to the cluster until that limit is reached. To
>> add more nodes beyond it, the configuration needs to be changed and
>> the cluster needs to be restarted. Thanks.
>>
>> On Tue, Sep 17, 2013 at 3:35 PM, Bartłomiej Wójcik <bar...@tu...> wrote:
>>> Hello,
>>>
>>> I have a question about the default max coordinators, which we can
>>> see in the configuration file (postgresql.conf) and in some
>>> presentations about pgxc.
>>>
>>> What happens when you reach the limit, and after crossing it?
>>>
>>> (A similar restriction exists for datanodes.) Why?
>>>
>>> Regards!
>>> bw
>>
>> --
>> Abbas
>> Architect
>> Ph: 92.334.5100153
>> Skype ID: gabbasb
>> www.enterprisedb.com
>> Follow us on Twitter: @EnterpriseDB
>> Visit EnterpriseDB for tutorials, webinars, whitepapers and more:
>> http://www.enterprisedb.com/resources-community

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
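To make that concrete, a sketch of the workflow Abbas describes. The max_coordinators/max_datanodes settings are the startup-time limits in question; the host, port and node names below are invented, and the -Z flag is assumed to be the XC variant of pg_ctl for specifying the node type:

    # raise the startup-time limits in each node's postgresql.conf, then restart
    cat >> $PGDATA/postgresql.conf <<'EOF'
    max_coordinators = 32    # sizes a shared-memory table, hence restart-only
    max_datanodes = 32
    EOF
    pg_ctl restart -Z coordinator -D $PGDATA

    # afterwards, register an additional datanode from a coordinator
    psql -p 5432 postgres <<'SQL'
    CREATE NODE dnode4 WITH (TYPE = 'datanode', HOST = 'dn04', PORT = 45421);
    SELECT pgxc_pool_reload();
    SQL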