From: Nirmal S. <sha...@gm...> - 2014-01-31 23:38:37
|
Hi, I created a pgxc cluster with one coordinator, one GTM, and 6 datanodes (all on the same big machine). To test performance, I ran one query through the coordinator on my data, which is evenly distributed across all the nodes, and it took 25 seconds in total. Then I ran the same query on the datanodes directly, and it took 5 seconds on each and every datanode. Since query execution happens in parallel on the datanodes, running the query through the coordinator should ideally take no more than 5-8 seconds, but I don't understand why it is taking 25 seconds. Can somebody help me? Do I need to make some changes to my cluster configuration? Regards Nirmal |
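A first step in diagnosing this kind of slowdown (a sketch; the table and column names below are hypothetical, not from the original post) is to run the slow query through the coordinator under EXPLAIN (ANALYZE, VERBOSE) and compare the time reported for the "Data Node Scan" node against the total runtime: the difference is time spent on the coordinator itself, e.g. transferring, sorting, and aggregating rows pulled back from the datanodes.

```sql
-- Run on the coordinator. "orders" and "region" are placeholder names.
EXPLAIN (ANALYZE, VERBOSE)
SELECT region, count(*)
FROM orders
GROUP BY region;

-- In the output, look at the "Data Node Scan" line: if it finishes in
-- ~5 s but "Total runtime" is ~25 s, the remaining ~20 s is coordinator
-- work on the rows returned by the datanodes.
```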
From: Sandeep G. <gup...@gm...> - 2014-01-31 17:51:21
|
Hi, I was debugging an outstanding issue with pgxc ( http://sourceforge.net/mailarchive/forum.php?thread_name=CABEZHFtr_YoWb22UAnPGQz8M5KqpwzbviYiAgq_%3DY...@ma...&forum_name=postgres-xc-general ). I couldn't reproduce that error, but I do get this one:

LOG: database system is ready to accept connections
LOG: autovacuum launcher started
LOG: sending cancel to blocking autovacuum PID 17222
DETAIL: Process 13896 waits for AccessExclusiveLock on relation 16388 of database 12626.
STATEMENT: drop index mdn
ERROR: canceling autovacuum task
CONTEXT: automatic analyze of table "postgres.public.la_directednetwork"
PreAbort Remote

It seems to be a deadlock issue and may be related to the earlier problem as well. Please let me know your comments. -Sandeep |
From: Nirmal S. <sha...@gm...> - 2014-01-24 00:09:33
|
Hi, I ran a query and generated this plan. Now I want to know where this query is spending most of its time. Is it the coordinator that is taking time while sorting/aggregating, and how can I reduce this time?

HashAggregate (cost=190.96..191.72 rows=100 width=64) (actual time=25115.936..25121.042 rows=24360 loops=1)
  ->  Subquery Scan on zz (cost=49.83..190.71 rows=100 width=64) (actual time=18290.491..25059.508 rows=59192 loops=1)
        ->  GroupAggregate (cost=49.83..189.95 rows=1 width=384) (actual time=18290.239..20885.314 rows=38786 loops=1)
              ->  Sort (cost=49.83..52.33 rows=1000 width=384) (actual time=18290.101..18340.226 rows=343158 loops=1)
                    Sort Key: kw.expr_names, kw.expr_values
                    Sort Method: quicksort Memory: 66021kB
                    ->  Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=384) (actual time=295.532..12126.271 rows=343158 loops=1)
                          Node/s: d11, d12, d13, d14, d15, d16
Total runtime: 25145.074 ms

Nirmal |
From: 鈴木 幸市 (Koichi Suzuki) <ko...@in...> - 2014-01-20 06:02:48
|
I tested a 10Gb/s network at the very first stage of the PGXC implementation and found that we need to use giant (jumbo) packets to get the full performance of a 10Gb/s network. The packet size must be as much as 32kB or so, and the maximum speed was around 6Gbps. GTM consumes network bandwidth, but this evaluation does not make a strong case for using an ultra-high-speed network. The next possibility will be to introduce group commit, to make packets bigger and utilize the 10Gbps network speed. I didn't do any such evaluation for a 100Gbps network, but I don't think it works with short packets smaller than 1.5kB.

Regards;
---
Koichi Suzuki |
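The replication-versus-sharding distinction discussed in this thread can be made concrete with the table-distribution clause that appears later in the archive. A sketch (the table definitions are hypothetical): replicated tables are written on every node under two-phase commit, while hash-distributed tables spread writes across datanodes.

```sql
-- Every node holds a full copy; each write hits all nodes via 2PC,
-- so replication suits read-mostly reference data.
CREATE TABLE country (code char(2), name text)
    DISTRIBUTE BY REPLICATION;

-- Rows are sharded by a hash of the key; writes spread across the
-- datanodes, which is where Postgres-XC's write scalability comes from.
CREATE TABLE orders (id bigint, country char(2), amount numeric)
    DISTRIBUTE BY HASH (id);
```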
From: Ying He <yin...@ya...> - 2014-01-17 15:33:16
|
Thank you Mason. This helps. |
From: Ying He <yin...@ya...> - 2014-01-17 15:29:46
|
Thank you for the answers. Thank you very much! |
From: Koichi S. <koi...@gm...> - 2014-01-17 06:16:51
|
Please see inline... Good luck;
---
Koichi Suzuki

2014/1/14 Ying He <yin...@ya...>:
> Thank you for your response. I have a few questions:
>
> 1. when I use the pgxc_ctl monitor command to check status, does the tool connect to the target db and execute a query to see that it is working fine, or what does it do?

Yes. In the case of gtm and gtm_proxy, it opens the connection and makes sure that the initial handshake works.

> 2. I did run into cases where I bring down gtm and am not able to stop gtm_proxy and coordinator, so what should be the correct order of stopping each component if I need maintenance of a host? I will give a sample layout here, and if you could advise what the right order of stopping components and restarting them should be:
> host1 has gtm master
> host2 has gtm slave
> host3 has gtm_proxy1, coordinator1, datanode1
> host4 has gtm_proxy2, coordinator2, datanode2
>
> all tables are replicated, not sharded, so datanode1 and datanode2 should hold the same copy. Let's say I need to bring down:
> * host1. promote host2 as gtm master, reconnect gtm_proxy to the new master. commands like ssh gtm_slave_host gtm_ctl promote -Z gtm -D gtm_slave_dir in the presentation; can I do it in pgxc_ctl?

Yes, please see the "reconnect" command in the reference.

> * host2. should be ok, it is a slave gtm
> * host3/host4. what is the correct order of stopping and restarting things, assuming my app can detect that a coordinator is down and connect to another available one?

You don't have to stop things on host3 and host4. You should just reconnect gtm_proxy at host3 and host4 to the new gtm master. The reconnect command of pgxc_ctl will take care of it. If you find gtm_proxy at host3 or host4 down, you can just restart it with the "start" command. In this case, you don't have to restart coordinators or datanodes.

> 3. for the sample layout, if host3 has a power outage and is down for a day, and 10G of new data is in host4, what are the steps to bring it up?

In present XC, you cannot do this. You should have a datanode2 slave and fail over datanode2 to the new master. The "failover" command will take care of this. In the case of the coordinator, you can remove it with the "remove" command. After substitute hardware is available so that you can start coordinator2, you should "add" the coordinator by using the "add" command. A coordinator has only system catalogs, so adding the server will not take long.

> 4. for the sample layout, if I want to add host5, which has a set of gtm_proxy3, coordinator3, datanode3, I should be able to use pgxc_ctl to add it, but if the existing datanode1 and datanode2 each hold 1 terabyte of data, will it take a really long time for it to be in sync with the rest and start serving the app? Does the wait depend on how much data is already in the existing datanodes?

Yes, please refer to the "add" command. The duration depends upon the size of each datanode's database cluster. Pgxc_ctl uses pg_basebackup for this purpose.

> I understand these are long and detailed questions to answer, but it will help me evaluate this tool better for serious adoption. Thank you for your help.
>
> For those also wondering about real company adoption, I found this thread on linkedin http://www.linkedin.com/groups/Postgres-XC-77585.S.202900648 ; that is the only one I have found so far.
>
> best,
> Ying
>
> On Monday, January 13, 2014 8:51 PM, Koichi Suzuki <koi...@gm...> wrote:
> Monitoring a node's status is another thing. pgxc_ctl provides a means to check the status of each component. You should be careful to check whether GTM is alive. If it's dead, then all queries will be blocked. I mean, before you try to check whether a coordinator or a datanode is alive, you should check whether the GTM is alive.
>
> Regards;
> ---
> Koichi Suzuki
>
> 2014/1/14 Michael Paquier <mic...@gm...>:
>> On Tue, Jan 14, 2014 at 12:15 AM, Ying He <yin...@ya...> wrote:
>>> hi,
>>>
>>> I am new to postgres xc. Is there a psql command that can be issued to the coordinator to query the current cluster's coordinators and datanodes? I followed the setup steps and did CREATE NODE for the coordinator and data nodes, and they are up and running; I just wonder whether a command exists that can query the current NODE and its status.
>> You can have a look at the list of nodes registered with:
>> SELECT * FROM pgxc_node;
>> Regards,
>> --
>> Michael |
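Michael's suggestion above can be fleshed out slightly. A sketch (the column names are as in the Postgres-XC 1.x `pgxc_node` catalog; verify them against your version):

```sql
-- On any coordinator: list the nodes this coordinator knows about.
SELECT node_name, node_type, node_host, node_port
FROM pgxc_node;
-- node_type is 'C' for coordinators and 'D' for datanodes.
```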
From: Koichi S. <koi...@gm...> - 2014-01-17 06:07:47
|
Failing to recognize the -m option could be a pgxc_ctl bug. Please let me test it; there could be some more patches available to fix it. I will also look into the monitor problem. Please allow me a few days for this. Thank you very much for the report and for your patience.

Best;
---
Koichi Suzuki |
From: Mason S. <ms...@tr...> - 2014-01-16 22:23:51
|
On Thu, Jan 16, 2014 at 11:18 AM, Ying He <yin...@ya...> wrote:
> hi, All,
>
> After reviewing http://postgres-xc.sourceforge.net/misc-docs/PG-XC_Architecture.pdf and reading the mailing list, two things concern me: 1. the gtm network overhead 2. postgres dml support in plpgsql
>
> 1. Is a 100Gbit/s network a prerequisite for the following setup?

No.

> If I plan to deploy a gtm master and slave on separate servers and 4 nodes of gtm_proxy, coordinator, datanodes, each will need to serve millions of transactions as daily volume.

Millions of transactions daily is not a problem.

> 2. Is there a doc that highlights what is currently not supported for postgres dml in plpgsql? Since the explain plan changes, I wonder how that affects postgres view predicate push-down and query performance. The list will be important for considering postgres xc seriously, because you can know what you need to change for your existing postgres setup.

IIRC, view rewriting turns views into base relations internally post-parsing, pre-planning, so push-down should work similarly as for base tables.

> My main interest in using postgres xc is its write scalability; all my tables will be replicated.

I do not recommend this. With how many nodes? If you replicate to multiple nodes, two-phase commit will be used, so instead of getting write scalability, it will be slower than a single native PostgreSQL instance. I think it only makes sense to replicate all tables for read-only/read-mostly cases. With Postgres-XC you get write scalability precisely because the tables are sharded amongst multiple nodes, and you can keep those drives busy writing across multiple servers. If you are worried about redundancy, you can for instance use streaming replication so that you have individual node replicas. You can also put these replicas on another server that contains the "master" of a data node. The additional overhead is surprisingly small.

> It will be nice to see an updated PG-XC_Architecture.pdf which helps understand the internals.

There are some presentations out there that may provide more of the information you are looking for. One such slide deck is here: http://www.slideshare.net/stormdb_cloud_database/postgresxc-write-scalable-postgresql-cluster

> Thank you for your help.
>
> best,
> Ying
>
> ------------------------------------------------------------------------------
> CenturyLink Cloud: The Leader in Enterprise Cloud Services. Learn Why More Businesses Are Choosing CenturyLink Cloud For Critical Workloads, Development Environments & Everything In Between. Get a Quote or Start a Free Trial Today.
> http://pubads.g.doubleclick.net/gampad/clk?id=119420431&iu=/4140/ostg.clktrk
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general

--
Mason Sharp
TransLattice - http://www.translattice.com
Distributed and Clustered Database Solutions |
From: Ying He <yin...@ya...> - 2014-01-16 16:18:47
|
hi, All, After reviewing http://postgres-xc.sourceforge.net/misc-docs/PG-XC_Architecture.pdf and reading the mailing list, two things concern me: 1. the gtm network overhead 2. postgres dml support in plpgsql. 1. Is a 100Gbit/s network a prerequisite for the following setup? If I plan to deploy a gtm master and slave on separate servers and 4 nodes of gtm_proxy, coordinator, datanodes, each will need to serve millions of transactions as daily volume. 2. Is there a doc that highlights what is currently not supported for postgres dml in plpgsql? Since the explain plan changes, I wonder how that affects postgres view predicate push-down and query performance. The list will be important for considering postgres xc seriously, because you can know what you need to change for your existing postgres setup. My main interest in using postgres xc is its write scalability; all my tables will be replicated. It would be nice to see an updated PG-XC_Architecture.pdf which helps understand the internals. Thank you for your help. best, Ying |
From: Ying He <yin...@ya...> - 2014-01-14 22:32:29
|
hi,

Please ignore the original post; it was only partially written. Here is the whole text.

I have started to use pgxc_ctl and encountered a few issues. I am using the latest pgxc tar and the contrib module for pgxc_ctl. The test setup is as follows:

host1 has gtm master
host2 has gtm slave
host3 has gtm_proxy1, coordinator1, datanode1
host4 has gtm_proxy2, coordinator2, datanode2

Steps:

1. create pgxc_ctl.conf
2. pgxc_ctl init all
3. pgxc_ctl Psql - coord1
     CREATE NODE coord2 WITH (TYPE = 'coordinator', HOST = 'host4', PORT = xxxx);
     CREATE NODE datanode1 WITH (TYPE = 'datanode', HOST = 'host3', PORT = xxxx);
     CREATE NODE datanode2 WITH (TYPE = 'datanode', HOST = 'host4', PORT = xxxx);
   pgxc_ctl Psql - coord2
     CREATE NODE coord1 WITH (TYPE = 'coordinator', HOST = 'host3', PORT = xxxx);
     CREATE NODE datanode1 WITH (TYPE = 'datanode', HOST = 'host3', PORT = xxxx);
     CREATE NODE datanode2 WITH (TYPE = 'datanode', HOST = 'host4', PORT = xxxx);
4. Then it works with the following simple test; both coordinators return the same data, and we are able to write through either coordinator:
     create table t2 (a int) distribute by replication;
     insert into t2 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14);

Issues:

1. -m fast is not recognized:
     pgxc_ctl stop datanode master datanode2 -m fast
     pgxc_ctl: invalid option -- 'm'
     Invalid optin value, received code 077

2. When I use the stop command to stop host4's gtm_proxy2, coordinator2, and datanode2, "monitor all" correctly shows that they are not running, but "pgxc_ctl Psql - coord2" will still run SELECTs. "pgxc_ctl Psql - coord1" works as expected.

3. If, instead of stop, I use the remove command for host4's set, it behaves better:
     pgxc_ctl remove coordinator master coord2
     pgxc_ctl remove datanode master datanode2
     pgxc_ctl remove gtm_proxy gtmproxy2
   I removed them, added them back, ran "stop all" and "start all", and "monitor all" shows everything running fine. SELECT works on both coordinators, and INSERT on coord2 works (the data can be seen from both coordinators), but INSERT on coord1 gives the following:
     postgres=# insert into t2 values (21);
     ERROR: Failed to get pooled connections
   After SELECT pgxc_pool_reload(); I still see the following:
     ERROR: Failed to read response from Datanodes

For the above issues, any pointer in the right direction will be appreciated. It seems like the stop, remove, and add order might matter; however, in real production the order could be random, and in that case, what should the recovery process be?

Thanks for any help.

best,
Ying
|
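For issue 1 above, the pgxc_ctl reference places the shutdown-mode option immediately after the stop keyword rather than at the end of the command line; a hedged sketch of a pgxc_ctl session (option placement and accepted modes may vary by version, so verify against your release's documentation):

```
PGXC stop -m fast datanode master datanode2
PGXC stop -m immediate all
```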
From: Ying He <yin...@ya...> - 2014-01-14 14:42:58
|
Thank you for your response. I have a few questions:

1. When I use the pgxc_ctl monitor command to check status, does the tool connect to the target db and execute a query to see that it is working fine, or what does it do?

2. I did run into cases where I brought down the GTM and was then not able to stop the gtm_proxy and coordinator, so what is the correct order for stopping each component if I need maintenance on a host? I will give a sample layout here; please advise what the right order of stopping and restarting components would be:

host1 has gtm master
host2 has gtm slave
host3 has gtm_proxy1, coordinator1, datanode1
host4 has gtm_proxy2, coordinator2, datanode2

All tables are replicated, not sharded, so datanode1 and datanode2 should hold the same copy. Let's say I need to bring down:

* host1: promote host2 as gtm master and reconnect the gtm_proxies to the new master, with commands like "ssh gtm_slave_host gtm_ctl promote -Z gtm -D gtm_slave_dir" from the presentation. Can I do this in pgxc_ctl?
* host2: should be ok, it is a slave gtm.
* host3/host4: what is the correct order of stopping and restarting things, assuming my app can detect that a coordinator is down and connect to another available one?

3. For the sample layout, if host3 has a power outage and is down for a day while 10G of new data arrives on host4, what are the steps to bring it back up?

4. For the sample layout, if I want to add host5 with its own set of gtm_proxy3, coordinator3, datanode3, I should be able to use pgxc_ctl to add it. But if the existing datanode1 and datanode2 each hold 1 terabyte of data, will it take a really long time for host5 to be in sync with the rest and start serving the app? Does the wait depend on how much data is already in the existing datanodes?

I understand these are long and detailed questions to answer, but the answers will help me evaluate this tool better for serious adoption. Thank you for your help.

For those also wondering about real company adoption, I found this thread on LinkedIn: http://www.linkedin.com/groups/Postgres-XC-77585.S.202900648; that is the only one I have found so far.

best,
Ying

On Monday, January 13, 2014 8:51 PM, Koichi Suzuki <koi...@gm...> wrote:

> Monitoring a node status is another thing. pgxc_ctl provides a means
> to check the status of each component.
> You should be careful to check if GTM is alive. If it's dead, then
> all the queries will be blocked. I mean, before you try to check if a
> coordinator or a datanode is alive, you should check if the GTM is alive.
>
> Regards;
> ---
> Koichi Suzuki
>
> 2014/1/14 Michael Paquier <mic...@gm...>:
> > On Tue, Jan 14, 2014 at 12:15 AM, Ying He <yin...@ya...> wrote:
> >> hi,
> >>
> >> I am new to postgres xc. Is there a psql command that can be issued to
> >> coordinator to query the current cluster's coordinators and datanodes? I
> >> followed the setup steps and did CREATE NODE for coordinator and data nodes
> >> and they are up and running, just wonder whether a command exists that can
> >> query the current NODE and its status.
> > You can have a look at the list of nodes registered with that:
> > SELECT * FROM pgxc_node;
> > Regards,
> > --
> > Michael
|
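For the host1 case in question 2, a manual GTM failover along the lines of the quoted presentation might look like the following. gtm_ctl's promote and reconnect modes are documented in the Postgres-XC reference; the directories, port number, and exact reconnect option syntax below are placeholders to verify against your version:

```
# On the GTM slave host (host2): promote the slave to master
ssh host2 gtm_ctl promote -Z gtm -D /path/to/gtm_slave_dir

# On each proxy host: point gtm_proxy at the new master
# (-s gives the new master's host, -t its port; check your version's syntax)
ssh host3 gtm_ctl reconnect -Z gtm_proxy -D /path/to/gtm_proxy1_dir -o "-s host2 -t 20001"
ssh host4 gtm_ctl reconnect -Z gtm_proxy -D /path/to/gtm_proxy2_dir -o "-s host2 -t 20001"
```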
From: Koichi S. <koi...@gm...> - 2014-01-14 01:51:10
|
Monitoring a node status is another thing. pgxc_ctl provides a means to check the status of each component.

You should be careful to check if GTM is alive. If it's dead, then all the queries will be blocked. I mean, before you try to check if a coordinator or a datanode is alive, you should check if the GTM is alive.

Regards;
---
Koichi Suzuki

2014/1/14 Michael Paquier <mic...@gm...>:
> On Tue, Jan 14, 2014 at 12:15 AM, Ying He <yin...@ya...> wrote:
>> hi,
>>
>> I am new to postgres xc. Is there a psql command that can be issued to
>> coordinator to query the current cluster's coordinators and datanodes? I
>> followed the setup steps and did CREATE NODE for coordinator and data nodes
>> and they are up and running, just wonder whether a command exists that can
>> query the current NODE and its status.
> You can have a look at the list of nodes registered with that:
> SELECT * FROM pgxc_node;
> Regards,
> --
> Michael
|
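The check-GTM-first ordering above can be followed inside a pgxc_ctl session; the command names here follow the 1.1 pgxc_ctl reference, though the exact component keywords may differ between versions:

```
PGXC monitor gtm master
PGXC monitor gtm_proxy
PGXC monitor coordinator master
PGXC monitor datanode master
```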
From: Michael P. <mic...@gm...> - 2014-01-14 00:26:52
|
On Tue, Jan 14, 2014 at 12:15 AM, Ying He <yin...@ya...> wrote:
> hi,
>
> I am new to postgres xc. Is there a psql command that can be issued to
> coordinator to query the current cluster's coordinators and datanodes? I
> followed the setup steps and did CREATE NODE for coordinator and data nodes
> and they are up and running, just wonder whether a command exists that can
> query the current NODE and its status.

You can have a look at the list of nodes registered with that:

SELECT * FROM pgxc_node;

Regards,
--
Michael
|
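Building on the query above, pgxc_node can be narrowed by node type; the column names below match the XC 1.1 catalog, where node_type is 'C' for a coordinator and 'D' for a datanode (verify against your release):

```sql
-- Connection info for datanodes only
SELECT node_name, node_host, node_port
FROM pgxc_node
WHERE node_type = 'D';
```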
From: Ying He <yin...@ya...> - 2014-01-13 15:15:49
|
hi,

I am new to Postgres-XC. Is there a psql command that can be issued to a coordinator to query the current cluster's coordinators and datanodes? I followed the setup steps and did CREATE NODE for the coordinators and data nodes, and they are up and running; I just wonder whether a command exists that can query the current nodes and their status.

I am still reviewing the pgxc_ctl tool. Is this tool widely used? I am looking for some real business use cases. Does anyone know where I can find examples of real company adoption of Postgres-XC in a production environment?

Thank you.

best,
Ying
|
From: Juned K. <jkh...@gm...> - 2014-01-08 05:57:36
|
Thanks Michael and Koichi.

On Wed, Jan 8, 2014 at 7:33 AM, Koichi Suzuki <koi...@gm...> wrote:
> Thanks Michael for the comment.
>
> The presentation script covers only a subset of pgxc_ctl commands.
> For details, please visit
> http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html
>
> Good luck.
> ---
> Koichi Suzuki
>
> 2014/1/8 Michael Paquier <mic...@gm...>:
> > On Wed, Jan 8, 2014 at 10:35 AM, Koichi Suzuki <koi...@gm...> wrote:
> >> Yes, you can configure slaves for GTM, coordinator and datanode.
> >> Also, you can configure GTM, coordinator and datanode on the same
> >> server. http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013
> >> has a link to the presentation material in the last Postgres Open.
> >> Material includes presentation slide deck, configuration file I used
> >> for the demo, script file for the demo, and the link to my
> >> presentation video.
> >>
> >> Hope they help.
> > This reminds me as well the demonstration that Suzuki-san gave of
> > pgxc_ctl at the last Postgres Open. There is a youtube video here:
> > https://www.youtube.com/watch?v=65Fddd1Y3-o
> > I don't recall if the presentation material contained all the commands
> > used for the demonstration, so this video is worth having a look IMO.
> >
> > Regards,
> > --
> > Michael

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
|
From: Koichi S. <koi...@gm...> - 2014-01-08 02:03:39
|
Thanks Michael for the comment.

The presentation script covers only a subset of pgxc_ctl commands. For details, please visit
http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html

Good luck.
---
Koichi Suzuki

2014/1/8 Michael Paquier <mic...@gm...>:
> On Wed, Jan 8, 2014 at 10:35 AM, Koichi Suzuki <koi...@gm...> wrote:
>> Yes, you can configure slaves for GTM, coordinator and datanode.
>> Also, you can configure GTM, coordinator and datanode on the same
>> server. http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013
>> has a link to the presentation material in the last Postgres Open.
>> Material includes presentation slide deck, configuration file I used
>> for the demo, script file for the demo, and the link to my
>> presentation video.
>>
>> Hope they help.
> This reminds me as well the demonstration that Suzuki-san gave of
> pgxc_ctl at the last Postgres Open. There is a youtube video here:
> https://www.youtube.com/watch?v=65Fddd1Y3-o
> I don't recall if the presentation material contained all the commands
> used for the demonstration, so this video is worth having a look IMO.
>
> Regards,
> --
> Michael
|
From: Michael P. <mic...@gm...> - 2014-01-08 01:53:07
|
On Wed, Jan 8, 2014 at 10:35 AM, Koichi Suzuki <koi...@gm...> wrote:
> Yes, you can configure slaves for GTM, coordinator and datanode.
> Also, you can configure GTM, coordinator and datanode on the same
> server. http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013
> has a link to the presentation material in the last Postgres Open.
> Material includes presentation slide deck, configuration file I used
> for the demo, script file for the demo, and the link to my
> presentation video.
>
> Hope they help.

This reminds me as well of the demonstration that Suzuki-san gave of pgxc_ctl at the last Postgres Open. There is a youtube video here:
https://www.youtube.com/watch?v=65Fddd1Y3-o
I don't recall if the presentation material contained all the commands used for the demonstration, so this video is worth having a look IMO.

Regards,
--
Michael
|
From: Koichi S. <koi...@gm...> - 2014-01-08 01:35:35
|
Yes, you can configure slaves for GTM, coordinator and datanode. Also, you can configure GTM, coordinator and datanode on the same server.

http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013 has a link to the presentation material from the last Postgres Open. The material includes the presentation slide deck, the configuration file I used for the demo, the script file for the demo, and a link to my presentation video.

Hope they help.

Best;
---
Koichi Suzuki

2014/1/7 Juned Khan <jkh...@gm...>:
> Hi All,
>
> Greetings !
>
> I have some confusion regarding its configuration. As general we need to
> create GTM, GTM proxy, datanodes and cordinator. GTM manages cordinator and
> datanodes.
>
> Here my question is what will happen if GTM server will go down ? can we
> have failover of GTM server ? if one server goes down then another server
> will go up.
>
> can i have GTM, cordinator and datanodes on same server ?
>
> I want to setup something like below scenario:
>
> i.e two servers with GTM, datanodes, cordinator and GTM proxy on same
> server. can i do failover of these two server ? so if one server goes down
> then load will be transferred to another server.
>
> --
> Thanks,
> Juned Khan
> iNextrix Technologies Pvt Ltd.
> www.inextrix.com
>
> ------------------------------------------------------------------------------
> Rapidly troubleshoot problems before they affect your business. Most IT
> organizations don't have a clear picture of how application performance
> affects their revenue. With AppDynamics, you get 100% visibility into your
> Java, .NET, & PHP application. Start your 15-day FREE TRIAL of AppDynamics Pro!
> http://pubads.g.doubleclick.net/gampad/clk?id=84349831&iu=/4140/ostg.clktrk
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
|
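As a sketch of the slave configuration mentioned above, pgxc_ctl.conf pairs master and slave variables for the GTM; the variable names below follow the pgxc_ctl reference, while the hosts, port, and paths are placeholders:

```
# pgxc_ctl.conf excerpt: GTM master on one server, slave on another
gtmMasterServer=node1
gtmMasterPort=20001
gtmMasterDir=$HOME/pgxc/nodes/gtm

gtmSlave=y
gtmSlaveServer=node2
gtmSlavePort=20001
gtmSlaveDir=$HOME/pgxc/nodes/gtm_slave
```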
From: Juned K. <jkh...@gm...> - 2014-01-07 12:17:11
|
Hi All,

Greetings!

I have some confusion regarding Postgres-XC's configuration. In general we need to create a GTM, GTM proxies, datanodes and coordinators, and the GTM manages the coordinators and datanodes.

My questions: What will happen if the GTM server goes down? Can we have failover of the GTM server, so that if one server goes down another server takes over?

Can I have the GTM, a coordinator and datanodes on the same server?

I want to set up something like the scenario below: two servers, each with a GTM, datanodes, a coordinator and a GTM proxy on the same server. Can I do failover between these two servers, so that if one server goes down the load is transferred to the other?

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
|