From: Aaron J. <aja...@re...> - 2014-04-29 16:39:21
|
When I load data into my table "detail" with COPY, the table loads at a rate of about 56k rows per second. The data is distributed on a key to achieve this insert rate (row width is 678). However, when I run the following:

INSERT INTO detail SELECT 123 AS Id, ... FROM detail WHERE Id = 500;

write performance drops to only 2.5k rows per second. The data set selected by Id = 500 is 200k rows and takes about 7 s to reach the coordinator, so I can attribute almost all of the time (about 80 seconds) directly to the insert:

Insert on detail (cost=0.00..10.00 rows=1000 width=678) (actual time=79438.038..79438.038 rows=0 loops=1)
  Node/s: node_pgs01_1, node_pgs01_2, node_pgs02_1, node_pgs02_2
  Node expr: productid
  ->  Data Node Scan on detail "_REMOTE_TABLE_QUERY_" (cost=0.00..10.00 rows=1000 width=678) (actual time=3.917..2147.231 rows=200000 loops=1)
        Node/s: node_pgs01_1, node_pgs01_2, node_pgs02_1, node_pgs02_2

IMO, an insert like this should approach the performance of a COPY. Am I missing something, or can you recommend a different approach? |
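The arithmetic behind the reported slowdown can be double-checked with a short script; the figures below are the ones quoted in the message above:

```python
# Figures reported in the message above.
copy_rate = 56_000      # rows/sec achieved by COPY
rows = 200_000          # rows selected by Id = 500
insert_time = 80.0      # seconds attributed to the INSERT step

insert_rate = rows / insert_time           # effective INSERT ... SELECT rate
print(round(insert_rate))                  # 2500 rows/sec, the reported 2.5k
print(round(copy_rate / insert_rate))      # roughly a 22x slowdown vs. COPY
```

So the INSERT path is running at roughly 1/22 of the COPY path's throughput, which is the gap the rest of the thread tries to explain.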
From: Koichi S. <koi...@gm...> - 2014-04-29 12:10:08
|
Your configuration is wrong: gtm_host and gtm_port in gtm_proxy.conf must match the host and port on which the GTM itself listens (the port setting in gtm.conf). Because there are too many pitfalls like this when configuring XC by hand, I strongly advise starting with pgxc_ctl, which takes care of all of this on your behalf, unless you would like to learn the XC architecture and configuration in more depth. Regards; --- Koichi Suzuki 2014-04-25 15:55 GMT+09:00 张紫宇 <zha...@gm...>: > [original message and configuration files, quoted in full in the post below] |
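Concretely, assuming the GTM really is listening on localhost:6666 as the poster's gtm.conf suggests, the fix Koichi describes amounts to a fragment of gtm_proxy.conf like the following (a sketch, not the full file):

```
#------------------------------------------------------------------------------
# GTM CONNECTION PARAMETERS (gtm_proxy.conf)
#------------------------------------------------------------------------------
# gtm_host/gtm_port must match where the GTM actually listens, i.e. the
# host and port settings in gtm.conf (here localhost:6666, not 6668).
gtm_host = 'localhost'
gtm_port = 6666
```

Note also that the proxy's own port was set to 6666 in the posted gtm_proxy.conf, the same as the GTM's port; since both run on the same server, the proxy's own listen port must be a different, free port (for example 6668).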
From: Aaron J. <aja...@re...> - 2014-04-29 06:16:52
|
That was my thinking as well. I did execute the queries independently: each query takes about 5-10 seconds on each of the 4 data nodes (I used psql -h <datanode> -p <port> to time each data node individually). So worst case, 10 seconds x 4 nodes = 40 seconds of aggregate time for a serial request, but I'm seeing 65 seconds, which means there's some other overhead I'm missing. That 65-second aggregate is also why I asked whether the requests are parallel or serial: it *feels* serial, though it could be other factors. I'll retest and update. ________________________________ From: Ashutosh Bapat [ash...@en...] Sent: Tuesday, April 29, 2014 1:05 AM To: Aaron Jackson Cc: amul sul; pos...@li... Subject: Re: [Postgres-xc-general] Data Node Scan Performance > [earlier messages in this thread, quoted in full in the posts below] |
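Aaron's worst-case arithmetic generalizes to a simple model: a serial fan-out costs roughly the sum of the per-node times, a parallel fan-out roughly the maximum. A small sketch, using the worst-case 10 s per-node figure reported above:

```python
# Toy model of coordinator fan-out cost over 4 data nodes.
node_times = [10.0, 10.0, 10.0, 10.0]  # worst-case per-node seconds (reported 5-10 s)

serial_cost = sum(node_times)    # coordinator visits nodes one after another
parallel_cost = max(node_times)  # coordinator waits only for the slowest node

observed = 65.0                  # seconds reported for the coordinator query
unexplained = observed - serial_cost  # left over even under fully serial execution

print(serial_cost, parallel_cost, unexplained)  # 40.0 10.0 25.0
```

Even the fully serial model leaves about 25 seconds unexplained, which is consistent with Ashutosh's suggestion that coordinator-side communication (libpq plus network transfer) accounts for the rest.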
From: Ashutosh B. <ash...@en...> - 2014-04-29 06:05:17
|
Hi Aaron, Can you please time "EXECUTE DIRECT <query to the datanode>" against one of the datanodes? I suspect that the delay you are seeing comes from the sheer communication between the coordinator and the datanodes: some of it libpq overhead, some of it network overhead. On Tue, Apr 29, 2014 at 10:58 AM, Aaron Jackson <aja...@re...> wrote: > [earlier messages in this thread, quoted in full in the posts below] -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
From: amul s. <sul...@ya...> - 2014-04-29 06:01:50
|
On Tuesday, 29 April 2014 10:58 AM, Aaron Jackson <aja...@re...> wrote: >Any thoughts - is it completely attributed to the coordinator? I am not sure, but in your example MyTable is distributed on Foo while you search on Id. If you can somehow add a distribution-key condition to the WHERE clause (e.g. ... WHERE Id = 186 AND Foo = xyz), the planner can locate the candidate tuples directly; as it stands, it scans all the datanodes and combines the results at the coordinator. Is Id unique in MyTable? If so, can't you distribute on Id instead? Regards, Amul Sul |
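amul's two suggestions can be sketched in SQL. The table and column names (MyTable, Foo, Id) are the hypothetical ones from the thread, and DISTRIBUTE BY is the Postgres-XC table-distribution clause; treat this as a sketch, not tested DDL:

```sql
-- Option 1: include the distribution key in the WHERE clause so the
-- coordinator can prune the scan to a single datanode (requires already
-- knowing the Foo value for the target Id).
SELECT * FROM MyTable WHERE Id = 186 AND Foo = 'xyz';

-- Option 2: if Id is unique, create the table distributed on Id instead,
-- so lookups by Id go straight to one datanode.
CREATE TABLE MyTable_by_id (LIKE MyTable) DISTRIBUTE BY HASH (Id);
```

Option 2 trades away the current distribution on Foo, so it only helps if Id is the dominant access path.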
From: Aaron J. <aja...@re...> - 2014-04-29 05:30:21
|
Interesting. So I wonder why I am seeing query times that are more than the sum of the total times required to perform the process without the coordinator. For example, say the query is 'SELECT 500 as Id, Foo, Bar from MyTable WHERE Id = 186': I can perform this query on all 4 nodes and each takes no more than 10 seconds to run individually. However, when performed against the coordinator, the same query takes 65 seconds - more than the total aggregate of all data nodes. Any thoughts - is it completely attributable to the coordinator? ________________________________________ From: amul sul [sul...@ya...] Sent: Tuesday, April 29, 2014 12:23 AM To: Aaron Jackson; pos...@li... Subject: Re: [Postgres-xc-general] Data Node Scan Performance > [amul's reply, quoted in full in the post below] |
From: amul s. <sul...@ya...> - 2014-04-29 05:23:42
|
>On Tuesday, 29 April 2014 10:38 AM, Aaron Jackson <aja...@re...> wrote: > my question is, does the coordinator execute the data node scan serially >or in parallel - and if it's serially, >is there any thought around how to make it parallel? IMO, the scans on the datanodes happen independently, i.e. in parallel; the scan results are collected at the coordinator and returned to the client. Referring to a distributed table by something other than the distribution key (in your case Q instead of K) carries a small penalty. Regards, Amul Sul |
From: Aaron J. <aja...@re...> - 2014-04-29 05:16:09
|
Also, if it helps, this is actually an operational query, not a data-warehouse-type query. In this case, we have an object that owns a considerable amount of data below it (1:N:M relationships: rows in table A have children in table B and grandchildren in table C). The operation I want to perform is a scalable clone of that data. The actual query looks more like the following:

INSERT INTO MyTable SELECT 500 AS Id, Foo, Bar FROM MyTable WHERE Id = $1

For testing purposes (I'm simply tinkering), the data is distributed on Foo and indexed on Id, so in this case I'm copying the "values" from the old entity to the new entity. The plan appears to require the data to be pulled up into the coordinator and then dispersed back down to the data nodes. An optimization for this specific problem would be to push the INSERT INTO ... SELECT directly down to the data nodes, since there isn't any inherent benefit to the coordinator consuming the data: the distribution key is identical, so each row would be sent right back to the data node it came from. Any other thoughts on how to make this performant? If not, I'll go back to my tinkering table. ________________________________ From: Aaron Jackson [aja...@re...] Sent: Tuesday, April 29, 2014 12:06 AM To: pos...@li... Subject: [Postgres-xc-general] Data Node Scan Performance > [original message, quoted in full in the post below] |
From: Aaron J. <aja...@re...> - 2014-04-29 05:07:32
|
I have a table that I've distributed by some key K. When I want to query by some other dimension Q, the coordinator's explain plan indicates that it does a Data Node Scan on *table* "_REMOTE_TABLE_QUERY_". What I've noticed is that with 4 nodes, the coordinator-based scan may take 65 seconds, while the individual data nodes usually finish within 5-10 seconds. The individual explain plans from each data node reveal nothing. So my question is: does the coordinator execute the data node scan serially or in parallel - and if it's serially, is there any thought around how to make it parallel? In the event it is already parallel, is the time differential I'm seeing simply attributable to the coordinator gathering results in preparation to return them to the requesting client? Thanks |
From: L <zha...@gm...> - 2014-04-29 02:58:21
|
-------- Original Message -------- Subject: GTM Proxy can't start Date: Fri, 25 Apr 2014 14:55:01 +0800 From: 张紫宇 <zha...@gm...> To: pos...@li...

I installed pgxc 1.2.1 and I wanted to start gtm and gtm_proxy on the same server, but gtm_proxy didn't work.

I did it as such:

sudo su
mkdir /usr/local/pgsql/data_gtm
mkdir /usr/local/pgsql/data_gtm_proxy
chown l /usr/local/pgsql/data_gtm
chown l /usr/local/pgsql/data_gtm_proxy
su l
initgtm -Z gtm -D /usr/local/pgsql/data_gtm
initgtm -Z gtm_proxy -D /usr/local/pgsql/data_gtm_proxy
gtm -D /usr/local/pgsql/data_gtm &
gtm_proxy -D /usr/local/pgsql/data_gtm_proxy

On the last step it returns:

CST -FATAL: can not connect to GTM
LOCATION: ConnectGTM, proxy_main.c:3344

Tracing it in gtm_proxy, I found errno 111, which means Connection refused, in function connectFailureMessage, reached via GTMPQconnectPoll, connectGTMStart, PQconnectGTMStart, PQconnectGTM, ConnectGTM, RegisterProxy, BaseInit, and main.

My OS is Ubuntu 12.04 amd64; I also tested on CentOS 6, with pgxc 1.2.1 installed on both, and got the same error on each. I found a mail "http://sourceforge.net/p/postgres-xc/mailman/message/30755193/" that describes exactly the same problem. I have followed and tried every page I could find on the net but still can't solve it. Can you please tell me what I can do? Any help here would be really appreciated.

gtm.conf and gtm_proxy.conf come as follows:

gtm.conf:

# ----------------------
# GTM configuration file
# ----------------------
#
# This file must be placed in the gtm working directory
# specified by the -D command line option of gtm or gtm_ctl. The
# configuration file name must be "gtm.conf"
#
# This file consists of lines of the form
#
#   name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are
# introduced with "#" anywhere on a line. The complete list of
# parameter names and allowed values can be found in the
# Postgres-XC documentation.
#
# The commented-out settings shown in this file represent the default
# values.
#
# Re-commenting a setting is NOT sufficient to revert it to the default
# value. You need to restart the server.

#------------------------------------------------------------------------------
# GENERAL PARAMETERS
#------------------------------------------------------------------------------
nodename = 'one'                # Specifies the node name.
                                # (change requires restart)
#listen_addresses = '*'         # Listen addresses of this GTM.
                                # (change requires restart)
port = 6666                     # Port number of this GTM.
                                # (change requires restart)

#startup = ACT                  # Start mode. ACT/STANDBY.

#------------------------------------------------------------------------------
# GTM STANDBY PARAMETERS
#------------------------------------------------------------------------------
# These parameters are effective when GTM is activated as a standby server
#active_host = ''               # Listen address of active GTM.
                                # (change requires restart)
#active_port =                  # Port number of active GTM.
                                # (change requires restart)

#---------------------------------------
# OTHER OPTIONS
#---------------------------------------
#keepalives_idle = 0            # Keepalives_idle parameter.
#keepalives_interval = 0        # Keepalives_interval parameter.
#keepalives_count = 0           # Keepalives_count internal parameter.
#log_file = 'gtm.log'           # Log file name
#log_min_messages = WARNING     # log_min_messages. Default WARNING.
                                # Valid values: DEBUG, DEBUG5, DEBUG4, DEBUG3,
                                # DEBUG2, DEBUG1, INFO, NOTICE, WARNING,
                                # ERROR, LOG, FATAL, PANIC
#synchronous_backup = off       # If backup to standby is synchronous

gtm_proxy.conf:

#-----------------------------
# GTM Proxy configuration file
#-----------------------------
#
# This file must be placed in the gtm working directory
# specified by the -D command line option of gtm_proxy or gtm_ctl.
# The configuration file name must be "gtm_proxy.conf"
#
# This file consists of lines of the form
#
#   name = value
#
# (The "=" is optional.) Whitespace may be used. Comments are
# introduced with "#" anywhere on a line. The complete list of
# parameter names and allowed values can be found in the
# Postgres-XC documentation.
#
# The commented-out settings shown in this file represent the default
# values.
#
# Re-commenting a setting is NOT sufficient to revert it to the default
# value. You need to restart the server.

#------------------------------------------------------------------------------
# GENERAL PARAMETERS
#------------------------------------------------------------------------------
nodename = 'one'                # Specifies the node name.
                                # (change requires restart)
#listen_addresses = '*'         # Listen addresses of this GTM proxy.
                                # (change requires restart)
port = 6666                     # Port number of this GTM proxy.
                                # (change requires restart)

#------------------------------------------------------------------------------
# GTM PROXY PARAMETERS
#------------------------------------------------------------------------------
#worker_threads = 1             # Number of worker threads of this
                                # GTM proxy
                                # (change requires restart)

#------------------------------------------------------------------------------
# GTM CONNECTION PARAMETERS
#------------------------------------------------------------------------------
# These parameters are used to connect to a GTM server
gtm_host = 'localhost'          # Listen address of the active GTM.
                                # (change requires restart)
gtm_port = 6668                 # Port number of the active GTM.
                                # (change requires restart)

#------------------------------------------------------------------------------
# Behavior at GTM communication error
#------------------------------------------------------------------------------
#gtm_connect_retry_interval = 0 # How long (in secs) to wait until the next
                                # retry to connect to GTM.

#------------------------------------------------------------------------------
# Other options
#------------------------------------------------------------------------------
#keepalives_idle = 0            # Keepalives_idle parameter.
#keepalives_interval = 0        # Keepalives_interval parameter.
#keepalives_count = 0           # Keepalives_count internal parameter.
#log_file = 'gtm_proxy.log'     # Log file name
#log_min_messages = WARNING     # log_min_messages. Default WARNING.
                                # Valid values: DEBUG, DEBUG5, DEBUG4, DEBUG3,
                                # DEBUG2, DEBUG1, INFO, NOTICE, WARNING,
                                # ERROR, LOG, FATAL, PANIC.

--Ronian |