From: 张紫宇 <zha...@gm...> - 2014-04-25 06:55:09
|
I installed pgxc 1.2.1 and I wanted to start gtm and gtm_proxy on the same sever, but gtm_proxy didn't work. I did it as such: sudo su mkdir /usr/local/pgsql/data_gtm mkdir /usr/local/pgsql/data_gtm_proxy chown l /usr/local/pgsql/data_gtm chown l /usr/local/pgsql/data_gtm_proxy su l initgtm -Z gtm -D /usr/local/pgsql/data_gtm initgtm -Z gtm_proxy -D /usr/local/pgsql/data_gtm_proxy gtm -D /usr/local/pgsql/data_gtm & gtm_proxy -D /usr/local/pgsql/data_gtm_proxy On the last step,it returns CST -FATAL: can not connect to GTM LOCATION: ConnectGTM, proxy_main.c:3344. I followed it in the gtm_proxy,I found errorno 111 which means Connection refused in function connectFailureMessage in GTMPQconnectPoll in connectGTMStart in PQconnectGTMStart in PQconnectGTM in ConnectGTM in RegisterProxy in BaseInit in main. My os is ubuntu 12.04 amd64 and I also tested it on centos 6.I also installed pgxc 1.2.1 on both of them. But they all get the same error.I found a mail "http://sourceforge.net/p/postgres-xc/mailman/message/30755193/", we are exactly the same. I followed and tried each every pages on net, but still can't solve it. Can you please tell me what I can do? Any help here would be really appreciated. gtm.conf and gtm_proxy.conf come as follow: gtm.conf: # ---------------------- # GTM configuration file # ---------------------- # # This file must be placed on gtm working directory # specified by -D command line option of gtm or gtm_ctl. The # configuration file name must be "gtm.conf" # # # This file consists of lines of the form # # name = value # # (The "=" is optional.) Whitespace may be used. Comments are # introduced with "#" anywhere on a line. The complete list of # parameter names and allowed values can be found in the # Postgres-XC documentation. # # The commented-out settings shown in this file represent the default # values. # # Re-commenting a setting is NOT sufficient to revert it to the default # value. # # You need to restart the server. #------------------------------------------------------------------------------ # GENERAL PARAMETERS #------------------------------------------------------------------------------ nodename = 'one' # Specifies the node name. # (changes requires restart) #listen_addresses = '*' # Listen addresses of this GTM. # (changes requires restart) port = 6666 # Port number of this GTM. # (changes requires restart) #startup = ACT # Start mode. ACT/STANDBY. #------------------------------------------------------------------------------ # GTM STANDBY PARAMETERS #------------------------------------------------------------------------------ #Those parameters are effective when GTM is activated as a standby server #active_host = '' # Listen address of active GTM. # (changes requires restart) #active_port = # Port number of active GTM. # (changes requires restart) #--------------------------------------- # OTHER OPTIONS #--------------------------------------- #keepalives_idle = 0 # Keepalives_idle parameter. #keepalives_interval = 0 # Keepalives_interval parameter. #keepalives_count = 0 # Keepalives_count internal parameter. #log_file = 'gtm.log' # Log file name #log_min_messages = WARNING # log_min_messages. Default WARNING. 
# Valid value: DEBUG, DEBUG5, DEBUG4, DEBUG3, # DEBUG2, DEBUG1, INFO, NOTICE, WARNING, # ERROR, LOG, FATAL, PANIC #synchronous_backup = off # If backup to standby is synchronous gtm_proxy.conf: #----------------------------- # GTM Proxy configuration file #----------------------------- # # This file must be placed on gtm working directory # specified by -D command line option of gtm_proxy or gtm_ctl. # The configuration file name must be "gtm_proxy.conf" # # # This file consists of lines of the form # # name = value # # (The "=" is optional.) Whitespace may be used. Comments are # introduced with "#" anywhere on a line. The complete list of # parameter names and allowed values can be found in the # Postgres-XC documentation. # # The commented-out settings shown in this file represent the default # values. # # Re-commenting a setting is NOT sufficient to revert it to the default # value. # # You need to restart the server. #------------------------------------------------------------------------------ # GENERAL PARAMETERS #------------------------------------------------------------------------------ nodename = 'one' # Specifies the node name. # (changes requires restart) #listen_addresses = '*' # Listen addresses of this GTM. # (changes requires restart) port = 6666 # Port number of this GTM. # (changes requires restart) #------------------------------------------------------------------------------ # GTM PROXY PARAMETERS #------------------------------------------------------------------------------ #worker_threads = 1 # Number of the worker thread of this # GTM proxy # (changes requires restart) #------------------------------------------------------------------------------ # GTM CONNECTION PARAMETERS #------------------------------------------------------------------------------ # Those parameters are used to connect to a GTM server gtm_host = 'localhost' # Listen address of the active GTM. # (changes requires restart) gtm_port = 6668 # Port number of the active GTM. # (changes requires restart) #------------------------------------------------------------------------------ # Behavior at GTM communication error #------------------------------------------------------------------------------ #gtm_connect_retry_interval = 0 # How long (in secs) to wait until the next # retry to connect to GTM. # # #------------------------------------------------------------------------------ # Other options #------------------------------------------------------------------------------ #keepalives_idle = 0 # Keepalives_idle parameter. #keepalives_interval = 0 # Keepalives_interval parameter. #keepalives_count = 0 # Keepalives_count internal parameter. #log_file = 'gtm_proxy.log' # Log file name #log_min_messages = WARNING # log_min_messages. Default WARNING. # Valid value: DEBUG, DEBUG5, DEBUG4, DEBUG3, # DEBUG2, DEBUG1, INFO, NOTICE, WARNING, # ERROR, LOG, FATAL, PANIC. --Ronian |
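The "can not connect to GTM" failure above means gtm_proxy got "connection refused" when it tried to reach GTM at the gtm_host/gtm_port configured in gtm_proxy.conf. A minimal sketch of what to cross-check, using the values from the configuration files posted above (the corrected port numbers are only illustrative):

    # gtm.conf - the port GTM actually listens on
    #   port = 6666
    # gtm_proxy.conf - the proxy must point at that same port...
    #   gtm_host = 'localhost'
    #   gtm_port = 6666        # the posted file says 6668, which may be why the connection is refused
    # ...and should not try to listen on GTM's own port itself
    #   port = 6667

    # after adjusting the files, confirm GTM is listening before starting the proxy
    ss -lnt | grep 6666        # or: netstat -lnt | grep 6666
    gtm_proxy -D /usr/local/pgsql/data_gtm_proxy &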
From: 鈴木 幸市 <ko...@in...> - 2014-04-25 01:52:32
|
Did you back up the file you cleared? If not, it was not a good idea to clean things, because GTM writes its restart point in its working directory. If you did, then you can restore the file gtm.control back. It is in fact a text file, so if you would like, you can edit it for repair. The first line is the next GXID value. The following lines define each sequence. This could be a bit risky but should work. If you ran a GTM slave, it will also have a gtm.control which can be used as well. Good luck. --- Koichi Suzuki 2014/04/25 2:20、Aaron Jackson <aja...@re...<mailto:aja...@re...>> のメール: So I noticed a problem creating a sequence on my cluster - the logs indicated that there was a gtm failure. The GTM indicated that the sequence existed. So I stopped the GTM but left the two coordinators (and proxies) running. I then wiped out everything exception the configuration file and restarted. Soon after that, I noticed that the schemas I had created were no longer available through the coordinator - through attempts to create them said they existed. The best I can guess is that my reset caused the transaction #s to get munged and the schemas were not visible at the mvcc level I was currently querying. Long story short, I did this one to myself - however, in a real scenario, how would you go about resetting the environment if the GTM suffered a failure and needed to be rebuilt? ------------------------------------------------------------------------------ Start Your Social Network Today - Download eXo Platform Build your Enterprise Intranet with eXo Platform Software Java Based Open Source Intranet - Social, Extensible, Cloud Ready Get Started Now And Turn Your Intranet Into A Collaboration Platform http://p.sf.net/sfu/ExoPlatform_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
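Following the description above, a minimal sketch of backing up and inspecting gtm.control before attempting a repair; the working directory path is only an example:

    # back up the control file before editing anything
    cp /usr/local/pgsql/data_gtm/gtm.control /usr/local/pgsql/data_gtm/gtm.control.bak
    # first line: the next GXID value; the remaining lines describe each sequence
    head /usr/local/pgsql/data_gtm/gtm.control
    # edit with a text editor if needed, then restart GTM so it restarts from the repaired values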
From: Aaron J. <aja...@re...> - 2014-04-24 17:20:50
|
So I noticed a problem creating a sequence on my cluster - the logs indicated that there was a GTM failure. The GTM indicated that the sequence existed. So I stopped the GTM but left the two coordinators (and proxies) running. I then wiped out everything except the configuration file and restarted. Soon after that, I noticed that the schemas I had created were no longer available through the coordinator - though attempts to create them said they already existed. The best I can guess is that my reset caused the transaction IDs to get munged, so the schemas were not visible at the MVCC level I was currently querying. Long story short, I did this one to myself - however, in a real scenario, how would you go about resetting the environment if the GTM suffered a failure and needed to be rebuilt? |
From: 鈴木 幸市 <ko...@in...> - 2014-04-24 05:46:39
|
At present pgxc_ctl does this. --- Koichi Suzuki 2014/04/24 13:31、Pavan Deolasee <pav...@gm...<mailto:pav...@gm...>> のメール: I just want to make sure I'm not missing something obvious. If this is the way it's designed to work, then that's fine. Yes, this is per design. Having said that, I wonder if we should make it easier for users. One reason why we don't do it automatically today is because the nodes being added may not be online when they are being added. We discussed a scheme to handle this in the past. One option is to add an ONLINE option to create node command and if specified, coordinator can try connecting to the new node and make appropriate catalog changes there too. Command can fail if the node is Unreachable. We can also find a way to fire pgxc_pool_reload automatically when cluster definition changes. Thanks, Pavan ------------------------------------------------------------------------------ Start Your Social Network Today - Download eXo Platform Build your Enterprise Intranet with eXo Platform Software Java Based Open Source Intranet - Social, Extensible, Cloud Ready Get Started Now And Turn Your Intranet Into A Collaboration Platform http://p.sf.net/sfu/ExoPlatform_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: Pavan D. <pav...@gm...> - 2014-04-24 04:31:59
|
> > I just want to make sure I'm not missing something obvious. If this is the way it's designed to work, then that's fine. > Yes, this is per design. Having said that, I wonder if we should make it easier for users. One reason why we don't do it automatically today is because the nodes being added may not be online when they are being added. We discussed a scheme to handle this in the past. One option is to add an ONLINE option to create node command and if specified, coordinator can try connecting to the new node and make appropriate catalog changes there too. Command can fail if the node is Unreachable. We can also find a way to fire pgxc_pool_reload automatically when cluster definition changes. Thanks, Pavan |
From: Ashutosh B. <ash...@en...> - 2014-04-24 04:13:04
|
On Thu, Apr 24, 2014 at 12:49 AM, Aaron Jackson <aja...@re...>wrote: > I've read through several pieces of documentation but couldn't find an > answer, short of experimentation. I'm really unclear about how > coordinators are "managed" when they are in a cluster. From what I can > tell, it looks like each node in a cluster must be individually setup ... > for example. > > I have three nodes, A, B & C. > > - A is my GTM instance. > - B & C newly created data nodes and coordinators - nothing in them > > Now let's say I want to make B & C aware of each other at the > coordinator level. So on server B, I connect to the coordinator and issue > the following: > > CREATE NODE coord_2 WITH (TYPE = 'coordinator', PORT = 5432, HOST = > 'B'); > select pgxc_pool_reload(); > select * from pg_catalog.pgxc_node; > > And as expected, I now have two entries in pgxc_node. I hop over to > machine C ... > > select * from pg_catalog.pgxc_node; > > This returns only a row for itself. Okay, so I reload the pool and > still one row. This implies that adding a coordinator node only affects > the coordinator where it was run on. So, I must go to all machines in my > cluster and tell them that there is a new coordinator. > > I just want to make sure I'm not missing something obvious. If this is > the way it's designed to work, then that's fine. > > You are right. We have to "introduce" each node (not just coordinator but datanode as well) to all (other, if applicable), coordinators. > Aaron > > > ------------------------------------------------------------------------------ > Start Your Social Network Today - Download eXo Platform > Build your Enterprise Intranet with eXo Platform Software > Java Based Open Source Intranet - Social, Extensible, Cloud Ready > Get Started Now And Turn Your Intranet Into A Collaboration Platform > http://p.sf.net/sfu/ExoPlatform > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
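A sketch of "introducing" the nodes on every coordinator, following the CREATE NODE commands Aaron already ran on B; node names, hosts and ports are illustrative:

    -- on coordinator B, register the other coordinator (and any datanodes it does not yet know about)
    CREATE NODE coord_2 WITH (TYPE = 'coordinator', PORT = 5432, HOST = 'C');
    CREATE NODE dn_c WITH (TYPE = 'datanode', PORT = 15432, HOST = 'C');
    SELECT pgxc_pool_reload();

    -- on coordinator C, do the same for B's nodes
    CREATE NODE coord_1 WITH (TYPE = 'coordinator', PORT = 5432, HOST = 'B');
    CREATE NODE dn_b WITH (TYPE = 'datanode', PORT = 15432, HOST = 'B');
    SELECT pgxc_pool_reload();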
From: Aaron J. <aja...@re...> - 2014-04-23 19:20:12
|
I've read through several pieces of documentation but couldn't find an answer, short of experimentation. I'm really unclear about how coordinators are "managed" when they are in a cluster. From what I can tell, it looks like each node in a cluster must be individually set up ... for example. I have three nodes, A, B & C. - A is my GTM instance. - B & C are newly created data nodes and coordinators - nothing in them. Now let's say I want to make B & C aware of each other at the coordinator level. So on server B, I connect to the coordinator and issue the following: CREATE NODE coord_2 WITH (TYPE = 'coordinator', PORT = 5432, HOST = 'B'); select pgxc_pool_reload(); select * from pg_catalog.pgxc_node; And as expected, I now have two entries in pgxc_node. I hop over to machine C ... select * from pg_catalog.pgxc_node; This returns only a row for itself. Okay, so I reload the pool and there is still one row. This implies that adding a coordinator node only affects the coordinator it was run on. So, I must go to all machines in my cluster and tell them that there is a new coordinator. I just want to make sure I'm not missing something obvious. If this is the way it's designed to work, then that's fine. Aaron |
From: Juned K. <jkh...@gm...> - 2014-04-18 10:38:11
|
yeah sure i would like to test it first you mean this presentation http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013 ? On Fri, Apr 18, 2014 at 2:30 PM, 鈴木 幸市 <ko...@in...> wrote: > No, ALTER TABLE tablename DELETE NODE (nodename); > > This is very similar to the thing you should do to add the node > > ALTER TABLE table name ADD NODE (ndoename); > > It is included in my demo story at the last PG Open presentation. > > If you’re not sure, you can test it using smaller work table. > > Regards; > --- > Koichi Suzuki > > 2014/04/18 16:54、Juned Khan <jkh...@gm...> のメール: > >> You mean first i have to modify my table structure lets consider i >> have replicated tables so i have to remove "DISTRIBUTE by >> REPLICATION" right ? >> >> And then i have to run DROP node command to remove datanode slave >> >> pgxc_ctl won't do it for me ? >> i.e >> PGXC remove datanode slave datanode1 >> >> PGXC add datanode slave datanode3 node5 20008 /home/postgres/pgxc/nodes/dn_slave >> >> And how it will copy database after adding datanode slave ? >> >> >> >> On Fri, Apr 18, 2014 at 12:59 PM, 鈴木 幸市 <ko...@in...> wrote: >>> Reverse order. >>> >>> First, exclude removing node from the node set your tables are distributed >>> or replicated. You can do this with ALTER TABLE. All the row in the >>> table will be redistributed to modified set of nodes for distributed tables. >>> In the case of replicated tables, tables in the removing node will just be >>> detached. >>> >>> Then, you can issue DROP NODE before you really stop and clear the node >>> resource. >>> --- >>> Koichi Suzuki >>> >>> 2014/04/18 16:13、Juned Khan <jkh...@gm...> のメール: >>> >>> what if i just remove datanode slave and add it again ? all data will be >>> copied to slave ? >>> will it impact on master ? >>> >>> >>> On Fri, Apr 18, 2014 at 6:29 AM, 鈴木 幸市 <ko...@in...> wrote: >>>> >>>> The impact to the master server is almost the same as in the case of >>>> vanilla PG to build a slave using pg_basebackup. It sends all the master >>>> database file resource to the slave together with its WAL. Good thing >>>> compared with more primitive way of pg_start_backup() and pg_stop_backup() >>>> is the data are read directly from the cache so impact to I/O workload will >>>> be smaller. >>>> >>>> If you’re concerning safety to the master resource, there could be another >>>> means, for example, to stop the master, copy everything to the slave, change >>>> master to enable WAL shipping and slave to run as the slave. Principally, >>>> this should work and you can keep the master resource safer but I’ve not >>>> tested this yet. >>>> >>>> Regards; >>>> --- >>>> Koichi Suzuki >>>> >>>> 2014/04/17 20:23、Juned Khan <jkh...@gm...> のメール: >>>> >>>> Hi koichi, >>>> >>>> Is there any other short solution to fix this issue ? >>>> >>>> 1. Run checkpoint and vacuum full at the master, >>>> Found the docs to perform vacuum full but i have confusion about how >>>> to run checkpoint manually. >>>> >>>> 2. Build the slave from the scotch using pg_basebackup (pgxc_ctl provides >>>> this means). >>>> should i run this command from datanode slave server something like ( >>>> pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 >>>> --port=5432) >>>> >>>> And very important thing how it will impact on datanode master server ? as >>>> of now i have only this master running on server. >>>> so as of now i just don't want to take a chance, and somehow want to start >>>> slave for backup. 
>>>> >>>> Please advice >>>> >>>> Regards >>>> Juned Khan >>>> >>>> On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote: >>>>> >>>>> so i have to do experiment for this. i really need that much connection. >>>>> >>>>> Thanks for the suggestion @Michael >>>>> >>>>> >>>>> On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier >>>>> <mic...@gm...> wrote: >>>>>> >>>>>> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote: >>>>>>> And i can use pgpool and pgbouncer with pgxc right ? >>>>>> In front of the Coordinators, that's fine. But I am not sure in from >>>>>> of the Datanodes as XC has one extra connection parameter to identify >>>>>> the node type from which connection is from, and a couple of >>>>>> additional message types to pass down transaction ID, timestamp and >>>>>> snapshot data from Coordinator to Datanodes (actually Coordinators as >>>>>> well for DDL queries). If those message types and/or connection >>>>>> parameters get filtered by pgpool or pgbouncer, well you cannot use >>>>>> them. I've personally never given a try though, but the idea is worth >>>>>> an attempt to reduce lock contention that could be caused by a too >>>>>> high value of max_connections. >>>>>> -- >>>>>> Michael >>>>> >>>>> >>>>> >>>>> >>>> >>>> ------------------------------------------------------------------------------ >>>> Learn Graph Databases - Download FREE O'Reilly Book >>>> "Graph Databases" is the definitive new guide to graph databases and their >>>> applications. Written by three acclaimed leaders in the field, >>>> this first edition is now available. Download your free book today! >>>> http://p.sf.net/sfu/NeoTech_______________________________________________ >>>> Postgres-xc-general mailing list >>>> Pos...@li... >>>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> >>>> >>> >>> >>> >>> -- >> > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 09:00:38
|
No, ALTER TABLE tablename DELETE NODE (nodename); This is very similar to the thing you should do to add the node ALTER TABLE table name ADD NODE (ndoename); It is included in my demo story at the last PG Open presentation. If you’re not sure, you can test it using smaller work table. Regards; --- Koichi Suzuki 2014/04/18 16:54、Juned Khan <jkh...@gm...> のメール: > You mean first i have to modify my table structure lets consider i > have replicated tables so i have to remove "DISTRIBUTE by > REPLICATION" right ? > > And then i have to run DROP node command to remove datanode slave > > pgxc_ctl won't do it for me ? > i.e > PGXC remove datanode slave datanode1 > > PGXC add datanode slave datanode3 node5 20008 /home/postgres/pgxc/nodes/dn_slave > > And how it will copy database after adding datanode slave ? > > > > On Fri, Apr 18, 2014 at 12:59 PM, 鈴木 幸市 <ko...@in...> wrote: >> Reverse order. >> >> First, exclude removing node from the node set your tables are distributed >> or replicated. You can do this with ALTER TABLE. All the row in the >> table will be redistributed to modified set of nodes for distributed tables. >> In the case of replicated tables, tables in the removing node will just be >> detached. >> >> Then, you can issue DROP NODE before you really stop and clear the node >> resource. >> --- >> Koichi Suzuki >> >> 2014/04/18 16:13、Juned Khan <jkh...@gm...> のメール: >> >> what if i just remove datanode slave and add it again ? all data will be >> copied to slave ? >> will it impact on master ? >> >> >> On Fri, Apr 18, 2014 at 6:29 AM, 鈴木 幸市 <ko...@in...> wrote: >>> >>> The impact to the master server is almost the same as in the case of >>> vanilla PG to build a slave using pg_basebackup. It sends all the master >>> database file resource to the slave together with its WAL. Good thing >>> compared with more primitive way of pg_start_backup() and pg_stop_backup() >>> is the data are read directly from the cache so impact to I/O workload will >>> be smaller. >>> >>> If you’re concerning safety to the master resource, there could be another >>> means, for example, to stop the master, copy everything to the slave, change >>> master to enable WAL shipping and slave to run as the slave. Principally, >>> this should work and you can keep the master resource safer but I’ve not >>> tested this yet. >>> >>> Regards; >>> --- >>> Koichi Suzuki >>> >>> 2014/04/17 20:23、Juned Khan <jkh...@gm...> のメール: >>> >>> Hi koichi, >>> >>> Is there any other short solution to fix this issue ? >>> >>> 1. Run checkpoint and vacuum full at the master, >>> Found the docs to perform vacuum full but i have confusion about how >>> to run checkpoint manually. >>> >>> 2. Build the slave from the scotch using pg_basebackup (pgxc_ctl provides >>> this means). >>> should i run this command from datanode slave server something like ( >>> pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 >>> --port=5432) >>> >>> And very important thing how it will impact on datanode master server ? as >>> of now i have only this master running on server. >>> so as of now i just don't want to take a chance, and somehow want to start >>> slave for backup. >>> >>> Please advice >>> >>> Regards >>> Juned Khan >>> >>> On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote: >>>> >>>> so i have to do experiment for this. i really need that much connection. 
>>>> >>>> Thanks for the suggestion @Michael >>>> >>>> >>>> On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier >>>> <mic...@gm...> wrote: >>>>> >>>>> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote: >>>>>> And i can use pgpool and pgbouncer with pgxc right ? >>>>> In front of the Coordinators, that's fine. But I am not sure in from >>>>> of the Datanodes as XC has one extra connection parameter to identify >>>>> the node type from which connection is from, and a couple of >>>>> additional message types to pass down transaction ID, timestamp and >>>>> snapshot data from Coordinator to Datanodes (actually Coordinators as >>>>> well for DDL queries). If those message types and/or connection >>>>> parameters get filtered by pgpool or pgbouncer, well you cannot use >>>>> them. I've personally never given a try though, but the idea is worth >>>>> an attempt to reduce lock contention that could be caused by a too >>>>> high value of max_connections. >>>>> -- >>>>> Michael >>>> >>>> >>>> >>>> >>> >>> ------------------------------------------------------------------------------ >>> Learn Graph Databases - Download FREE O'Reilly Book >>> "Graph Databases" is the definitive new guide to graph databases and their >>> applications. Written by three acclaimed leaders in the field, >>> this first edition is now available. Download your free book today! >>> http://p.sf.net/sfu/NeoTech_______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >> >> >> >> -- > |
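A sketch of the removal order described above, with hypothetical table and node names:

    -- first move each table off the node being removed (rows are redistributed / the replica is detached)
    ALTER TABLE accounts DELETE NODE (datanode1);
    -- then drop the node from the cluster definition and refresh the pooler
    DROP NODE datanode1;
    SELECT pgxc_pool_reload();
    -- adding a node back later is the mirror image:
    -- ALTER TABLE accounts ADD NODE (datanode1);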
From: Juned K. <jkh...@gm...> - 2014-04-18 07:55:05
|
You mean first i have to modify my table structure lets consider i have replicated tables so i have to remove "DISTRIBUTE by REPLICATION" right ? And then i have to run DROP node command to remove datanode slave pgxc_ctl won't do it for me ? i.e PGXC remove datanode slave datanode1 PGXC add datanode slave datanode3 node5 20008 /home/postgres/pgxc/nodes/dn_slave And how it will copy database after adding datanode slave ? On Fri, Apr 18, 2014 at 12:59 PM, 鈴木 幸市 <ko...@in...> wrote: > Reverse order. > > First, exclude removing node from the node set your tables are distributed > or replicated. You can do this with ALTER TABLE. All the row in the > table will be redistributed to modified set of nodes for distributed tables. > In the case of replicated tables, tables in the removing node will just be > detached. > > Then, you can issue DROP NODE before you really stop and clear the node > resource. > --- > Koichi Suzuki > > 2014/04/18 16:13、Juned Khan <jkh...@gm...> のメール: > > what if i just remove datanode slave and add it again ? all data will be > copied to slave ? > will it impact on master ? > > > On Fri, Apr 18, 2014 at 6:29 AM, 鈴木 幸市 <ko...@in...> wrote: >> >> The impact to the master server is almost the same as in the case of >> vanilla PG to build a slave using pg_basebackup. It sends all the master >> database file resource to the slave together with its WAL. Good thing >> compared with more primitive way of pg_start_backup() and pg_stop_backup() >> is the data are read directly from the cache so impact to I/O workload will >> be smaller. >> >> If you’re concerning safety to the master resource, there could be another >> means, for example, to stop the master, copy everything to the slave, change >> master to enable WAL shipping and slave to run as the slave. Principally, >> this should work and you can keep the master resource safer but I’ve not >> tested this yet. >> >> Regards; >> --- >> Koichi Suzuki >> >> 2014/04/17 20:23、Juned Khan <jkh...@gm...> のメール: >> >> Hi koichi, >> >> Is there any other short solution to fix this issue ? >> >> 1. Run checkpoint and vacuum full at the master, >> Found the docs to perform vacuum full but i have confusion about how >> to run checkpoint manually. >> >> 2. Build the slave from the scotch using pg_basebackup (pgxc_ctl provides >> this means). >> should i run this command from datanode slave server something like ( >> pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 >> --port=5432) >> >> And very important thing how it will impact on datanode master server ? as >> of now i have only this master running on server. >> so as of now i just don't want to take a chance, and somehow want to start >> slave for backup. >> >> Please advice >> >> Regards >> Juned Khan >> >> On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote: >>> >>> so i have to do experiment for this. i really need that much connection. >>> >>> Thanks for the suggestion @Michael >>> >>> >>> On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier >>> <mic...@gm...> wrote: >>>> >>>> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote: >>>> > And i can use pgpool and pgbouncer with pgxc right ? >>>> In front of the Coordinators, that's fine. 
But I am not sure in from >>>> of the Datanodes as XC has one extra connection parameter to identify >>>> the node type from which connection is from, and a couple of >>>> additional message types to pass down transaction ID, timestamp and >>>> snapshot data from Coordinator to Datanodes (actually Coordinators as >>>> well for DDL queries). If those message types and/or connection >>>> parameters get filtered by pgpool or pgbouncer, well you cannot use >>>> them. I've personally never given a try though, but the idea is worth >>>> an attempt to reduce lock contention that could be caused by a too >>>> high value of max_connections. >>>> -- >>>> Michael >>> >>> >>> >>> >> >> ------------------------------------------------------------------------------ >> Learn Graph Databases - Download FREE O'Reilly Book >> "Graph Databases" is the definitive new guide to graph databases and their >> applications. Written by three acclaimed leaders in the field, >> this first edition is now available. Download your free book today! >> http://p.sf.net/sfu/NeoTech_______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > > -- |
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 07:29:25
|
Reverse order. First, exclude removing node from the node set your tables are distributed or replicated. You can do this with ALTER TABLE. All the row in the table will be redistributed to modified set of nodes for distributed tables. In the case of replicated tables, tables in the removing node will just be detached. Then, you can issue DROP NODE before you really stop and clear the node resource. --- Koichi Suzuki 2014/04/18 16:13、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: what if i just remove datanode slave and add it again ? all data will be copied to slave ? will it impact on master ? On Fri, Apr 18, 2014 at 6:29 AM, 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> wrote: The impact to the master server is almost the same as in the case of vanilla PG to build a slave using pg_basebackup. It sends all the master database file resource to the slave together with its WAL. Good thing compared with more primitive way of pg_start_backup() and pg_stop_backup() is the data are read directly from the cache so impact to I/O workload will be smaller. If you’re concerning safety to the master resource, there could be another means, for example, to stop the master, copy everything to the slave, change master to enable WAL shipping and slave to run as the slave. Principally, this should work and you can keep the master resource safer but I’ve not tested this yet. Regards; --- Koichi Suzuki 2014/04/17 20:23、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: Hi koichi, Is there any other short solution to fix this issue ? 1. Run checkpoint and vacuum full at the master, Found the docs to perform vacuum full but i have confusion about how to run checkpoint manually. 2. Build the slave from the scotch using pg_basebackup (pgxc_ctl provides this means). should i run this command from datanode slave server something like ( pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432) And very important thing how it will impact on datanode master server ? as of now i have only this master running on server. so as of now i just don't want to take a chance, and somehow want to start slave for backup. Please advice Regards Juned Khan On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote: so i have to do experiment for this. i really need that much connection. Thanks for the suggestion @Michael On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <mic...@gm...<mailto:mic...@gm...>> wrote: On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote: > And i can use pgpool and pgbouncer with pgxc right ? In front of the Coordinators, that's fine. But I am not sure in from of the Datanodes as XC has one extra connection parameter to identify the node type from which connection is from, and a couple of additional message types to pass down transaction ID, timestamp and snapshot data from Coordinator to Datanodes (actually Coordinators as well for DDL queries). If those message types and/or connection parameters get filtered by pgpool or pgbouncer, well you cannot use them. I've personally never given a try though, but the idea is worth an attempt to reduce lock contention that could be caused by a too high value of max_connections. -- Michael ------------------------------------------------------------------------------ Learn Graph Databases - Download FREE O'Reilly Book "Graph Databases" is the definitive new guide to graph databases and their applications. 
Written by three acclaimed leaders in the field, this first edition is now available. Download your free book today! http://p.sf.net/sfu/NeoTech_______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com<http://www.inextrix.com/> |
From: Juned K. <jkh...@gm...> - 2014-04-18 07:14:03
|
what if i just remove datanode slave and add it again ? all data will be copied to slave ? will it impact on master ? On Fri, Apr 18, 2014 at 6:29 AM, 鈴木 幸市 <ko...@in...> wrote: > The impact to the master server is almost the same as in the case of > vanilla PG to build a slave using pg_basebackup. It sends all the master > database file resource to the slave together with its WAL. Good thing > compared with more primitive way of pg_start_backup() and pg_stop_backup() > is the data are read directly from the cache so impact to I/O workload will > be smaller. > > If you’re concerning safety to the master resource, there could be > another means, for example, to stop the master, copy everything to the > slave, change master to enable WAL shipping and slave to run as the slave. > Principally, this should work and you can keep the master resource safer > but I’ve not tested this yet. > > Regards; > --- > Koichi Suzuki > > 2014/04/17 20:23、Juned Khan <jkh...@gm...> のメール: > > Hi koichi, > > Is there any other short solution to fix this issue ? > > 1. Run checkpoint and vacuum full at the master, > Found the docs to perform vacuum full but i have confusion about how > to run checkpoint manually. > > 2. Build the slave from the scotch using pg_basebackup (pgxc_ctl provides > this means). > should i run this command from datanode slave server something like > ( pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 > --port=5432) > > And very important thing how it will impact on datanode master server ? > as of now i have only this master running on server. > so as of now i just don't want to take a chance, and somehow want to > start slave for backup. > > Please advice > > Regards > Juned Khan > > On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote: > >> so i have to do experiment for this. i really need that much connection. >> >> Thanks for the suggestion @Michael >> >> >> On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier < >> mic...@gm...> wrote: >> >>> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote: >>> > And i can use pgpool and pgbouncer with pgxc right ? >>> In front of the Coordinators, that's fine. But I am not sure in from >>> of the Datanodes as XC has one extra connection parameter to identify >>> the node type from which connection is from, and a couple of >>> additional message types to pass down transaction ID, timestamp and >>> snapshot data from Coordinator to Datanodes (actually Coordinators as >>> well for DDL queries). If those message types and/or connection >>> parameters get filtered by pgpool or pgbouncer, well you cannot use >>> them. I've personally never given a try though, but the idea is worth >>> an attempt to reduce lock contention that could be caused by a too >>> high value of max_connections. >>> -- >>> Michael >>> >> >> >> >> > ------------------------------------------------------------------------------ > Learn Graph Databases - Download FREE O'Reilly Book > "Graph Databases" is the definitive new guide to graph databases and their > applications. Written by three acclaimed leaders in the field, > this first edition is now available. Download your free book today! > http://p.sf.net/sfu/NeoTech_______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
From: 鈴木 幸市 <ko...@in...> - 2014-04-18 00:59:43
|
The impact to the master server is almost the same as in the case of vanilla PG to build a slave using pg_basebackup. It sends all the master database file resource to the slave together with its WAL. Good thing compared with more primitive way of pg_start_backup() and pg_stop_backup() is the data are read directly from the cache so impact to I/O workload will be smaller. If you’re concerning safety to the master resource, there could be another means, for example, to stop the master, copy everything to the slave, change master to enable WAL shipping and slave to run as the slave. Principally, this should work and you can keep the master resource safer but I’ve not tested this yet. Regards; --- Koichi Suzuki 2014/04/17 20:23、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: Hi koichi, Is there any other short solution to fix this issue ? 1. Run checkpoint and vacuum full at the master, Found the docs to perform vacuum full but i have confusion about how to run checkpoint manually. 2. Build the slave from the scotch using pg_basebackup (pgxc_ctl provides this means). should i run this command from datanode slave server something like ( pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432) And very important thing how it will impact on datanode master server ? as of now i have only this master running on server. so as of now i just don't want to take a chance, and somehow want to start slave for backup. Please advice Regards Juned Khan On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote: so i have to do experiment for this. i really need that much connection. Thanks for the suggestion @Michael On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <mic...@gm...<mailto:mic...@gm...>> wrote: On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> wrote: > And i can use pgpool and pgbouncer with pgxc right ? In front of the Coordinators, that's fine. But I am not sure in from of the Datanodes as XC has one extra connection parameter to identify the node type from which connection is from, and a couple of additional message types to pass down transaction ID, timestamp and snapshot data from Coordinator to Datanodes (actually Coordinators as well for DDL queries). If those message types and/or connection parameters get filtered by pgpool or pgbouncer, well you cannot use them. I've personally never given a try though, but the idea is worth an attempt to reduce lock contention that could be caused by a too high value of max_connections. -- Michael ------------------------------------------------------------------------------ Learn Graph Databases - Download FREE O'Reilly Book "Graph Databases" is the definitive new guide to graph databases and their applications. Written by three acclaimed leaders in the field, this first edition is now available. Download your free book today! http://p.sf.net/sfu/NeoTech_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
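A minimal sketch of rebuilding the slave with pg_basebackup as discussed; the host, port and target directory follow the values Juned mentions below and are only examples:

    # run on the slave server; streams the master's data files and WAL over a normal connection
    pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432
    # -R writes a recovery.conf so the copy comes up as a standby following the master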
From: Juned K. <jkh...@gm...> - 2014-04-17 11:24:06
|
Hi koichi, Is there any other short solution to fix this issue ? 1. Run checkpoint and vacuum full at the master, Found the docs to perform vacuum full but i have confusion about how to run checkpoint manually. 2. Build the slave from the scotch using pg_basebackup (pgxc_ctl provides this means). should i run this command from datanode slave server something like ( pg_basebackup -U postgres -R -D /srv/pgsql/standby --host=192.168.1.17 --port=5432) And very important thing how it will impact on datanode master server ? as of now i have only this master running on server. so as of now i just don't want to take a chance, and somehow want to start slave for backup. Please advice Regards Juned Khan On Thu, Apr 17, 2014 at 4:43 PM, Juned Khan <jkh...@gm...> wrote: > so i have to do experiment for this. i really need that much connection. > > Thanks for the suggestion @Michael > > > On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier < > mic...@gm...> wrote: > >> On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote: >> > And i can use pgpool and pgbouncer with pgxc right ? >> In front of the Coordinators, that's fine. But I am not sure in from >> of the Datanodes as XC has one extra connection parameter to identify >> the node type from which connection is from, and a couple of >> additional message types to pass down transaction ID, timestamp and >> snapshot data from Coordinator to Datanodes (actually Coordinators as >> well for DDL queries). If those message types and/or connection >> parameters get filtered by pgpool or pgbouncer, well you cannot use >> them. I've personally never given a try though, but the idea is worth >> an attempt to reduce lock contention that could be caused by a too >> high value of max_connections. >> -- >> Michael >> > > > > |
From: Juned K. <jkh...@gm...> - 2014-04-17 11:13:29
|
so i have to do experiment for this. i really need that much connection. Thanks for the suggestion @Michael On Thu, Apr 17, 2014 at 11:40 AM, Michael Paquier <mic...@gm... > wrote: > On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote: > > And i can use pgpool and pgbouncer with pgxc right ? > In front of the Coordinators, that's fine. But I am not sure in from > of the Datanodes as XC has one extra connection parameter to identify > the node type from which connection is from, and a couple of > additional message types to pass down transaction ID, timestamp and > snapshot data from Coordinator to Datanodes (actually Coordinators as > well for DDL queries). If those message types and/or connection > parameters get filtered by pgpool or pgbouncer, well you cannot use > them. I've personally never given a try though, but the idea is worth > an attempt to reduce lock contention that could be caused by a too > high value of max_connections. > -- > Michael > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
From: Juned K. <jkh...@gm...> - 2014-04-17 09:30:39
|
Ohh i see i think i read the same earlier but still i can see data in newly created node. myDB=# select count(*) from accounts; count ------- 79 (1 row) And all datanode has same records, so i wonder from where data comes from ? is it synced automatically ? On Thu, Apr 17, 2014 at 1:53 PM, 鈴木 幸市 <ko...@in...> wrote: > While adding a coordinator or a datanode, only catalogs are copied. You > should issue ALTER TABLE to each table to redistribute table rows. > If you don’t want new datanode to be invoked in some tables, you can skip > ALTER TABLE. Table rows will be distributed in the old set of > datanodes. > > If you do it manually, it’s long process and there could be many > pitfalls. Hope pgxc_ctl helps. > > Regards; > --- > Koichi Suzuki > > 2014/04/17 16:49、Juned Khan <jkh...@gm...> のメール: > > Hi All, > > I want to add few more coordinator and datnode in pgxc. during adding > process whole database is being copy to new component. > > I have very large database around 8GB, in this database only one table is > there which holds the large number of records. > > is there anyway to exclude such table while adding new coordinator or > datanode using pgxc_ctl command ? > > I want to exclude some table just to make adding coordinator and datanode > process smother and faster. > > Please suggest. > > -- > Thanks, > Juned Khan > <http://www.inextrix.com/> > > ------------------------------------------------------------------------------ > Learn Graph Databases - Download FREE O'Reilly Book > "Graph Databases" is the definitive new guide to graph databases and their > applications. Written by three acclaimed leaders in the field, > this first edition is now available. Download your free book today! > http://p.sf.net/sfu/NeoTech_______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
From: Ashutosh B. <ash...@en...> - 2014-04-17 09:16:41
|
On Thu, Apr 17, 2014 at 1:19 PM, Juned Khan <jkh...@gm...> wrote: > Hi All, > > I want to add few more coordinator and datnode in pgxc. during adding > process whole database is being copy to new component. > > That shouldn't happen. It should copy only the object definitions and catalog information and not the data. How are you copying the database? > I have very large database around 8GB, in this database only one table is > there which holds the large number of records. > > is there anyway to exclude such table while adding new coordinator or > datanode using pgxc_ctl command ? > > I want to exclude some table just to make adding coordinator and datanode > process smother and faster. > > Please suggest. > > -- > Thanks, > Juned Khan > <http://www.inextrix.com/> > > > ------------------------------------------------------------------------------ > Learn Graph Databases - Download FREE O'Reilly Book > "Graph Databases" is the definitive new guide to graph databases and their > applications. Written by three acclaimed leaders in the field, > this first edition is now available. Download your free book today! > http://p.sf.net/sfu/NeoTech > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
From: 鈴木 幸市 <ko...@in...> - 2014-04-17 08:23:55
|
While adding a coordinator or a datanode, only catalogs are copied. You should issue ALTER TABLE to each table to redistribute table rows. If you don’t want new datanode to be invoked in some tables, you can skip ALTER TABLE. Table rows will be distributed in the old set of datanodes. If you do it manually, it’s long process and there could be many pitfalls. Hope pgxc_ctl helps. Regards; --- Koichi Suzuki 2014/04/17 16:49、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: Hi All, I want to add few more coordinator and datnode in pgxc. during adding process whole database is being copy to new component. I have very large database around 8GB, in this database only one table is there which holds the large number of records. is there anyway to exclude such table while adding new coordinator or datanode using pgxc_ctl command ? I want to exclude some table just to make adding coordinator and datanode process smother and faster. Please suggest. -- Thanks, Juned Khan <http://www.inextrix.com/> ------------------------------------------------------------------------------ Learn Graph Databases - Download FREE O'Reilly Book "Graph Databases" is the definitive new guide to graph databases and their applications. Written by three acclaimed leaders in the field, this first edition is now available. Download your free book today! http://p.sf.net/sfu/NeoTech_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
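A sketch of the redistribution step described above; the table and node names are hypothetical:

    -- after the new datanode is registered, rebalance each table onto it
    ALTER TABLE accounts ADD NODE (datanode3);
    -- tables you skip keep their rows on the old set of datanodes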
From: Juned K. <jkh...@gm...> - 2014-04-17 07:49:16
|
Hi All, I want to add a few more coordinators and datanodes in pgxc. During the adding process the whole database is being copied to the new component. I have a very large database, around 8GB, and in this database there is only one table which holds the large number of records. Is there any way to exclude such a table while adding a new coordinator or datanode using the pgxc_ctl command? I want to exclude some tables just to make the process of adding a coordinator and datanode smoother and faster. Please suggest. -- Thanks, Juned Khan <http://www.inextrix.com/> |
From: Michael P. <mic...@gm...> - 2014-04-17 06:10:27
|
On Thu, Apr 17, 2014 at 2:51 PM, Juned Khan <jkh...@gm...> wrote: > And i can use pgpool and pgbouncer with pgxc right ? In front of the Coordinators, that's fine. But I am not sure about in front of the Datanodes, as XC has one extra connection parameter to identify the node type the connection is from, and a couple of additional message types to pass down transaction ID, timestamp and snapshot data from Coordinator to Datanodes (actually Coordinators as well, for DDL queries). If those message types and/or connection parameters get filtered by pgpool or pgbouncer, well, you cannot use them. I've personally never given it a try though, but the idea is worth an attempt to reduce the lock contention that could be caused by too high a value of max_connections. -- Michael |
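A minimal pgbouncer sketch for pooling client connections in front of a Coordinator, as suggested above; the database name, addresses and pool sizes are assumptions:

    ; pgbouncer.ini
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; session pooling is the safest mode to start with
    pool_mode = session
    ; many client connections funnel into a much smaller pool of Coordinator connections
    max_client_conn = 1000
    default_pool_size = 50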
From: Juned K. <jkh...@gm...> - 2014-04-17 05:51:57
|
And i can use pgpool and pgbouncer with pgxc right ? On Thu, Apr 17, 2014 at 10:07 AM, Michael Paquier <mic...@gm... > wrote: > > > > On Thu, Apr 17, 2014 at 1:13 PM, Juned Khan <jkh...@gm...> wrote: > >> Hi Michael, >> >> Why i had to set max_connection value to that much because it was giving >> me error like. >> >> FATAL: sorry, too many clients already at /ust/local/... >> >> should i use pgpool to handle that much connection? >> > pgbouncer is a better choice if you just want connection pooling. pgpool > is a swiss knife with more features that you may need. > -- > Michael > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
From: Michael P. <mic...@gm...> - 2014-04-17 04:37:21
|
On Thu, Apr 17, 2014 at 1:13 PM, Juned Khan <jkh...@gm...> wrote: > Hi Michael, > > Why i had to set max_connection value to that much because it was giving > me error like. > > FATAL: sorry, too many clients already at /ust/local/... > > should i use pgpool to handle that much connection? > pgbouncer is a better choice if you just want connection pooling. pgpool is a Swiss army knife with more features than you may need. -- Michael |
From: Juned K. <jkh...@gm...> - 2014-04-17 04:16:12
|
Hi koicihi, Can i search for the WAL file which contains this value and edit it and then try to start datanode slave again ? How it will impact to existing components, i mean what risks are there ? On Thu, Apr 17, 2014 at 9:43 AM, Juned Khan <jkh...@gm...> wrote: > Hi Michael, > > Why i had to set max_connection value to that much because it was giving > me error like. > > FATAL: sorry, too many clients already at /ust/local/... > > should i use pgpool to handle that much connection? > > > > On Thu, Apr 17, 2014 at 6:41 AM, Michael Paquier < > mic...@gm...> wrote: > >> >> >> >> On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote: >> >>> Hi All, >>> >>> When i tried to set max_connection value to 1000 it gave me this error >>> >> That's a lot, man! Concurrency between sessions is going to blow up your >> performance. >> >> From the docs i came to know that to use than much connection i have to >>> modify kernel configuration >>> >>> But now i am now trying to set 500 connection instead of 1000 then its >>> giving below error. >>> >>> LOG: database system was interrupted while in recovery at log time >>> 2014-04-16 09:06:06 WAT >>> HINT: If this has occurred more than once some data might be corrupted >>> and you might need to choose an earlier recovery target. >>> LOG: entering standby mode >>> LOG: restored log file "000000010000000B00000011" from archive >>> FATAL: hot standby is not possible because max_connections = 500 is a >>> lower setting than on the master server (its value was 1000) >>> LOG: startup process (PID 16829) exited with exit code 1 >>> LOG: aborting startup due to startup process failure >>> >> Anyone please suggest if any other way is there to fix this like clearing >>> cache or something so it can read correct values. >>> >> Update the master first, then the slave. >> -- >> Michael >> > > > > > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
From: Juned K. <jkh...@gm...> - 2014-04-17 04:13:24
|
Hi Michael, Why i had to set max_connection value to that much because it was giving me error like. FATAL: sorry, too many clients already at /ust/local/... should i use pgpool to handle that much connection? On Thu, Apr 17, 2014 at 6:41 AM, Michael Paquier <mic...@gm...>wrote: > > > > On Wed, Apr 16, 2014 at 9:53 PM, Juned Khan <jkh...@gm...> wrote: > >> Hi All, >> >> When i tried to set max_connection value to 1000 it gave me this error >> > That's a lot, man! Concurrency between sessions is going to blow up your > performance. > > From the docs i came to know that to use than much connection i have to >> modify kernel configuration >> >> But now i am now trying to set 500 connection instead of 1000 then its >> giving below error. >> >> LOG: database system was interrupted while in recovery at log time >> 2014-04-16 09:06:06 WAT >> HINT: If this has occurred more than once some data might be corrupted >> and you might need to choose an earlier recovery target. >> LOG: entering standby mode >> LOG: restored log file "000000010000000B00000011" from archive >> FATAL: hot standby is not possible because max_connections = 500 is a >> lower setting than on the master server (its value was 1000) >> LOG: startup process (PID 16829) exited with exit code 1 >> LOG: aborting startup due to startup process failure >> > Anyone please suggest if any other way is there to fix this like clearing >> cache or something so it can read correct values. >> > Update the master first, then the slave. > -- > Michael > |
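A sketch of Michael's advice above (update the master first, then the slave) for lowering the setting; the data directory paths are hypothetical:

    # on the datanode master: set max_connections = 500 in postgresql.conf, then restart
    pg_ctl restart -D /usr/local/pgsql/data_datanode
    # on the slave: set max_connections to the same (or a higher) value, then restart
    pg_ctl restart -D /usr/local/pgsql/data_datanode_slave
    # hot standby requires the standby's max_connections to be >= the master's setting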