From: 鈴木 幸市 <ko...@in...> - 2014-05-07 08:21:52
Please understand that GTM is not a performance bottleneck. It may need a dedicated network segment and a server, but GTM's load average is quite low. I understand this ends up with bad workload balance, and this can be a concern for some people.

Without GTM, each node may have to do much more calculation and exchange much more data to get a snapshot and to determine whether a given row can be vacuumed or frozen. I don't know how effective this could be.

Regards;
---
Koichi Suzuki
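As a rough illustration of the per-node exchange described above: a Coordinator can already ask each node for its local view of in-progress transactions, and a GTM-less design would have to gather and merge something like this every time a consistent cluster-wide snapshot is needed. This is only a sketch of the idea, not the proposed implementation; EXECUTE DIRECT and txid_current_snapshot() are existing facilities, while the node names are placeholders.

-- Hypothetical probe: ask each Datanode for its local snapshot
-- (xmin, xmax and the list of in-progress XIDs). A GTM-less design
-- would have to collect and merge this kind of state from every node
-- whenever a cluster-wide snapshot is required.
EXECUTE DIRECT ON (datanode1) 'SELECT txid_current_snapshot()';
EXECUTE DIRECT ON (datanode2) 'SELECT txid_current_snapshot()';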
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 08:18:01
Okay, I see. Let me think about another action to take.
---
Koichi Suzuki
From: Juned K. <jkh...@gm...> - 2014-05-07 08:12:18
You mean I should try to restart gtm_pxy2 and coord2? I have already tried to restart these components, but they didn't start at all, and it also caused other components to stop functioning, so I am very scared to do anything with the live pgxc environment.

--
Thanks,
Juned Khan
iNextrix Technologies Pvt Ltd.
www.inextrix.com
From: ZhangJulian <jul...@ou...> - 2014-05-07 08:10:55
Of course it is attractive. The performance bottleneck, GTM, could be removed, and the cluster would be comprised of only one type of unified component. :)

Thanks
Julian
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 08:00:25
Right. Maybe I can find how to calculate this without GTM and GXID. Anyway, I think we should keep track of the root XID and the local XIDs. I'm now designing how to do this and hope we can share the outcome soon. The algorithm could be complicated, but the cluster configuration may look significantly simpler. Do you find providing global MVCC without GTM/GXID attractive?

Regards;
---
Koichi Suzuki
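To make the "root XID plus local XID" bookkeeping a little more concrete, one way to picture the data each node would have to keep is a mapping from its local XIDs to the originating root transaction. The table below is purely hypothetical; the design being discussed here is not published, and the names are invented for illustration only.

-- Hypothetical per-node bookkeeping, one row per local transaction.
-- Visibility of a row written by local_xid would be decided by the
-- commit status of its root transaction, not by the local XID alone.
CREATE TABLE local_xid_to_root (
    local_xid xid  NOT NULL,  -- XID assigned on this node
    root_node text NOT NULL,  -- node where the application's transaction started
    root_xid  xid  NOT NULL   -- XID of that "root" transaction
);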
From: ZhangJulian <jul...@ou...> - 2014-05-07 07:51:36
Oh, yes. The oldest GXID must be in the snapshot of the oldest alive GXID. So if we can know the oldest alive GXID, we can derive the oldest GXID which is still referred to.

From: ko...@in...
To: jul...@ou...
CC: koi...@gm...; dor...@gm...; pos...@li...
Subject: Re: [Postgres-xc-general] Pgxc_ctl Primer draft
Date: Wed, 7 May 2014 04:00:25 +0000

Oldest alive GXID is not correct. We need the referred oldest GXID, which is the oldest GXID that appears in any snapshot still being used. Please consider that in the case of a long, repeatable-read transaction, the lifetime of a snapshot can be very long.

Regards;
---
Koichi Suzuki

2014/05/07 12:25、ZhangJulian <jul...@ou...> のメール:

I said 'time' as the clock value. You had considered more than I had known. For VACUUM, as I understand it, if some data can be vacuumed but is not vacuumed in time, this is OK. So if we collect the oldest alive GXID, even if it is smaller than the current accurate value, it can still guide VACUUM. Am I right?

Thanks
Julian

From: ko...@in...
To: jul...@ou...
CC: koi...@gm...; dor...@gm...; pos...@li...
Subject: Re: [Postgres-xc-general] Pgxc_ctl Primer draft
Date: Wed, 7 May 2014 02:40:43 +0000

What do you mean by "time-based policy"? Is it based on (G)XID, or on a real clock value? To my view, it depends upon what "time" we depend on. Yes, I'm now studying whether we can use a real "clock" value for this. In that case, we may not need GTM if the clock is accurate enough among the servers involved.

If you mean not to use a global "snapshot", and if it is feasible, we may not need GTM. If we associate each local transaction with its "root" transaction, which is the transaction the application generated directly, we can maintain visibility by calculating the "snapshot" each time it is needed, by collecting it from all the other nodes.

We need to consider "vacuum". I've not found a good way to determine whether some "deleted" rows can be removed from the database and whether some "live" row's xmin value can be frozen.

Regards;
---
Koichi Suzuki

2014/05/07 11:19、ZhangJulian <jul...@ou...> のメール:

Is it possible to do the row visibility check based on a time-based policy? That is:

1. Each data node maintains a data structure: gtid - start time - end time. Only the gtids modifying data on the current data node are contained.
2. Each data node maintains the oldest alive gtid, which may not be updated synchronously.
3. GTM is only responsible for generating a sequence of GTIDs, each of which is only an integer value.
4. The time on different data nodes may not be consistent, but I think in some scenarios the application can bear the small difference.

Are there any potential issues?

Thanks

> Date: Sun, 4 May 2014 19:36:20 +0900
> From: koi...@gm...
> To: dor...@gm...
> CC: pos...@li...
> Subject: Re: [Postgres-xc-general] Pgxc_ctl Primer draft
>
> As discussed at last year's XC-day, GTM proxy should be integrated as a postmaster backend. Maybe GTM can be too. Coordinator/Datanode can also be integrated into one.
>
> Apparently, this is the direction we should take. At first, there was no such good experience to start with. Before version 1.0, we determined that the datanode and the coordinator can share the same binary. It is true that we started with the idea of providing cluster-wide MVCC, and now we have found the next direction.
>
> With this integration, when starting with only one node, we don't need GTM, which looks identical to standalone PG. When we add a server, at present we do need GTM. Only accumulating local transactions in the nodes cannot maintain cluster-wide database consistency.
>
> I'm still investigating an idea of how to get rid of GTM. We need to do the following:
>
> 1) Provide cluster-wide MVCC,
> 2) Provide a good means to determine which rows can be vacuumed.
>
> My current idea is: if we associate any local XID with the root transaction (the transaction which the application created), we may be able to provide cluster-wide MVCC by calculating a cluster-wide snapshot when needed. I don't know how efficient it is, and I don't have a good idea how to determine whether a given row can be vacuumed.
>
> This is the current situation. Hope to have much more input on this.
>
> Anyway, hope my draft helps people who are trying to use Postgres-XC.
>
> Best;
> ---
> Koichi Suzuki
>
> 2014-05-04 19:05 GMT+09:00 Dorian Hoxha <dor...@gm...>:
> > Probably even the gtm-proxy needs to be merged with datanode+coordinator, from what I read.
> >
> > If you make only local transactions (inside 1 datanode) and do not use global sequences, will there be no traffic to the GTM for that transaction?
> >
> > On Sun, May 4, 2014 at 6:24 AM, Michael Paquier <mic...@gm...> wrote:
> >> On Sun, May 4, 2014 at 12:59 AM, Dorian Hoxha <dor...@gm...> wrote:
> >> >> You just need commodity INTEL server runnign Linux.
> >> > Are INTEL cpu required ? If not INTEL can be removed ? (also running typo)
> >> Not really... I agree to what you mean here.
> >>
> >> >> For datawarehouse applications, you may need separate patch which devides complexed query into smaller chunks which run in datanodes in parallel. StormDB will provide such patche.
> >> > Wasn't stormdb bought by another company ? Is there an opensource alternative ? Fix the "patche" typo ?
> >> >
> >> > A way to make it simpler is by merging coordinator and datanode into 1 and making it possible for a 'node' to not hold data (be a coordinator only), like in elastic-search, but you probably already know that.
> >> +1. This would alleviate data transfer in cross-node joins where Coordinator and Datanodes are on separate servers. You could always have both nodes on the same server with the XC of now... But that's double the number of nodes to monitor.
> >>
> >> > What exact things does the gtm-proxy do? For example, a single row insert wouldn't need the gtm (the coordinator just inserts it into the right data-node), assuming no sequences, since for that the gtm is needed?
> >> Grouping messages between Coordinator/Datanode and GTM to reduce packet interference and improve performance.
> >>
> >> > If multiple tables are sharded on the same key (example: user_id), will all the rows from the same user in different tables be in the same data-node?
> >> Yep. The node choice algorithm is based on the data type of the key.
> >> --
> >> Michael
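To make the "referred oldest GXID" a bit more concrete: on a single PostgreSQL node, the lower bound of a session's current snapshot can be read with txid_snapshot_xmin(); the cluster-wide vacuum horizon sketched in this thread would, roughly, be the minimum of such values over every open snapshot on every node. The query below only shows the per-session building block, not the cluster-wide mechanism itself.

-- xmin of this session's current snapshot: no transaction older than
-- this is still in progress as far as this snapshot is concerned.
-- A cluster-wide "referred oldest GXID" would be the minimum of this
-- value taken over all active snapshots on all nodes.
SELECT txid_snapshot_xmin(txid_current_snapshot()) AS snapshot_xmin;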
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 07:47:42
Oh, your datanode has an inconsistency. I wonder how that happened. This will be the reason why your datanode slave does not work.

From the result of your pgxc_ctl monitor all command, your datanode is still running. What you can do is restart the stopped gtm_pxy2 and coord2. This will not be harmful. Please try it and let me know what's going on. If you have a problem, please share what you get from the server log.

Regards;
---
Koichi Suzuki
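When a coordinator keeps reporting "Failed to get pooled connections", it is worth confirming, from that coordinator, which nodes it thinks exist and then reloading the pooler. pgxc_node and pgxc_pool_reload() are standard Postgres-XC facilities; the node names and connection details are of course specific to each cluster.

-- On the coordinator that reports the pooler errors: list the nodes it
-- knows about and check that host/port match the processes actually running.
SELECT node_name, node_type, node_host, node_port FROM pgxc_node;

-- After correcting any mismatch (ALTER NODE ...), refresh the pooler's view.
SELECT pgxc_pool_reload();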
From: Juned K. <jkh...@gm...> - 2014-05-07 07:25:38
|
here is the logs coord LOG: failed to connect to Datanode WARNING: can not connect to node 16384 LOG: failed to acquire connections STATEMENT: LOCK TABLE public.accounts IN ACCESS SHARE MODE ERROR: Failed to get pooled connections STATEMENT: LOCK TABLE public.accounts IN ACCESS SHARE MODE datanode: WARNING: archive_mode enabled, yet archive_command is not set WARNING: archive_mode enabled, yet archive_command is not set WARNING: archive_mode enabled, yet archive_command is not set On Wed, May 7, 2014 at 12:39 PM, 鈴木 幸市 <ko...@in...> wrote: > Did you have any more messages to the log file of the node (both > coordinator and datanode)? Maybe you have some extra messages in the > server log at the coordinator. > > > --- > Koichi Suzuki > > 2014/05/07 16:03、Juned Khan <jkh...@gm...> のメール: > > nope it didn't > > astpp=# SELECT pgxc_pool_reload(); > pgxc_pool_reload > ------------------ > t > (1 row) > > astpp=# \q > PGXC pg_dumpall -h db02 > dumpall.sql > pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled > connections > pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS > SHARE MODE > pg_dumpall: pg_dump failed on database "astpp", exiting > > > > On Wed, May 7, 2014 at 12:29 PM, 鈴木 幸市 <ko...@in...> wrote: > >> Hmm, did pgxc_pool_reload() work? >> --- >> Koichi Suzuki >> >> 2014/05/07 14:33、Juned Khan <jkh...@gm...> のメール: >> >> PGXC pg_dumpall -h db02 > dumpall.sql >> pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled >> connections >> pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS >> SHARE MODE >> pg_dumpall: pg_dump failed on database "astpp", exiting >> >> >> >> On Wed, May 7, 2014 at 10:56 AM, 鈴木 幸市 <ko...@in...> wrote: >> >>> What error did you have? >>> --- >>> Koichi Suzuki >>> >>> 2014/05/07 14:18、Juned Khan <jkh...@gm...> のメール: >>> >>> yeah i tried to restart the coord2 and gtm_pxy2 but no success. >>> I have tried to take pg_dumpall but its giving me same error. >>> >>> please suggest. >>> >>> Regards >>> Juned Khan >>> >>> On Wed, May 7, 2014 at 6:31 AM, 鈴木 幸市 <ko...@in...> wrote: >>> >>>> Did you try to restart the coord2 and gtm_pxy2? Also, I advise to >>>> use pg_dumpall, not pg_dump unless you’re backing up single database. >>>> I’m afraid coord2 had had some issue. >>>> >>>> Regards; >>>> --- >>>> Koichi Suzuki >>>> >>>> 2014/05/06 21:50、Juned Khan <jkh...@gm...> のメール: >>>> >>>> nope i have directly did those steps on live server. I don't know >>>> exactly what cause that problem. one day i just tried to run some queries >>>> manually and got those errors. >>>> >>>> I have tried to restart all components several times without success. >>>> >>>> now another problem is i am not able to take dump of database. i am >>>> getting this error. >>>> >>>> postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql >>>> pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled >>>> connections >>>> pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN >>>> ACCESS SHARE MODE >>>> >>>> Please suggest. >>>> >>>> >>>> >>>> On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...>wrote: >>>> >>>>> Did you make any test before you did any action on your database? >>>>> Also could you share what you did to have this situation? Anyway, >>>>> with this situation, I believe no DDL has been handled except for >>>>> temporary object, which is session specific. 
>>>>> >>>>> So I believe you can restart GTM proxy gtm_pxy2 (you don't have to >>>>> reinitialize it as long as you maintain gtm_proxy.conf) and coord2. >>>>> >>>>> Anyway, it is very essential to record and share what you did, and >>>>> more important thing is to test it with non-product environment and >>>>> see what goes on, and then review any step you are taking before you >>>>> do. >>>>> >>>>> Hope to have more info on this. >>>>> >>>>> Best Regards; >>>>> --- >>>>> Koichi Suzuki >>>>> >>>>> >>>>> 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...>: >>>>> > Hi Koichi, >>>>> > >>>>> > I tried to follow the steps removing datanode slave and adding it >>>>> again but >>>>> > i no success. even earlier only datanode slave was not working but >>>>> now gtm >>>>> > slave, gtm_pxy2, coord2 and dn_slave is not starting up. I have >>>>> removed GTM >>>>> > slave and datanode slave >>>>> > >>>>> > here is my current status of pgxc >>>>> > >>>>> > PGXC monitor all >>>>> > Running: gtm master >>>>> > Running: gtm proxy gtm_pxy1 >>>>> > Not running: gtm proxy gtm_pxy2 >>>>> > Running: coordinator master coord1 >>>>> > Not running: coordinator master coord2 >>>>> > Running: datanode master datanode1 >>>>> > >>>>> > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 >>>>> -d >>>>> > mydatabase) but its not allowing me to modify table structure. and >>>>> giving me >>>>> > below error. >>>>> > >>>>> > mydatabase=# alter table invoice_summary_data add countrycode text >>>>> not null >>>>> > default ''::character(1); >>>>> > ERROR: Failed to get pooled connections >>>>> > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT >>>>> > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" >>>>> > >>>>> > So as of now i am planning to remove those components which are not >>>>> working >>>>> > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. >>>>> > >>>>> > what will be the impact of this, i don't want to loost database >>>>> access >>>>> > completely. >>>>> > >>>>> > Please suggest. >>>>> > >>>>> > Regards >>>>> > Juned Khan >>>>> > >>>>> > >>>>> ------------------------------------------------------------------------------ >>>>> > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For >>>>> FREE >>>>> > Instantly run your Selenium tests across 300+ browser/OS combos. Get >>>>> > unparalleled scalability from the best Selenium testing platform >>>>> available. >>>>> > Simple to use. Nothing to install. Get started now for free." >>>>> > http://p.sf.net/sfu/SauceLabs >>>>> > _______________________________________________ >>>>> > Postgres-xc-general mailing list >>>>> > Pos...@li... >>>>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>>> > >>>>> >>>> >>>> >>>> >>> >> >> >> -- >> Thanks, >> Juned Khan >> iNextrix Technologies Pvt Ltd. >> www.inextrix.com >> >> >> > > > -- > Thanks, > Juned Khan > iNextrix Technologies Pvt Ltd. > www.inextrix.com > > > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
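The repeated datanode message above is a plain PostgreSQL warning rather than an XC-specific problem: archive_mode is on but no archive_command has been set, so completed WAL segments have nowhere to be copied. It is unrelated to the pooler error, but it can be resolved in the datanode's postgresql.conf; the archive directory below is only an illustration.

  # postgresql.conf on the datanode
  archive_mode = on
  archive_command = 'cp %p /path/to/wal_archive/%f'
  # or, if WAL archiving is not wanted at all:
  # archive_mode = off

Changing archive_command takes effect on a configuration reload; switching archive_mode on or off requires a server restart.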
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 07:09:21
|
Did you have any more messages to the log file of the node (both coordinator and datanode)? Maybe you have some extra messages in the server log at the coordinator. --- Koichi Suzuki 2014/05/07 16:03、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: nope it didn't astpp=# SELECT pgxc_pool_reload(); pgxc_pool_reload ------------------ t (1 row) astpp=# \q PGXC pg_dumpall -h db02 > dumpall.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE pg_dumpall: pg_dump failed on database "astpp", exiting On Wed, May 7, 2014 at 12:29 PM, 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> wrote: Hmm, did pgxc_pool_reload() work? --- Koichi Suzuki 2014/05/07 14:33、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: PGXC pg_dumpall -h db02 > dumpall.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE pg_dumpall: pg_dump failed on database "astpp", exiting On Wed, May 7, 2014 at 10:56 AM, 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> wrote: What error did you have? --- Koichi Suzuki 2014/05/07 14:18、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: yeah i tried to restart the coord2 and gtm_pxy2 but no success. I have tried to take pg_dumpall but its giving me same error. please suggest. Regards Juned Khan On Wed, May 7, 2014 at 6:31 AM, 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> wrote: Did you try to restart the coord2 and gtm_pxy2? Also, I advise to use pg_dumpall, not pg_dump unless you’re backing up single database. I’m afraid coord2 had had some issue. Regards; --- Koichi Suzuki 2014/05/06 21:50、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: nope i have directly did those steps on live server. I don't know exactly what cause that problem. one day i just tried to run some queries manually and got those errors. I have tried to restart all components several times without success. now another problem is i am not able to take dump of database. i am getting this error. postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE Please suggest. On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...<mailto:koi...@gm...>> wrote: Did you make any test before you did any action on your database? Also could you share what you did to have this situation? Anyway, with this situation, I believe no DDL has been handled except for temporary object, which is session specific. So I believe you can restart GTM proxy gtm_pxy2 (you don't have to reinitialize it as long as you maintain gtm_proxy.conf) and coord2. Anyway, it is very essential to record and share what you did, and more important thing is to test it with non-product environment and see what goes on, and then review any step you are taking before you do. Hope to have more info on this. Best Regards; --- Koichi Suzuki 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...<mailto:jkh...@gm...>>: > Hi Koichi, > > I tried to follow the steps removing datanode slave and adding it again but > i no success. even earlier only datanode slave was not working but now gtm > slave, gtm_pxy2, coord2 and dn_slave is not starting up. 
I have removed GTM > slave and datanode slave > > here is my current status of pgxc > > PGXC monitor all > Running: gtm master > Running: gtm proxy gtm_pxy1 > Not running: gtm proxy gtm_pxy2 > Running: coordinator master coord1 > Not running: coordinator master coord2 > Running: datanode master datanode1 > > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d > mydatabase) but its not allowing me to modify table structure. and giving me > below error. > > mydatabase=# alter table invoice_summary_data add countrycode text not null > default ''::character(1); > ERROR: Failed to get pooled connections > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" > > So as of now i am planning to remove those components which are not working > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. > > what will be the impact of this, i don't want to loost database access > completely. > > Please suggest. > > Regards > Juned Khan > > ------------------------------------------------------------------------------ > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > Instantly run your Selenium tests across 300+ browser/OS combos. Get > unparalleled scalability from the best Selenium testing platform available. > Simple to use. Nothing to install. Get started now for free." > http://p.sf.net/sfu/SauceLabs > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li...<mailto:Pos...@li...> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com<http://www.inextrix.com/> -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com<http://www.inextrix.com/> |
From: Juned K. <jkh...@gm...> - 2014-05-07 07:03:43
|
nope it didn't astpp=# SELECT pgxc_pool_reload(); pgxc_pool_reload ------------------ t (1 row) astpp=# \q PGXC pg_dumpall -h db02 > dumpall.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE pg_dumpall: pg_dump failed on database "astpp", exiting On Wed, May 7, 2014 at 12:29 PM, 鈴木 幸市 <ko...@in...> wrote: > Hmm, did pgxc_pool_reload() work? > --- > Koichi Suzuki > > 2014/05/07 14:33、Juned Khan <jkh...@gm...> のメール: > > PGXC pg_dumpall -h db02 > dumpall.sql > pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled > connections > pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS > SHARE MODE > pg_dumpall: pg_dump failed on database "astpp", exiting > > > > On Wed, May 7, 2014 at 10:56 AM, 鈴木 幸市 <ko...@in...> wrote: > >> What error did you have? >> --- >> Koichi Suzuki >> >> 2014/05/07 14:18、Juned Khan <jkh...@gm...> のメール: >> >> yeah i tried to restart the coord2 and gtm_pxy2 but no success. >> I have tried to take pg_dumpall but its giving me same error. >> >> please suggest. >> >> Regards >> Juned Khan >> >> On Wed, May 7, 2014 at 6:31 AM, 鈴木 幸市 <ko...@in...> wrote: >> >>> Did you try to restart the coord2 and gtm_pxy2? Also, I advise to >>> use pg_dumpall, not pg_dump unless you’re backing up single database. >>> I’m afraid coord2 had had some issue. >>> >>> Regards; >>> --- >>> Koichi Suzuki >>> >>> 2014/05/06 21:50、Juned Khan <jkh...@gm...> のメール: >>> >>> nope i have directly did those steps on live server. I don't know >>> exactly what cause that problem. one day i just tried to run some queries >>> manually and got those errors. >>> >>> I have tried to restart all components several times without success. >>> >>> now another problem is i am not able to take dump of database. i am >>> getting this error. >>> >>> postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql >>> pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled >>> connections >>> pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS >>> SHARE MODE >>> >>> Please suggest. >>> >>> >>> >>> On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...>wrote: >>> >>>> Did you make any test before you did any action on your database? >>>> Also could you share what you did to have this situation? Anyway, >>>> with this situation, I believe no DDL has been handled except for >>>> temporary object, which is session specific. >>>> >>>> So I believe you can restart GTM proxy gtm_pxy2 (you don't have to >>>> reinitialize it as long as you maintain gtm_proxy.conf) and coord2. >>>> >>>> Anyway, it is very essential to record and share what you did, and >>>> more important thing is to test it with non-product environment and >>>> see what goes on, and then review any step you are taking before you >>>> do. >>>> >>>> Hope to have more info on this. >>>> >>>> Best Regards; >>>> --- >>>> Koichi Suzuki >>>> >>>> >>>> 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...>: >>>> > Hi Koichi, >>>> > >>>> > I tried to follow the steps removing datanode slave and adding it >>>> again but >>>> > i no success. even earlier only datanode slave was not working but >>>> now gtm >>>> > slave, gtm_pxy2, coord2 and dn_slave is not starting up. 
I have >>>> removed GTM >>>> > slave and datanode slave >>>> > >>>> > here is my current status of pgxc >>>> > >>>> > PGXC monitor all >>>> > Running: gtm master >>>> > Running: gtm proxy gtm_pxy1 >>>> > Not running: gtm proxy gtm_pxy2 >>>> > Running: coordinator master coord1 >>>> > Not running: coordinator master coord2 >>>> > Running: datanode master datanode1 >>>> > >>>> > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d >>>> > mydatabase) but its not allowing me to modify table structure. and >>>> giving me >>>> > below error. >>>> > >>>> > mydatabase=# alter table invoice_summary_data add countrycode text >>>> not null >>>> > default ''::character(1); >>>> > ERROR: Failed to get pooled connections >>>> > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT >>>> > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" >>>> > >>>> > So as of now i am planning to remove those components which are not >>>> working >>>> > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. >>>> > >>>> > what will be the impact of this, i don't want to loost database access >>>> > completely. >>>> > >>>> > Please suggest. >>>> > >>>> > Regards >>>> > Juned Khan >>>> > >>>> > >>>> ------------------------------------------------------------------------------ >>>> > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE >>>> > Instantly run your Selenium tests across 300+ browser/OS combos. Get >>>> > unparalleled scalability from the best Selenium testing platform >>>> available. >>>> > Simple to use. Nothing to install. Get started now for free." >>>> > http://p.sf.net/sfu/SauceLabs >>>> > _______________________________________________ >>>> > Postgres-xc-general mailing list >>>> > Pos...@li... >>>> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>>> > >>>> >>> >>> >>> >> > > > -- > Thanks, > Juned Khan > iNextrix Technologies Pvt Ltd. > www.inextrix.com > > > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 06:59:14
|
Hmm, did pgxc_pool_reload() work? --- Koichi Suzuki 2014/05/07 14:33、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: PGXC pg_dumpall -h db02 > dumpall.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE pg_dumpall: pg_dump failed on database "astpp", exiting On Wed, May 7, 2014 at 10:56 AM, 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> wrote: What error did you have? --- Koichi Suzuki 2014/05/07 14:18、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: yeah i tried to restart the coord2 and gtm_pxy2 but no success. I have tried to take pg_dumpall but its giving me same error. please suggest. Regards Juned Khan On Wed, May 7, 2014 at 6:31 AM, 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> wrote: Did you try to restart the coord2 and gtm_pxy2? Also, I advise to use pg_dumpall, not pg_dump unless you’re backing up single database. I’m afraid coord2 had had some issue. Regards; --- Koichi Suzuki 2014/05/06 21:50、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: nope i have directly did those steps on live server. I don't know exactly what cause that problem. one day i just tried to run some queries manually and got those errors. I have tried to restart all components several times without success. now another problem is i am not able to take dump of database. i am getting this error. postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE Please suggest. On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...<mailto:koi...@gm...>> wrote: Did you make any test before you did any action on your database? Also could you share what you did to have this situation? Anyway, with this situation, I believe no DDL has been handled except for temporary object, which is session specific. So I believe you can restart GTM proxy gtm_pxy2 (you don't have to reinitialize it as long as you maintain gtm_proxy.conf) and coord2. Anyway, it is very essential to record and share what you did, and more important thing is to test it with non-product environment and see what goes on, and then review any step you are taking before you do. Hope to have more info on this. Best Regards; --- Koichi Suzuki 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...<mailto:jkh...@gm...>>: > Hi Koichi, > > I tried to follow the steps removing datanode slave and adding it again but > i no success. even earlier only datanode slave was not working but now gtm > slave, gtm_pxy2, coord2 and dn_slave is not starting up. I have removed GTM > slave and datanode slave > > here is my current status of pgxc > > PGXC monitor all > Running: gtm master > Running: gtm proxy gtm_pxy1 > Not running: gtm proxy gtm_pxy2 > Running: coordinator master coord1 > Not running: coordinator master coord2 > Running: datanode master datanode1 > > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d > mydatabase) but its not allowing me to modify table structure. and giving me > below error. > > mydatabase=# alter table invoice_summary_data add countrycode text not null > default ''::character(1); > ERROR: Failed to get pooled connections > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" > > So as of now i am planning to remove those components which are not working > (gtm_pxy2,coord2 ). 
so it will not go to connect cood2 which is down. > > what will be the impact of this, i don't want to loost database access > completely. > > Please suggest. > > Regards > Juned Khan > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
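When pgxc_pool_reload() returns t but pooled connections still cannot be obtained, it is worth checking what the coordinator actually has registered before reloading again. A short check from psql, assuming the standard Postgres-XC node catalog pgxc_node; the database name is the one already used in the thread.

  astpp=# SELECT node_name, node_type, node_host, node_port FROM pgxc_node;
  astpp=# SELECT pgxc_pool_reload();

If a registered host or port no longer matches a running server, ALTER NODE ... WITH (HOST = '...', PORT = ...) on that coordinator, followed by another pgxc_pool_reload(), brings the pooler definition back in line with reality.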
From: Juned K. <jkh...@gm...> - 2014-05-07 05:34:01
|
PGXC pg_dumpall -h db02 > dumpall.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE pg_dumpall: pg_dump failed on database "astpp", exiting On Wed, May 7, 2014 at 10:56 AM, 鈴木 幸市 <ko...@in...> wrote: > What error did you have? > --- > Koichi Suzuki > > 2014/05/07 14:18、Juned Khan <jkh...@gm...> のメール: > > yeah i tried to restart the coord2 and gtm_pxy2 but no success. > I have tried to take pg_dumpall but its giving me same error. > > please suggest. > > Regards > Juned Khan > > On Wed, May 7, 2014 at 6:31 AM, 鈴木 幸市 <ko...@in...> wrote: > >> Did you try to restart the coord2 and gtm_pxy2? Also, I advise to use >> pg_dumpall, not pg_dump unless you’re backing up single database. I’m >> afraid coord2 had had some issue. >> >> Regards; >> --- >> Koichi Suzuki >> >> 2014/05/06 21:50、Juned Khan <jkh...@gm...> のメール: >> >> nope i have directly did those steps on live server. I don't know >> exactly what cause that problem. one day i just tried to run some queries >> manually and got those errors. >> >> I have tried to restart all components several times without success. >> >> now another problem is i am not able to take dump of database. i am >> getting this error. >> >> postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql >> pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled >> connections >> pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS >> SHARE MODE >> >> Please suggest. >> >> >> >> On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...>wrote: >> >>> Did you make any test before you did any action on your database? >>> Also could you share what you did to have this situation? Anyway, >>> with this situation, I believe no DDL has been handled except for >>> temporary object, which is session specific. >>> >>> So I believe you can restart GTM proxy gtm_pxy2 (you don't have to >>> reinitialize it as long as you maintain gtm_proxy.conf) and coord2. >>> >>> Anyway, it is very essential to record and share what you did, and >>> more important thing is to test it with non-product environment and >>> see what goes on, and then review any step you are taking before you >>> do. >>> >>> Hope to have more info on this. >>> >>> Best Regards; >>> --- >>> Koichi Suzuki >>> >>> >>> 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...>: >>> > Hi Koichi, >>> > >>> > I tried to follow the steps removing datanode slave and adding it >>> again but >>> > i no success. even earlier only datanode slave was not working but now >>> gtm >>> > slave, gtm_pxy2, coord2 and dn_slave is not starting up. I have >>> removed GTM >>> > slave and datanode slave >>> > >>> > here is my current status of pgxc >>> > >>> > PGXC monitor all >>> > Running: gtm master >>> > Running: gtm proxy gtm_pxy1 >>> > Not running: gtm proxy gtm_pxy2 >>> > Running: coordinator master coord1 >>> > Not running: coordinator master coord2 >>> > Running: datanode master datanode1 >>> > >>> > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d >>> > mydatabase) but its not allowing me to modify table structure. and >>> giving me >>> > below error. 
>>> > >>> > mydatabase=# alter table invoice_summary_data add countrycode text >>> not null >>> > default ''::character(1); >>> > ERROR: Failed to get pooled connections >>> > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT >>> > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" >>> > >>> > So as of now i am planning to remove those components which are not >>> working >>> > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. >>> > >>> > what will be the impact of this, i don't want to loost database access >>> > completely. >>> > >>> > Please suggest. >>> > >>> > Regards >>> > Juned Khan -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 05:26:40
|
What error did you have? --- Koichi Suzuki 2014/05/07 14:18、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: yeah i tried to restart the coord2 and gtm_pxy2 but no success. I have tried to take pg_dumpall but its giving me same error. please suggest. Regards Juned Khan On Wed, May 7, 2014 at 6:31 AM, 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> wrote: Did you try to restart the coord2 and gtm_pxy2? Also, I advise to use pg_dumpall, not pg_dump unless you’re backing up single database. I’m afraid coord2 had had some issue. Regards; --- Koichi Suzuki 2014/05/06 21:50、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: nope i have directly did those steps on live server. I don't know exactly what cause that problem. one day i just tried to run some queries manually and got those errors. I have tried to restart all components several times without success. now another problem is i am not able to take dump of database. i am getting this error. postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE Please suggest. On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...<mailto:koi...@gm...>> wrote: Did you make any test before you did any action on your database? Also could you share what you did to have this situation? Anyway, with this situation, I believe no DDL has been handled except for temporary object, which is session specific. So I believe you can restart GTM proxy gtm_pxy2 (you don't have to reinitialize it as long as you maintain gtm_proxy.conf) and coord2. Anyway, it is very essential to record and share what you did, and more important thing is to test it with non-product environment and see what goes on, and then review any step you are taking before you do. Hope to have more info on this. Best Regards; --- Koichi Suzuki 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...<mailto:jkh...@gm...>>: > Hi Koichi, > > I tried to follow the steps removing datanode slave and adding it again but > i no success. even earlier only datanode slave was not working but now gtm > slave, gtm_pxy2, coord2 and dn_slave is not starting up. I have removed GTM > slave and datanode slave > > here is my current status of pgxc > > PGXC monitor all > Running: gtm master > Running: gtm proxy gtm_pxy1 > Not running: gtm proxy gtm_pxy2 > Running: coordinator master coord1 > Not running: coordinator master coord2 > Running: datanode master datanode1 > > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d > mydatabase) but its not allowing me to modify table structure. and giving me > below error. > > mydatabase=# alter table invoice_summary_data add countrycode text not null > default ''::character(1); > ERROR: Failed to get pooled connections > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" > > So as of now i am planning to remove those components which are not working > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. > > what will be the impact of this, i don't want to loost database access > completely. > > Please suggest. > > Regards > Juned Khan > > ------------------------------------------------------------------------------ > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > Instantly run your Selenium tests across 300+ browser/OS combos. 
|
From: Juned K. <jkh...@gm...> - 2014-05-07 05:19:06
|
yeah i tried to restart the coord2 and gtm_pxy2 but no success. I have tried to take pg_dumpall but its giving me same error. please suggest. Regards Juned Khan On Wed, May 7, 2014 at 6:31 AM, 鈴木 幸市 <ko...@in...> wrote: > Did you try to restart the coord2 and gtm_pxy2? Also, I advise to use > pg_dumpall, not pg_dump unless you’re backing up single database. I’m > afraid coord2 had had some issue. > > Regards; > --- > Koichi Suzuki > > 2014/05/06 21:50、Juned Khan <jkh...@gm...> のメール: > > nope i have directly did those steps on live server. I don't know > exactly what cause that problem. one day i just tried to run some queries > manually and got those errors. > > I have tried to restart all components several times without success. > > now another problem is i am not able to take dump of database. i am > getting this error. > > postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql > pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled > connections > pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS > SHARE MODE > > Please suggest. > > > > On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...>wrote: > >> Did you make any test before you did any action on your database? >> Also could you share what you did to have this situation? Anyway, >> with this situation, I believe no DDL has been handled except for >> temporary object, which is session specific. >> >> So I believe you can restart GTM proxy gtm_pxy2 (you don't have to >> reinitialize it as long as you maintain gtm_proxy.conf) and coord2. >> >> Anyway, it is very essential to record and share what you did, and >> more important thing is to test it with non-product environment and >> see what goes on, and then review any step you are taking before you >> do. >> >> Hope to have more info on this. >> >> Best Regards; >> --- >> Koichi Suzuki >> >> >> 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...>: >> > Hi Koichi, >> > >> > I tried to follow the steps removing datanode slave and adding it again >> but >> > i no success. even earlier only datanode slave was not working but now >> gtm >> > slave, gtm_pxy2, coord2 and dn_slave is not starting up. I have removed >> GTM >> > slave and datanode slave >> > >> > here is my current status of pgxc >> > >> > PGXC monitor all >> > Running: gtm master >> > Running: gtm proxy gtm_pxy1 >> > Not running: gtm proxy gtm_pxy2 >> > Running: coordinator master coord1 >> > Not running: coordinator master coord2 >> > Running: datanode master datanode1 >> > >> > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d >> > mydatabase) but its not allowing me to modify table structure. and >> giving me >> > below error. >> > >> > mydatabase=# alter table invoice_summary_data add countrycode text not >> null >> > default ''::character(1); >> > ERROR: Failed to get pooled connections >> > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT >> > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" >> > >> > So as of now i am planning to remove those components which are not >> working >> > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. >> > >> > what will be the impact of this, i don't want to loost database access >> > completely. >> > >> > Please suggest. 
>> > >> > Regards >> > Juned Khan -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
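If coord2 really cannot be brought back, the ALTER TABLE failure quoted above (the EXECUTE DIRECT ON (coord2) statement) comes from the surviving coordinator still having coord2 registered, so every DDL tries to reach it. A hedged sketch of taking the dead components out of the picture, using the node names from the monitor output; the pgxc_ctl remove syntax varies a little between versions, so check its built-in help, and rehearse this on a non-production copy first, as advised earlier in the thread.

  -- either directly on the surviving coordinator, from psql:
  mydatabase=# DROP NODE coord2;
  mydatabase=# SELECT pgxc_pool_reload();

  -- or, since the cluster is managed with pgxc_ctl, from its prompt:
  PGXC remove coordinator master coord2 clean
  PGXC remove gtm_proxy gtm_pxy2 clean

After that, DDL is no longer propagated to coord2, at the cost of losing that coordinator until it is added back.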
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 04:00:37
|
The oldest alive GXID is not correct. We need the oldest referred GXID, which is the oldest GXID that appears in any snapshot still in use. Please consider that in the case of a long-running REPEATABLE READ transaction, the lifetime of a snapshot can be very long.

Regards;
---
Koichi Suzuki

On 2014/05/07 12:25, ZhangJulian <jul...@ou...> wrote:

I said 'time' as the clock value. You had considered more than I had known.
For the VACUUM, as my understanding, if some data which can be vacuumed, but is not vacuumed in time, this is OK. So if we collect the oldest alive GXID, even it is smaller than the current accurate value, it still can guide to VACUUM. Am I right?

Thanks
Julian
|
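The difference matters because a long transaction holds its snapshot, and everything that snapshot might still need to see has to survive vacuum; the limit is the snapshot's xmin, not the id of the oldest transaction that is still running. Plain PostgreSQL shows the same thing locally, and the session below is offered only as an analogy for the GXID discussion (txid_current_snapshot() is a standard PostgreSQL function; the numbers are made up).

  -- a long-lived REPEATABLE READ session
  BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  SELECT txid_current_snapshot();
  -- e.g. 1000:1042:1005,1011   (xmin : xmax : still-in-progress list)
  -- While this snapshot is held, no row version deleted by transaction
  -- 1000 or later can be vacuumed away, even if the deleting transaction
  -- committed long ago, because this session may still need to see it.

So the vacuum horizon has to be the minimum xmin over all snapshots still referenced, which is the "referred oldest GXID" above, rather than the oldest transaction that is still alive.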
From: ZhangJulian <jul...@ou...> - 2014-05-07 03:25:38
|
I said 'time' as the clock value. You had considered more than I had known. For the VACUUM, as my understanding, if some data which can be vacuumed, but is not vacuumed in time, this is OK. So if we collect the oldest alive GXID, even it is smaller than the current accurate value, it still can guide to VACUUM. Am I right? Thanks Julian From: ko...@in... To: jul...@ou... CC: koi...@gm...; dor...@gm...; pos...@li... Subject: Re: [Postgres-xc-general] Pgxc_ctl Primer draft Date: Wed, 7 May 2014 02:40:43 +0000 What do you mean by “time-based policy”, does it based on (G)XID, or real clock value? To my view, it depend upon what “time” we depend on. Yes, I’m now studying if we can use real “clock” value for this. In this case, we may not need GTM if the clock is accurate enough among servers involved. If you mean not to use global “snapshot” and if it is feasible, we may not need GTM. If we associate each local transaction to its “root” transaction, which is the transaction application generated directly, we can maintaing the visibility by calculating the “snapshot” each time needed, by collecting it from all the other nodes. We need to consider the “vacuum”. I’ve not found a good way to determine if some “deleted” rows can be removed from the database and if some “live” row’s xmim value can be frozen. Regards; --- Koichi Suzuki 2014/05/07 11:19、ZhangJulian <jul...@ou...> のメール: Is it possible to do the row visibility check based on a time based policy? That is, 1. Each data node maintains a data structure: gtid - start time - end time. Only the gtids modifying data on current data node are contained. 2. Each data node maintains the oldest alive gtid, which may not be updated synchronously. 3. GTM is only responsible to generate a sequence of GTID, which is only an integer value. 4. The time in different data nodes may be not consistent, but I think in some scenario, the application can bear the little difference. Is there any potential issues? Thanks > Date: Sun, 4 May 2014 19:36:20 +0900 > From: koi...@gm... > To: dor...@gm... > CC: pos...@li... > Subject: Re: [Postgres-xc-general] Pgxc_ctl Primer draft > > As discussed in the last year's XC-day, GTM proxy should be integrated > as postmaster backend. Maybe GTM can be. Coordinator/Datanode > can also be integrated into one. > > Apparently, this is the direction we should take. At first, there > were no such good experience to start with. Before version 1.0, we > determined that the datanode and the coordinator can share the same > binary. It is true that we started with the idea to provide > cluster-wide MVCC and now we found the next direction. > > With this integration and when start with only one node, we don't need > GTM, which looks identical to standalone PG. When we add the server, > at present we do need GTM. Only accumulating local transactions in > the nodes cannot maintain cluster-wide database consistency. > > I'm still investigating an idea how to get rid of GTM. We need to do > the following: > > 1) To provide cluster wide MVCC, > 2) To provide good means to determine which row can be vacuumed. > > My current idea is: if we associate any local XID to the root > transaction (the transaction which application created), we may be > able to provide cluster wide MVCC by calculating cluster-wide snapshot > when needed. I don't know how efficient it is and t don't have good > idea how to determine if a given row can be vacuumed. > > This is the current situation. > > Hope to have much more input on this. 
> > Anyway, hope my draft helps people who is trying to use Postgres-XC. > > Best; > --- > Koichi Suzuki > > > 2014-05-04 19:05 GMT+09:00 Dorian Hoxha <dor...@gm...>: > > Probably even the gtm-proxy need to be merged with datanode+coordinator from > > what i read. > > > > If you make only local transactions (inside 1 datanode) + not using global > > sequences, will there be no traffic to the GTM for that transaction ? > > > > > > On Sun, May 4, 2014 at 6:24 AM, Michael Paquier <mic...@gm...> > > wrote: > >> > >> On Sun, May 4, 2014 at 12:59 AM, Dorian Hoxha <dor...@gm...> > >> wrote: > >> >> You just need commodity INTEL server runnign Linux. > >> > Are INTEL cpu required ? If not INTEL can be removed ? (also running > >> > typo) > >> Not really... I agree to what you mean here. > >> > >> >> For datawarehouse > >> >> > >> >> applications, you may need separate patch which devides complexed query > >> >> into smaller > >> >> > >> >> chunks which run in datanodes in parallel. StormDB will provide such > >> >> patche. > >> > > >> > Wasn't stormdb bought by another company ? Is there an opensource > >> > alternative ? Fix the "patche" typo ? > >> > > >> > A way to make it simpler is by merging coordinator and datanode into 1 > >> > and > >> > making it possible for a 'node' to not hold data (be a coordinator > >> > only), > >> > like in elastic-search, but you probably already know that. > >> +1. This would alleviate data transfer between cross-node joins where > >> Coordinator and Datanodes are on separate servers. You could always > >> have both nodes on the same server with the XC of now... But that's > >> double number of nodes to monitor. > >> > >> > What exact things does the gtm-proxy do? For example, a single row > >> > insert > >> > wouldn't need the gtm (coordinator just inserts it to the right > >> > data-node)(asumming no sequences, since for that the gtm is needed)? > >> Grouping messages between Coordinator/Datanode and GTM to reduce > >> package interferences and improve performance. > >> > >> > If multiple tables are sharded on the same key (example: user_id). Will > >> > all > >> > the rows, from the same user in different tables be in the same > >> > data-node ? > >> Yep. Node choice algorithm is based using the data type of the key. > >> -- > >> Michael > > > > > > ------------------------------------------------------------------------------ > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > Instantly run your Selenium tests across 300+ browser/OS combos. Get > unparalleled scalability from the best Selenium testing platform available. > Simple to use. Nothing to install. Get started now for free." > http://p.sf.net/sfu/SauceLabs > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general ------------------------------------------------------------------------------ Is your legacy SCM system holding you back? Join Perforce May 7 to find out: • 3 signs your SCM is hindering your productivity • Requirements for releasing software faster • Expert tips and advice for migrating your SCM now http://p.sf.net/sfu/perforce_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 02:40:55
|
What do you mean by “time-based policy”, does it based on (G)XID, or real clock value? To my view, it depend upon what “time” we depend on. Yes, I’m now studying if we can use real “clock” value for this. In this case, we may not need GTM if the clock is accurate enough among servers involved. If you mean not to use global “snapshot” and if it is feasible, we may not need GTM. If we associate each local transaction to its “root” transaction, which is the transaction application generated directly, we can maintaing the visibility by calculating the “snapshot” each time needed, by collecting it from all the other nodes. We need to consider the “vacuum”. I’ve not found a good way to determine if some “deleted” rows can be removed from the database and if some “live” row’s xmim value can be frozen. Regards; --- Koichi Suzuki 2014/05/07 11:19、ZhangJulian <jul...@ou...<mailto:jul...@ou...>> のメール: Is it possible to do the row visibility check based on a time based policy? That is, 1. Each data node maintains a data structure: gtid - start time - end time. Only the gtids modifying data on current data node are contained. 2. Each data node maintains the oldest alive gtid, which may not be updated synchronously. 3. GTM is only responsible to generate a sequence of GTID, which is only an integer value. 4. The time in different data nodes may be not consistent, but I think in some scenario, the application can bear the little difference. Is there any potential issues? Thanks > Date: Sun, 4 May 2014 19:36:20 +0900 > From: koi...@gm...<mailto:koi...@gm...> > To: dor...@gm...<mailto:dor...@gm...> > CC: pos...@li...<mailto:pos...@li...> > Subject: Re: [Postgres-xc-general] Pgxc_ctl Primer draft > > As discussed in the last year's XC-day, GTM proxy should be integrated > as postmaster backend. Maybe GTM can be. Coordinator/Datanode > can also be integrated into one. > > Apparently, this is the direction we should take. At first, there > were no such good experience to start with. Before version 1.0, we > determined that the datanode and the coordinator can share the same > binary. It is true that we started with the idea to provide > cluster-wide MVCC and now we found the next direction. > > With this integration and when start with only one node, we don't need > GTM, which looks identical to standalone PG. When we add the server, > at present we do need GTM. Only accumulating local transactions in > the nodes cannot maintain cluster-wide database consistency. > > I'm still investigating an idea how to get rid of GTM. We need to do > the following: > > 1) To provide cluster wide MVCC, > 2) To provide good means to determine which row can be vacuumed. > > My current idea is: if we associate any local XID to the root > transaction (the transaction which application created), we may be > able to provide cluster wide MVCC by calculating cluster-wide snapshot > when needed. I don't know how efficient it is and t don't have good > idea how to determine if a given row can be vacuumed. > > This is the current situation. > > Hope to have much more input on this. > > Anyway, hope my draft helps people who is trying to use Postgres-XC. > > Best; > --- > Koichi Suzuki > > > 2014-05-04 19:05 GMT+09:00 Dorian Hoxha <dor...@gm...<mailto:dor...@gm...>>: > > Probably even the gtm-proxy need to be merged with datanode+coordinator from > > what i read. > > > > If you make only local transactions (inside 1 datanode) + not using global > > sequences, will there be no traffic to the GTM for that transaction ? 
> > > > > > On Sun, May 4, 2014 at 6:24 AM, Michael Paquier <mic...@gm...<mailto:mic...@gm...>> > > wrote: > >> > >> On Sun, May 4, 2014 at 12:59 AM, Dorian Hoxha <dor...@gm...<mailto:dor...@gm...>> > >> wrote: > >> >> You just need commodity INTEL server runnign Linux. > >> > Are INTEL cpu required ? If not INTEL can be removed ? (also running > >> > typo) > >> Not really... I agree to what you mean here. > >> > >> >> For datawarehouse > >> >> > >> >> applications, you may need separate patch which devides complexed query > >> >> into smaller > >> >> > >> >> chunks which run in datanodes in parallel. StormDB will provide such > >> >> patche. > >> > > >> > Wasn't stormdb bought by another company ? Is there an opensource > >> > alternative ? Fix the "patche" typo ? > >> > > >> > A way to make it simpler is by merging coordinator and datanode into 1 > >> > and > >> > making it possible for a 'node' to not hold data (be a coordinator > >> > only), > >> > like in elastic-search, but you probably already know that. > >> +1. This would alleviate data transfer between cross-node joins where > >> Coordinator and Datanodes are on separate servers. You could always > >> have both nodes on the same server with the XC of now... But that's > >> double number of nodes to monitor. > >> > >> > What exact things does the gtm-proxy do? For example, a single row > >> > insert > >> > wouldn't need the gtm (coordinator just inserts it to the right > >> > data-node)(asumming no sequences, since for that the gtm is needed)? > >> Grouping messages between Coordinator/Datanode and GTM to reduce > >> package interferences and improve performance. > >> > >> > If multiple tables are sharded on the same key (example: user_id). Will > >> > all > >> > the rows, from the same user in different tables be in the same > >> > data-node ? > >> Yep. Node choice algorithm is based using the data type of the key. > >> -- > >> Michael > > > > > > ------------------------------------------------------------------------------ > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > Instantly run your Selenium tests across 300+ browser/OS combos. Get > unparalleled scalability from the best Selenium testing platform available. > Simple to use. Nothing to install. Get started now for free." > http://p.sf.net/sfu/SauceLabs > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li...<mailto:Pos...@li...> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general ------------------------------------------------------------------------------ Is your legacy SCM system holding you back? Join Perforce May 7 to find out: • 3 signs your SCM is hindering your productivity • Requirements for releasing software faster • Expert tips and advice for migrating your SCM now http://p.sf.net/sfu/perforce_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
From: ZhangJulian <jul...@ou...> - 2014-05-07 02:19:34
|
Is it possible to do the row visibility check based on a time based policy? That is, 1. Each data node maintains a data structure: gtid - start time - end time. Only the gtids modifying data on current data node are contained. 2. Each data node maintains the oldest alive gtid, which may not be updated synchronously. 3. GTM is only responsible to generate a sequence of GTID, which is only an integer value. 4. The time in different data nodes may be not consistent, but I think in some scenario, the application can bear the little difference. Is there any potential issues? Thanks > Date: Sun, 4 May 2014 19:36:20 +0900 > From: koi...@gm... > To: dor...@gm... > CC: pos...@li... > Subject: Re: [Postgres-xc-general] Pgxc_ctl Primer draft > > As discussed in the last year's XC-day, GTM proxy should be integrated > as postmaster backend. Maybe GTM can be. Coordinator/Datanode > can also be integrated into one. > > Apparently, this is the direction we should take. At first, there > were no such good experience to start with. Before version 1.0, we > determined that the datanode and the coordinator can share the same > binary. It is true that we started with the idea to provide > cluster-wide MVCC and now we found the next direction. > > With this integration and when start with only one node, we don't need > GTM, which looks identical to standalone PG. When we add the server, > at present we do need GTM. Only accumulating local transactions in > the nodes cannot maintain cluster-wide database consistency. > > I'm still investigating an idea how to get rid of GTM. We need to do > the following: > > 1) To provide cluster wide MVCC, > 2) To provide good means to determine which row can be vacuumed. > > My current idea is: if we associate any local XID to the root > transaction (the transaction which application created), we may be > able to provide cluster wide MVCC by calculating cluster-wide snapshot > when needed. I don't know how efficient it is and t don't have good > idea how to determine if a given row can be vacuumed. > > This is the current situation. > > Hope to have much more input on this. > > Anyway, hope my draft helps people who is trying to use Postgres-XC. > > Best; > --- > Koichi Suzuki > > > 2014-05-04 19:05 GMT+09:00 Dorian Hoxha <dor...@gm...>: > > Probably even the gtm-proxy need to be merged with datanode+coordinator from > > what i read. > > > > If you make only local transactions (inside 1 datanode) + not using global > > sequences, will there be no traffic to the GTM for that transaction ? > > > > > > On Sun, May 4, 2014 at 6:24 AM, Michael Paquier <mic...@gm...> > > wrote: > >> > >> On Sun, May 4, 2014 at 12:59 AM, Dorian Hoxha <dor...@gm...> > >> wrote: > >> >> You just need commodity INTEL server runnign Linux. > >> > Are INTEL cpu required ? If not INTEL can be removed ? (also running > >> > typo) > >> Not really... I agree to what you mean here. > >> > >> >> For datawarehouse > >> >> > >> >> applications, you may need separate patch which devides complexed query > >> >> into smaller > >> >> > >> >> chunks which run in datanodes in parallel. StormDB will provide such > >> >> patche. > >> > > >> > Wasn't stormdb bought by another company ? Is there an opensource > >> > alternative ? Fix the "patche" typo ? > >> > > >> > A way to make it simpler is by merging coordinator and datanode into 1 > >> > and > >> > making it possible for a 'node' to not hold data (be a coordinator > >> > only), > >> > like in elastic-search, but you probably already know that. > >> +1. 
This would alleviate data transfer between cross-node joins where > >> Coordinator and Datanodes are on separate servers. You could always > >> have both nodes on the same server with the XC of now... But that's > >> double number of nodes to monitor. > >> > >> > What exact things does the gtm-proxy do? For example, a single row > >> > insert > >> > wouldn't need the gtm (coordinator just inserts it to the right > >> > data-node)(asumming no sequences, since for that the gtm is needed)? > >> Grouping messages between Coordinator/Datanode and GTM to reduce > >> package interferences and improve performance. > >> > >> > If multiple tables are sharded on the same key (example: user_id). Will > >> > all > >> > the rows, from the same user in different tables be in the same > >> > data-node ? > >> Yep. Node choice algorithm is based using the data type of the key. > >> -- > >> Michael > > > > > > ------------------------------------------------------------------------------ > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > Instantly run your Selenium tests across 300+ browser/OS combos. Get > unparalleled scalability from the best Selenium testing platform available. > Simple to use. Nothing to install. Get started now for free." > http://p.sf.net/sfu/SauceLabs > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
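A rough sketch of the per-datanode bookkeeping Julian describes above, written as plain SQL run through psql. Everything here is hypothetical: the table, its columns, and the host and database names (borrowed from elsewhere in this thread) do not correspond to any existing Postgres-XC object.

    # Illustrative only; no such catalog exists in Postgres-XC.
    psql -h db02 -d mydatabase <<'SQL'
    CREATE TABLE node_gtid_window (
        gtid       bigint PRIMARY KEY,    -- only GTIDs that modified data on this node
        start_time timestamptz NOT NULL,  -- local clock value when the transaction started here
        end_time   timestamptz            -- NULL while the transaction is still alive
    );
    -- The "oldest alive GTID" this node would advertise, refreshed lazily:
    SELECT min(gtid) FROM node_gtid_window WHERE end_time IS NULL;
    SQL

Whether row visibility can safely be keyed off these local clocks, given skew between nodes, is exactly the open question in the replies above.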
From: 鈴木 幸市 <ko...@in...> - 2014-05-07 01:01:30
|
Did you try to restart the coord2 and gtm_pxy2? Also, I advise to use pg_dumpall, not pg_dump unless you’re backing up single database. I’m afraid coord2 had had some issue. Regards; --- Koichi Suzuki 2014/05/06 21:50、Juned Khan <jkh...@gm...<mailto:jkh...@gm...>> のメール: nope i have directly did those steps on live server. I don't know exactly what cause that problem. one day i just tried to run some queries manually and got those errors. I have tried to restart all components several times without success. now another problem is i am not able to take dump of database. i am getting this error. postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE Please suggest. On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...<mailto:koi...@gm...>> wrote: Did you make any test before you did any action on your database? Also could you share what you did to have this situation? Anyway, with this situation, I believe no DDL has been handled except for temporary object, which is session specific. So I believe you can restart GTM proxy gtm_pxy2 (you don't have to reinitialize it as long as you maintain gtm_proxy.conf) and coord2. Anyway, it is very essential to record and share what you did, and more important thing is to test it with non-product environment and see what goes on, and then review any step you are taking before you do. Hope to have more info on this. Best Regards; --- Koichi Suzuki 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...<mailto:jkh...@gm...>>: > Hi Koichi, > > I tried to follow the steps removing datanode slave and adding it again but > i no success. even earlier only datanode slave was not working but now gtm > slave, gtm_pxy2, coord2 and dn_slave is not starting up. I have removed GTM > slave and datanode slave > > here is my current status of pgxc > > PGXC monitor all > Running: gtm master > Running: gtm proxy gtm_pxy1 > Not running: gtm proxy gtm_pxy2 > Running: coordinator master coord1 > Not running: coordinator master coord2 > Running: datanode master datanode1 > > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d > mydatabase) but its not allowing me to modify table structure. and giving me > below error. > > mydatabase=# alter table invoice_summary_data add countrycode text not null > default ''::character(1); > ERROR: Failed to get pooled connections > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" > > So as of now i am planning to remove those components which are not working > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. > > what will be the impact of this, i don't want to loost database access > completely. > > Please suggest. > > Regards > Juned Khan > > ------------------------------------------------------------------------------ > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > Instantly run your Selenium tests across 300+ browser/OS combos. Get > unparalleled scalability from the best Selenium testing platform available. > Simple to use. Nothing to install. Get started now for free." 
> http://p.sf.net/sfu/SauceLabs > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li...<mailto:Pos...@li...> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com<http://www.inextrix.com/> |
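Concretely, the backup suggestion looks something like the following; the host is the one used in this thread, while the port and output file names are assumptions:

    # Whole-cluster dump: roles, tablespaces and every database in one file.
    pg_dumpall -h db02 -p 5432 -f full_cluster.sql
    # pg_dump is still fine when only one database is needed:
    pg_dump -h db02 --exclude-table=cdrs db -f db.sql

Either way the dump can only succeed once the coordinator can again obtain pooled connections, so restarting coord2 and gtm_pxy2 comes first.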
From: Juned K. <jkh...@gm...> - 2014-05-06 12:50:23
|
No, I did those steps directly on the live server. I don't know exactly what caused the problem; one day I just tried to run some queries manually and got those errors. I have tried to restart all components several times without success. Now there is another problem: I am not able to take a dump of the database. I am getting this error. postgres@db02:~$ pg_dump -h db02 --exclude-table=cdrs db -f db.sql pg_dump: [archiver (db)] query failed: ERROR: Failed to get pooled connections pg_dump: [archiver (db)] query was: LOCK TABLE public.accounts IN ACCESS SHARE MODE Please suggest. On Sun, May 4, 2014 at 4:10 PM, Koichi Suzuki <koi...@gm...> wrote: > Did you make any test before you did any action on your database? > Also could you share what you did to have this situation? Anyway, > with this situation, I believe no DDL has been handled except for > temporary object, which is session specific. > > So I believe you can restart GTM proxy gtm_pxy2 (you don't have to > reinitialize it as long as you maintain gtm_proxy.conf) and coord2. > > Anyway, it is very essential to record and share what you did, and > more important thing is to test it with non-product environment and > see what goes on, and then review any step you are taking before you > do. > > Hope to have more info on this. > > Best Regards; > --- > Koichi Suzuki > > > 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...>: > > Hi Koichi, > > > > I tried to follow the steps removing datanode slave and adding it again > but > > i no success. even earlier only datanode slave was not working but now > gtm > > slave, gtm_pxy2, coord2 and dn_slave is not starting up. I have removed > GTM > > slave and datanode slave > > > > here is my current status of pgxc > > > > PGXC monitor all > > Running: gtm master > > Running: gtm proxy gtm_pxy1 > > Not running: gtm proxy gtm_pxy2 > > Running: coordinator master coord1 > > Not running: coordinator master coord2 > > Running: datanode master datanode1 > > > > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d > > mydatabase) but its not allowing me to modify table structure. and > giving me > > below error. > > > > mydatabase=# alter table invoice_summary_data add countrycode text not > null > > default ''::character(1); > > ERROR: Failed to get pooled connections > > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT > > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" > > > > So as of now i am planning to remove those components which are not > working > > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. > > > > what will be the impact of this, i don't want to loost database access > > completely. > > > > Please suggest. > > > > Regards > > Juned Khan > > > > > ------------------------------------------------------------------------------ > > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > > Instantly run your Selenium tests across 300+ browser/OS combos. Get > > unparalleled scalability from the best Selenium testing platform > available. > > Simple to use. Nothing to install. Get started now for free." > > http://p.sf.net/sfu/SauceLabs > > _______________________________________________ > > Postgres-xc-general mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > > -- Thanks, Juned Khan iNextrix Technologies Pvt Ltd. www.inextrix.com |
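One way to narrow down a "Failed to get pooled connections" error is to check the node definitions the coordinator's pooler is trying to reach and to refresh the pool after any change. pgxc_node and pgxc_pool_reload() are standard Postgres-XC objects; whether coord2 should be repaired or dropped from the configuration is a separate decision that depends on the cluster's state.

    # Inspect the nodes this coordinator's pooler tries to reach.
    psql -h db02 -d mydatabase -c "SELECT node_name, node_type, node_host, node_port FROM pgxc_node;"
    # After fixing (or dropping) the unreachable node definition, reload the pool.
    psql -h db02 -d mydatabase -c "SELECT pgxc_pool_reload();"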
From: Koichi S. <koi...@gm...> - 2014-05-04 13:25:13
|
As to StormDB, yes, there was a news that TransLattice bought StormDB. Though, I found StormDB's site still valid so I wrote this way. Hope somebody points out which I should refer to, StormDB or Translattice. Regards; --- Koichi Suzuki 2014-05-04 0:59 GMT+09:00 Dorian Hoxha <dor...@gm...>: >> You just need commodity INTEL server runnign Linux. > > Are INTEL cpu required ? If not INTEL can be removed ? (also running typo) > >> For datawarehouse >> >> applications, you may need separate patch which devides complexed query >> into smaller >> >> chunks which run in datanodes in parallel. StormDB will provide such >> patche. > > Wasn't stormdb bought by another company ? Is there an opensource > alternative ? Fix the "patche" typo ? > > A way to make it simpler is by merging coordinator and datanode into 1 and > making it possible for a 'node' to not hold data (be a coordinator only), > like in elastic-search, but you probably already know that. > > What exact things does the gtm-proxy do? For example, a single row insert > wouldn't need the gtm (coordinator just inserts it to the right > data-node)(asumming no sequences, since for that the gtm is needed)? > > If multiple tables are sharded on the same key (example: user_id). Will all > the rows, from the same user in different tables be in the same data-node ? > > > > > > > > > On Fri, May 2, 2014 at 10:43 AM, Koichi Suzuki <koi...@gm...> > wrote: >> >> Hello; >> >> Because there are many discussions here on Postgres-XC configuration >> and operation, I drafted a paper "Pgcx_ctl primer" to provide how to >> configure Postgres-XC cluster and how to operate it. >> >> https://sourceforge.net/projects/postgres-xc/files/Pgxc_ctl_primer/? >> contains the document. >> >> Hope to have feedback on it. >> >> Regards; >> --- >> Koichi Suzuki >> >> >> ------------------------------------------------------------------------------ >> "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE >> Instantly run your Selenium tests across 300+ browser/OS combos. Get >> unparalleled scalability from the best Selenium testing platform >> available. >> Simple to use. Nothing to install. Get started now for free." >> http://p.sf.net/sfu/SauceLabs >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
From: Michael P. <mic...@gm...> - 2014-05-04 12:44:38
|
On Sun, May 4, 2014 at 7:05 PM, Dorian Hoxha <dor...@gm...> wrote: > Probably even the gtm-proxy need to be merged with datanode+coordinator from > what i read. > > If you make only local transactions (inside 1 datanode) + not using global > sequences, will there be no traffic to the GTM for that transaction ? You are forgetting the global transaction ID, global snapshot, and global timestamp that are fed across nodes to satisfy MVCC :) -- Michael |
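A quick, hedged way to see the point: even a statement that touches a single datanode runs inside a transaction whose ID comes from GTM, and the standard txid_current() function shows the XID the session is using (in XC this should be the GTM-assigned global value rather than a purely node-local counter). Host and database names are the ones used elsewhere in the thread.

    # Shows the transaction ID of the current session on a coordinator.
    psql -h db02 -d mydatabase -c "SELECT txid_current();"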
From: Koichi S. <koi...@gm...> - 2014-05-04 10:40:32
|
Did you make any test before you did any action on your database? Also could you share what you did to have this situation? Anyway, with this situation, I believe no DDL has been handled except for temporary object, which is session specific. So I believe you can restart GTM proxy gtm_pxy2 (you don't have to reinitialize it as long as you maintain gtm_proxy.conf) and coord2. Anyway, it is very essential to record and share what you did, and more important thing is to test it with non-product environment and see what goes on, and then review any step you are taking before you do. Hope to have more info on this. Best Regards; --- Koichi Suzuki 2014-05-03 14:31 GMT+09:00 Juned Khan <jkh...@gm...>: > Hi Koichi, > > I tried to follow the steps removing datanode slave and adding it again but > i no success. even earlier only datanode slave was not working but now gtm > slave, gtm_pxy2, coord2 and dn_slave is not starting up. I have removed GTM > slave and datanode slave > > here is my current status of pgxc > > PGXC monitor all > Running: gtm master > Running: gtm proxy gtm_pxy1 > Not running: gtm proxy gtm_pxy2 > Running: coordinator master coord1 > Not running: coordinator master coord2 > Running: datanode master datanode1 > > Now each time i have to connect on db02 only( i.e PGXC Psql -h db02 -d > mydatabase) but its not allowing me to modify table structure. and giving me > below error. > > mydatabase=# alter table invoice_summary_data add countrycode text not null > default ''::character(1); > ERROR: Failed to get pooled connections > CONTEXT: SQL statement "EXECUTE DIRECT ON (coord2) 'SELECT > pg_catalog.pg_try_advisory_xact_lock_shared(65535, 0)'" > > So as of now i am planning to remove those components which are not working > (gtm_pxy2,coord2 ). so it will not go to connect cood2 which is down. > > what will be the impact of this, i don't want to loost database access > completely. > > Please suggest. > > Regards > Juned Khan > > ------------------------------------------------------------------------------ > "Accelerate Dev Cycles with Automated Cross-Browser Testing - For FREE > Instantly run your Selenium tests across 300+ browser/OS combos. Get > unparalleled scalability from the best Selenium testing platform available. > Simple to use. Nothing to install. Get started now for free." > http://p.sf.net/sfu/SauceLabs > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
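With pgxc_ctl the suggested restart would look roughly like this, assuming the node names from the "monitor all" output quoted above; the exact component keywords can differ slightly between pgxc_ctl versions, so check its built-in help first:

    pgxc_ctl start gtm_proxy gtm_pxy2
    pgxc_ctl start coordinator master coord2
    # Confirm everything came back up.
    pgxc_ctl monitor all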
From: Koichi S. <koi...@gm...> - 2014-05-04 10:36:28
|
As discussed at last year's XC-day, the GTM proxy should be integrated as a postmaster backend. Maybe GTM can be as well. Coordinator/Datanode can also be integrated into one. Apparently, this is the direction we should take. At first, there was no such good experience to start with. Before version 1.0, we determined that the datanode and the coordinator can share the same binary. It is true that we started with the idea of providing cluster-wide MVCC, and now we have found the next direction. With this integration, when starting with only one node, we don't need GTM, which looks identical to standalone PG. When we add a server, at present we do need GTM. Merely accumulating local transactions in the nodes cannot maintain cluster-wide database consistency. I'm still investigating an idea for how to get rid of GTM. We need to do the following: 1) provide cluster-wide MVCC, and 2) provide a good means to determine which rows can be vacuumed. My current idea is: if we associate every local XID with the root transaction (the transaction the application created), we may be able to provide cluster-wide MVCC by calculating a cluster-wide snapshot when needed. I don't know how efficient it would be, and I don't have a good idea how to determine whether a given row can be vacuumed. This is the current situation. Hope to have much more input on this. Anyway, I hope my draft helps people who are trying to use Postgres-XC. Best; --- Koichi Suzuki 2014-05-04 19:05 GMT+09:00 Dorian Hoxha <dor...@gm...>: > Probably even the gtm-proxy need to be merged with datanode+coordinator from > what i read. > > If you make only local transactions (inside 1 datanode) + not using global > sequences, will there be no traffic to the GTM for that transaction ? > > > On Sun, May 4, 2014 at 6:24 AM, Michael Paquier <mic...@gm...> > wrote: >> >> On Sun, May 4, 2014 at 12:59 AM, Dorian Hoxha <dor...@gm...> >> wrote: >> >> You just need commodity INTEL server runnign Linux. >> > Are INTEL cpu required ? If not INTEL can be removed ? (also running >> > typo) >> Not really... I agree to what you mean here. >> >> >> For datawarehouse >> >> >> >> applications, you may need separate patch which devides complexed query >> >> into smaller >> >> >> >> chunks which run in datanodes in parallel. StormDB will provide such >> >> patche. >> > >> > Wasn't stormdb bought by another company ? Is there an opensource >> > alternative ? Fix the "patche" typo ? >> > >> > A way to make it simpler is by merging coordinator and datanode into 1 >> > and >> > making it possible for a 'node' to not hold data (be a coordinator >> > only), >> > like in elastic-search, but you probably already know that. >> +1. This would alleviate data transfer between cross-node joins where >> Coordinator and Datanodes are on separate servers. You could always >> have both nodes on the same server with the XC of now... But that's >> double number of nodes to monitor. >> >> > What exact things does the gtm-proxy do? For example, a single row >> > insert >> > wouldn't need the gtm (coordinator just inserts it to the right >> > data-node)(asumming no sequences, since for that the gtm is needed)? >> Grouping messages between Coordinator/Datanode and GTM to reduce >> package interferences and improve performance. >> >> > If multiple tables are sharded on the same key (example: user_id). Will >> > all >> > the rows, from the same user in different tables be in the same >> > data-node ? >> Yep. Node choice algorithm is based using the data type of the key. >> -- >> Michael > > |
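Purely as an illustration of the vacuum-horizon half of the problem: each node can already report the oldest XID referenced by its own current snapshot, and a GTM-less design would have to combine such per-node values (plus the local-to-root XID mapping, which does not exist today) into a safe cluster-wide minimum. The node, host, and database names below are the ones used elsewhere in this thread.

    # Oldest xmin referenced by one node's current snapshot, via standard functions.
    psql -h db02 -d mydatabase -c \
      "EXECUTE DIRECT ON (datanode1) 'SELECT txid_snapshot_xmin(txid_current_snapshot())'"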
From: Dorian H. <dor...@gm...> - 2014-05-04 10:06:12
|
Probably even the gtm-proxy need to be merged with datanode+coordinator from what i read. If you make only local transactions (inside 1 datanode) + not using global sequences, will there be no traffic to the GTM for that transaction ? On Sun, May 4, 2014 at 6:24 AM, Michael Paquier <mic...@gm...>wrote: > On Sun, May 4, 2014 at 12:59 AM, Dorian Hoxha <dor...@gm...> > wrote: > >> You just need commodity INTEL server runnign Linux. > > Are INTEL cpu required ? If not INTEL can be removed ? (also running > typo) > Not really... I agree to what you mean here. > > >> For datawarehouse > >> > >> applications, you may need separate patch which devides complexed query > >> into smaller > >> > >> chunks which run in datanodes in parallel. StormDB will provide such > >> patche. > > > > Wasn't stormdb bought by another company ? Is there an opensource > > alternative ? Fix the "patche" typo ? > > > > A way to make it simpler is by merging coordinator and datanode into 1 > and > > making it possible for a 'node' to not hold data (be a coordinator only), > > like in elastic-search, but you probably already know that. > +1. This would alleviate data transfer between cross-node joins where > Coordinator and Datanodes are on separate servers. You could always > have both nodes on the same server with the XC of now... But that's > double number of nodes to monitor. > > > What exact things does the gtm-proxy do? For example, a single row insert > > wouldn't need the gtm (coordinator just inserts it to the right > > data-node)(asumming no sequences, since for that the gtm is needed)? > Grouping messages between Coordinator/Datanode and GTM to reduce > package interferences and improve performance. > > > If multiple tables are sharded on the same key (example: user_id). Will > all > > the rows, from the same user in different tables be in the same > data-node ? > Yep. Node choice algorithm is based using the data type of the key. > -- > Michael > |
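To make the sharding answer concrete: tables distributed by hash on the same key (of the same data type) keep matching key values on the same datanode, so per-user joins stay node-local. Table and column names here are made up for illustration; DISTRIBUTE BY HASH is regular Postgres-XC DDL.

    psql -h db02 -d mydatabase <<'SQL'
    CREATE TABLE users  (user_id bigint PRIMARY KEY, name text)
        DISTRIBUTE BY HASH (user_id);
    CREATE TABLE orders (order_id bigint, user_id bigint, amount numeric)
        DISTRIBUTE BY HASH (user_id);
    -- Rows with the same user_id land on the same datanode in both tables.
    SQL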