From: Marcin K. <mr...@gm...> - 2011-01-05 21:14:39

Hello everyone,

I wrote a (mostly quick and dirty) script for generating XC config files, distributing them to the nodes and the GTM, and starting and stopping the cluster:

http://inet.btw2.pl/gen_pgxc_conf.tar.gz

I would appreciate it if somebody could take a look and check whether the generated files are OK.

-- 
Regards,
mk

-- 
Premature optimization is the root of all fun.
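The tarball itself is not reproduced here, but a generator of this kind usually boils down to templating per-node settings. Below is a minimal, hypothetical sketch — the node names, ports, paths, and GTM host are made up for illustration and this is not the script from the tarball:

```shell
#!/bin/sh
# Hypothetical sketch of a Postgres-XC config generator: for each node,
# write a postgresql.conf fragment pointing at a shared GTM host/port.
# Node list and all settings are illustrative.
set -eu

GTM_HOST=gtm.example.org
GTM_PORT=6666
OUTDIR=${1:-./conf}

mkdir -p "$OUTDIR"

# name:port pairs for the coordinators/datanodes to configure
for node in datanode1:15432 datanode2:15433 coord1:5432; do
    name=${node%%:*}
    port=${node##*:}
    cat > "$OUTDIR/$name.postgresql.conf" <<EOF
# Generated fragment for $name
port = $port
gtm_host = '$GTM_HOST'
gtm_port = $GTM_PORT
EOF
done

echo "wrote config fragments to $OUTDIR"
```

Distributing the fragments would then just be a matter of copying each one to its node (e.g. with scp) before starting the cluster.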
From: Marcin K. <mr...@gm...> - 2011-01-05 21:11:35

Hello everyone,

I have successfully (?) configured XC on two nodes (the cluster successfully created a database and a role, and granted privileges on the db), but I have run into a problem:

postgres=# \c etest;
psql (8.4.3)
You are now connected to database "etest".
etest=# CREATE TABLE "user" (
etest(#     id SERIAL NOT NULL,
etest(#     name VARCHAR,
etest(#     fullname VARCHAR,
etest(#     srv1 VARCHAR,
etest(#     srv2 VARCHAR,
etest(#     PRIMARY KEY (id)
etest(# )
etest-# ;
NOTICE:  CREATE TABLE will create implicit sequence "user_id_seq" for serial column "user.id"
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.

The SQL is simple enough, I thought? The snag is that I'm using an ORM (SQLAlchemy 0.6.5), and that is what generated the code. After a restart of the cluster I'm getting:

postgres=# \c etest
psql (8.4.3)
You are now connected to database "etest".
etest=# create table "testme" (id INTEGER, name VARCHAR);
ERROR:  Could not commit (or autocommit) data node connection

-- 
Regards,
mk

-- 
Premature optimization is the root of all fun.
From: Koichi S. <koi...@gm...> - 2011-01-05 15:15:26

Hi,

Using a local pg_dump will consume local XIDs, which may affect subsequent cluster operation. You may have to give GTM a safe GXID value to begin with. It is safer to simply copy $PGDATA, which will not consume any local XIDs.

Regards;
----------
Koichi Suzuki

2011/1/5 Marcin Krol <mr...@gm...>:
> Thanks a lot for the reply, Suzuki!
>
> I presume that at the moment, to recover (fail back?) a node, the following
> course of action would also be effective:
>
> 1. Stop the cluster.
>
> 2. Dump the coordinator and datanode dbs, e.g. using pg_dump.
>
> 3. Send the dumps to the node being recovered and recreate the coordinator
> and datanode from the dumps?
>
> Regards,
> Marcin Krol
>
> Koichi Suzuki wrote:
>> Hi,
>>
>> So far, we can use PITR for an individual datanode. Unfortunately,
>> we have not released any utility to set it up.
>>
>> We are now working to add mirroring capability for datanodes, which
>> allows the whole cluster to continue running and maintain cluster
>> integrity even when some mirror fails with a disk failure. In this
>> case, the mirror can be failed back by stopping the whole cluster,
>> copying files from a surviving mirror, and restarting the cluster.
>>
>> In the case of a coordinator, because all the coordinators are
>> essentially clones, we can continue to run the cluster without the
>> failed coordinator. To fail back the coordinator, we can copy the
>> whole database from another coordinator while the cluster is shut
>> down, and then restart the whole cluster.
>>
>> When a failed coordinator is involved in outstanding 2PC, we need to
>> clean it up to prevent those transactions from appearing in snapshots
>> for a long time. We are now implementing this capability.
>>
>> Ideally, it would be nice to have each component fail back without
>> stopping cluster operation. This will be a challenge for this year.
>>
>> Regards;
>> ----------
>> Koichi Suzuki
>>
>> 2011/1/5 Marcin Krol <mr...@gm...>:
>>> Hello everyone,
>>>
>>> Suppose a node falls out of a cluster: say, it had a disk failure.
>>>
>>> After getting that node back online, how can I resync that node with
>>> the cluster so that cluster integrity is preserved?
>
> ------------------------------------------------------------------------------
> Learn how Oracle Real Application Clusters (RAC) One Node allows customers
> to consolidate database storage, standardize their database environment, and,
> should the need arise, upgrade to a full multi-node Oracle RAC database
> without downtime or disruption
> http://p.sf.net/sfu/oracle-sfdevnl
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
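The copy-based fail-back recommended in the thread (stop the whole cluster, copy $PGDATA from a surviving clone, restart) can be sketched as a small script. This is only an illustration of the idea under the thread's assumptions — the demo directories and paths are fabricated, and a real procedure must ensure every cluster component is stopped before copying:

```shell
#!/bin/sh
# Hypothetical sketch of the copy-based coordinator fail-back discussed
# in the thread: with the whole cluster stopped, clone $PGDATA from a
# surviving coordinator onto the failed one. All paths are illustrative.
set -eu

SRC=${SRC:-./demo/coord1-data}   # surviving coordinator's $PGDATA (made up)
DST=${DST:-./demo/coord2-data}   # failed coordinator's $PGDATA (made up)

# Demo setup: fabricate a minimal "surviving" data directory so the
# sketch is self-contained.
mkdir -p "$SRC"
echo "8.4" > "$SRC/PG_VERSION"

# Safety check: a postmaster.pid would mean the server is still running,
# i.e. the cluster was not fully stopped first.
if [ -e "$SRC/postmaster.pid" ]; then
    echo "stop the whole cluster before copying" >&2
    exit 1
fi

rm -rf "$DST"
cp -a "$SRC" "$DST"   # wholesale file copy, as suggested in the thread
echo "copied $SRC -> $DST"
```

Unlike a pg_dump/restore cycle, a plain file copy consumes no local XIDs, which is exactly why Koichi recommends it above.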
From: Marcin K. <mr...@gm...> - 2011-01-05 12:27:11

Thanks a lot for the reply, Suzuki!

I presume that at the moment, to recover (fail back?) a node, the following course of action would also be effective:

1. Stop the cluster.

2. Dump the coordinator and datanode dbs, e.g. using pg_dump.

3. Send the dumps to the node being recovered and recreate the coordinator and datanode from the dumps?

Regards,
Marcin Krol

Koichi Suzuki wrote:
> Hi,
>
> So far, we can use PITR for an individual datanode. Unfortunately,
> we have not released any utility to set it up.
>
> We are now working to add mirroring capability for datanodes, which
> allows the whole cluster to continue running and maintain cluster
> integrity even when some mirror fails with a disk failure. In this
> case, the mirror can be failed back by stopping the whole cluster,
> copying files from a surviving mirror, and restarting the cluster.
>
> In the case of a coordinator, because all the coordinators are
> essentially clones, we can continue to run the cluster without the
> failed coordinator. To fail back the coordinator, we can copy the
> whole database from another coordinator while the cluster is shut
> down, and then restart the whole cluster.
>
> When a failed coordinator is involved in outstanding 2PC, we need to
> clean it up to prevent those transactions from appearing in snapshots
> for a long time. We are now implementing this capability.
>
> Ideally, it would be nice to have each component fail back without
> stopping cluster operation. This will be a challenge for this year.
>
> Regards;
> ----------
> Koichi Suzuki
>
> 2011/1/5 Marcin Krol <mr...@gm...>:
>> Hello everyone,
>>
>> Suppose a node falls out of a cluster: say, it had a disk failure.
>>
>> After getting that node back online, how can I resync that node with
>> the cluster so that cluster integrity is preserved?

-- 
Regards,
mk

-- 
Premature optimization is the root of all fun.
From: Koichi S. <koi...@gm...> - 2011-01-05 02:01:04

Hi,

So far, we can use PITR for an individual datanode. Unfortunately, we have not released any utility to set it up.

We are now working to add mirroring capability for datanodes, which allows the whole cluster to continue running and maintain cluster integrity even when some mirror fails with a disk failure. In this case, the mirror can be failed back by stopping the whole cluster, copying files from a surviving mirror, and restarting the cluster.

In the case of a coordinator, because all the coordinators are essentially clones, we can continue to run the cluster without the failed coordinator. To fail back the coordinator, we can copy the whole database from another coordinator while the cluster is shut down, and then restart the whole cluster.

When a failed coordinator is involved in outstanding 2PC, we need to clean it up to prevent those transactions from appearing in snapshots for a long time. We are now implementing this capability.

Ideally, it would be nice to have each component fail back without stopping cluster operation. This will be a challenge for this year.

Regards;
----------
Koichi Suzuki

2011/1/5 Marcin Krol <mr...@gm...>:
> Hello everyone,
>
> Suppose a node falls out of a cluster: say, it had a disk failure.
>
> After getting that node back online, how can I resync that node with
> the cluster so that cluster integrity is preserved?
>
> -- 
> Regards,
> mk
>
> -- 
> Premature optimization is the root of all fun.
From: Michael P. <mic...@gm...> - 2011-01-05 01:44:32

For the answers to your questions, please see inline.

About CREATE DATABASE: since version 0.9.3, all the necessary mechanisms exist to synchronize Coordinators automatically (well, for DDL) across the whole cluster, so it is not necessary to resynchronize the catalog files of Coordinators manually. If you are getting snapshot warnings when creating a database, you may have configuration problems. You should double-check your configuration files.

> I have NOT set up GTM proxy. Do I have to do that? Is it related to my problem?

It is not necessary to set up a GTM Proxy to have your cluster working correctly.

> Ok, but is the code in the git repo stable enough for production use?

The code is not stable enough to be used in production; we are currently working a lot on code stabilization, SQL support, and HA features.

-- 
Michael Paquier
http://michaelpq.users.sourceforge.net