From: Michael P. <mic...@gm...> - 2013-10-06 13:40:02
|
On Sun, Oct 6, 2013 at 7:45 PM, Yehezkel Horowitz <hor...@ch...> wrote:
> Second, I allow myself suggest you to consider some conventions for your
> mailing list (as example cUrl’s Etiquette:
> http://curl.haxx.se/mail/etiquette.html) as it is quite hard to follow
> threads in the archive.
This is rather interesting. Thanks for pointing to that!

> My goal – I have an application that needs SQL DB and must always be up (I
> have a backup machine for this purpose).
Have you thought about PostgreSQL itself for your solution? Is there any reason you'd need XC? Do you have an amount of data that forces you to use a multi-master architecture, or could PG itself handle it?

> I plan to deploy as follow:
> Machine A: 1 Datanode, 1 Coordinator, 1 GTM proxy, 1 GTM
> Machine B: 1 Datanode, 1 Coordinator, 1 GTM proxy, 1 GTM-slave
>
> Both machines have my application installed on, and the clients of my
> application will connect to the working machine (in normal case, they can
> connect to either one of them with simple load-balancer, hence I need
> multi-master replication).
So all your tables will be replicated.

> If I understand correctly, in case of failure in Machine A, I need to
> promote the GTM-slave to become GTM master, and reconnect the GTM proxy -
> all this could be done in Machine B. Right?
Yep, this is doable. If all your data is replicated you would be able to do that. However, keep in mind that you will not be able to write new data to node B if node A is not accessible: if your data is replicated and you need to update a table, both nodes need to work. Or, if you want B to stay writable, you could update the node information inside it, make it workable alone, and when server A is up again recreate a new XC node from scratch and add it back to the cluster.

> My questions:
>
> 1. In your docs, you always put the GTM in dedicated machine.
> a. Is this a requirement, just an easy to understand topology or best
> practice?
GTM consumes a certain amount of CPU and does not need much RAM, while for your nodes you might prioritize the opposite.

> b. In case of best practice, what is the expected penalty in case the
> GTM is deployed on the same machine with coordinator and datanode?
CPU resource consumption, and a reduction of performance if your queries need some CPU, for example for internal sort operations, among other things.

> c. In such deployment, is there a need for GTM proxy on this machine?
This is actually a good question. GTM proxy is here to reduce the amount of data exchanged between GTM and the nodes. So yes, if you have a lot of concurrent sessions in the whole cluster.

> 2. What should I do after Machine A is back to life if I want:
> a. Make it act as a new slave?
> b. Make it become the master again?
There is no principle of master/slave in XC like in Postgres (well, you could create a slave node for an individual Coordinator/Datanode). Basically, in your configuration machines A and B have the same state; only GTM is a slave.

Regards,
--
Michael
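For reference, the promote-and-reconnect step discussed above can be scripted with gtm_ctl. A minimal sketch only, assuming data directories /opt/pgxc/gtm_slave and /opt/pgxc/gtm_proxy and the default GTM port 6666 (these paths and the exact option spelling are assumptions to verify against your Postgres-XC version):

# On machine B: promote the GTM slave to become the GTM master.
gtm_ctl promote -Z gtm -D /opt/pgxc/gtm_slave

# Point the local GTM proxy at the newly promoted GTM
# (-s = new GTM host, -t = new GTM port, as I recall the gtm_ctl docs).
gtm_ctl reconnect -Z gtm_proxy -D /opt/pgxc/gtm_proxy -o "-s machineB -t 6666"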
|
From: Yehezkel H. <hor...@ch...> - 2013-10-06 11:53:09
|
First I want to thank you all for this project. It seems very interesting, involving highly sophisticated technology and answering a real need of the industry.

Second, I allow myself to suggest you consider some conventions for your mailing list (for example cUrl's Etiquette: http://curl.haxx.se/mail/etiquette.html) as it is quite hard to follow threads in the archive.

My goal - I have an application that needs an SQL DB and must always be up (I have a backup machine for this purpose).

I plan to deploy as follows:
Machine A: 1 Datanode, 1 Coordinator, 1 GTM proxy, 1 GTM
Machine B: 1 Datanode, 1 Coordinator, 1 GTM proxy, 1 GTM-slave

Both machines have my application installed on them, and the clients of my application will connect to the working machine (in the normal case, they can connect to either one of them with a simple load-balancer, hence I need multi-master replication).

If I understand correctly, in case of failure of Machine A, I need to promote the GTM-slave to become GTM master and reconnect the GTM proxy - all this could be done on Machine B. Right?

My questions:
1. In your docs, you always put the GTM in a dedicated machine.
   a. Is this a requirement, just an easy-to-understand topology, or best practice?
   b. In case of best practice, what is the expected penalty in case the GTM is deployed on the same machine with coordinator and datanode?
   c. In such a deployment, is there a need for a GTM proxy on this machine?
2. What should I do after Machine A is back to life if I want to:
   a. Make it act as a new slave?
   b. Make it become the master again?

I saw this question in the archive (http://sourceforge.net/p/postgres-xc/mailman/message/31302978/), but didn't find any answer:
> I suppose my question is: what do I need to do, to make the former masters
> into new slaves? To me it would make sense to be able to failover node1
> once and then again, and be left with more or less the same configuration
> as in the beginning. It would be okay if there is some magic command I can
> run to reconfigure a former master as the new slave.

Hope I don't ask silly questions, but I couldn't find answers in the docs/archive.

Thanks in advance

Yehezkel Horowitz
Check Point Software Technologies Ltd.
|
From: Michael P. <mic...@gm...> - 2013-10-05 21:49:59
|
On Sat, Oct 5, 2013 at 11:48 PM, Sandeep Gupta <gup...@gm...> wrote:
> Hi Michael,
>
> Sure. I am using pgxc v.1.
1.0 or 1.1?

> For the query explain verbose update person set intervened = 84 from d1_sum
> WHERE person.pid=d1_sum.pid;
>
> Query Plan:
>
> Update on public.person (cost=0.00..0.00 rows=1000 width=24)
>   Node/s: datanode1
>   Node expr: person.pid
>   Remote query: UPDATE ONLY public.person SET intervened = $3 WHERE ((person.ctid = $4) AND (person.xc_node_id = $5))
>   -> Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=24)
>      Output: person.pid, person.persons, 84, person.pid, person.ctid, person.xc_node_id, d1_sum.ctid
>      Node/s: datanode1
>      Remote query: SELECT l.a_1, l.a_2, l.a_3, l.a_4, r.a_1 FROM ((SELECT person.pid, person.persons, person.ctid, person.xc_node_id FROM ONLY public.person WHERE true) l(a_1, a_2, a_3, a_4) JOIN (SELECT d1_sum.ctid, d1_sum.pid FROM ONLY public.d1_sum WHERE true) r(a_1, a_2) ON (true)) WHERE (l.a_1 = r.a_2)
> (8 rows)
>
> For the second style
>
> explain verbose update person set intervened = 84 where person.pid = (select d1_sum.pid from d1_sum,person WHERE person.pid=d1_sum.pid);
>
> Update on public.person (cost=0.00..0.00 rows=1000 width=18)
>   Node/s: datanode1
>   Node expr: public.person.pid
>   Remote query: UPDATE ONLY public.person SET intervened = $3 WHERE ((person.ctid = $4) AND (person.xc_node_id = $5))
>   InitPlan 1 (returns $0)
>     -> Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=4)
>        Output: d1_sum.pid
>        Node/s: datanode1
>        Remote query: SELECT l.a_1 FROM ((SELECT d1_sum.pid FROM ONLY public.d1_sum WHERE true) l(a_1) JOIN (SELECT person.pid FROM ONLY public.person WHERE true) r(a_1) ON (true)) WHERE (l.a_1 = r.a_1)
>   -> Data Node Scan on person "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000 width=18)
>      Output: public.person.pid, public.person.persons, 84, public.person.pid, public.person.ctid, public.person.xc_node_id
>      Node/s: datanode1
>      Remote query: SELECT pid, persons, ctid, xc_node_id FROM ONLY public.person WHERE true
>      Coordinator quals: (public.person.pid = $0)
>
> In both the scenarios the planner breaks it into two parts: update and join.
> The results of join is pulled up at the coordinator and then shipped one by
> one for update.
Indeed you are right. It seems that FROM clause support in UPDATE is limited? Others, comments on that? I thought that there has been some work done in the area.
--
Michael
|
From: Sandeep G. <gup...@gm...> - 2013-10-05 14:48:09
|
Hi Michael,
Sure. I am using pgxc v.1.
For the query explain verbose update person set intervened = 84 from d1_sum
WHERE person.pid=d1_sum.pid;
Query Plan:
Update on public.person (cost=0.00..0.00 rows=1000 width=24)
Node/s: datanode1
Node expr: person.pid
Remote query: UPDATE ONLY public.person SET intervened = $3 WHERE
((person.ctid = $4) AND (person.xc_node_id = $5))
-> Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00 rows=1000
width=24)
Output: person.pid, person.persons, 84, person.pid, person.ctid,
person.xc_node_id, d1_sum.ctid
Node/s: datanode1
Remote query: SELECT l.a_1, l.a_2, l.a_3, l.a_4, r.a_1 FROM
((SELECT person.pid, person.persons, person.ctid, person.xc_node_i
d FROM ONLY public.person WHERE true) l(a_1, a_2, a_3, a_4) JOIN (SELECT
d1_sum.ctid, d1_sum.pid FROM ONLY public.d1_sum WHERE true) r(
a_1, a_2) ON (true)) WHERE (l.a_1 = r.a_2)
(8 rows)
For the second style
explain verbose update person set intervened = 84 where person.pid =
(select d1_sum.pid from d1_sum,person WHERE person.pid=d1_sum.pid);
Update on public.person (cost=0.00..0.00 rows=1000 width=18)
Node/s: datanode1
Node expr: public.person.pid
Remote query: UPDATE ONLY public.person SET intervened = $3 WHERE
((person.ctid = $4) AND (person.xc_node_id = $5))
InitPlan 1 (returns $0)
-> Data Node Scan on "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00
rows=1000 width=4)
Output: d1_sum.pid
Node/s: datanode1
Remote query: SELECT l.a_1 FROM ((SELECT d1_sum.pid FROM ONLY
public.d1_sum WHERE true) l(a_1) JOIN (SELECT person.pid FROM
ONLY public.person WHERE true) r(a_1) ON (true)) WHERE (l.a_1 = r.a_1)
-> Data Node Scan on person "_REMOTE_TABLE_QUERY_" (cost=0.00..0.00
rows=1000 width=18)
Output: public.person.pid, public.person.persons, 84,
public.person.pid, public.person.ctid, public.person.xc_node_id
Node/s: datanode1
Remote query: SELECT pid, persons, ctid, xc_node_id FROM ONLY
public.person WHERE true
Coordinator quals: (public.person.pid = $0)
In both the scenarios the planner breaks it into two parts: update and
join. The results of join is pulled up at the coordinator and then shipped
one by one for update.
Thanks for taking a look.
-Sandeep
On Sat, Oct 5, 2013 at 10:26 AM, Michael Paquier
<mic...@gm...>wrote:
> On Sat, Oct 5, 2013 at 11:14 PM, Sandeep Gupta <gup...@gm...>
> wrote:
> > Thanks Michael. I understand. The only issue is that we have an update
> > query as
> >
> > update T set T.a = -1 from A where A.x = T.x
> >
> >
> > Both A and T and distributed by x column. The problem is that coordinator
> > first does the join and then
> > calls update several times at each datanode. This is turning out to be
> too
> > slow. Would have
> > been better if the entire query was shipped to the datanodes.
> Hum?! Logically, I would imagine that if A and T are distributed by x
> this WHERE clause should be pushed down as the SET clause is a
> constant. However perhaps UPDATE FROM does not have an explicit
> support... Could you provide the version number and an EXPLAIN VERBOSE
> output?
>
> What if you put the where join in a subquery or a WITH clause? Like
> that for example:
> update T set T.a = -1 where A.x = (select A.x from A,T where A.x = T.x);
> --
> Michael
>
|
|
From: Michael P. <mic...@gm...> - 2013-10-05 14:26:59
|
On Sat, Oct 5, 2013 at 11:14 PM, Sandeep Gupta <gup...@gm...> wrote:
> Thanks Michael. I understand. The only issue is that we have an update
> query as
>
> update T set T.a = -1 from A where A.x = T.x
>
> Both A and T and distributed by x column. The problem is that coordinator
> first does the join and then calls update several times at each datanode.
> This is turning out to be too slow. Would have been better if the entire
> query was shipped to the datanodes.
Hum?! Logically, I would imagine that if A and T are distributed by x this WHERE clause should be pushed down as the SET clause is a constant. However perhaps UPDATE FROM does not have an explicit support... Could you provide the version number and an EXPLAIN VERBOSE output?

What if you put the where join in a subquery or a WITH clause? Like that for example:
update T set T.a = -1 where A.x = (select A.x from A,T where A.x = T.x);
--
Michael
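For illustration, a WITH-based variant of the rewrite suggested above might look like the sketch below. The table and column names (T(a, x) and A(x), both distributed by hash on x) come from the thread; whether the XC planner of that era ships the whole statement to the Datanodes is not guaranteed and would need to be checked with EXPLAIN VERBOSE:

-- Hedged sketch of the suggested subquery/WITH rewrite.
WITH matched AS (
    SELECT T.x AS x
    FROM A, T
    WHERE A.x = T.x
)
UPDATE T
SET    a = -1
WHERE  T.x IN (SELECT x FROM matched);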
|
From: Michael P. <mic...@gm...> - 2013-10-05 14:19:10
|
On Sat, Oct 5, 2013 at 9:00 PM, Stefan Lekov <ar...@er...> wrote:
> Hello,
>
> I'm new to the Postgres-XC project. In fact I am still considering if I
> should install it in order to try it as a replacement of my current
> database clusters (those are based around MySQL and its binary_log based
> replication).
Have you considered PostgreSQL as a potential solution before Postgres-XC? Why do you especially need XC?

> Before actually starting the installation of postgres-xc I would like
> to know what is the procedure for restarting nodes. I have already read a
> few documents/mails regarding restoring or resyncing a failed datanode,
> however these documents does not answer my simple question:
>
> What should be the procedure for rebooting servers? For example I have a
> kernel updated pending (due to security reasons) - I'm installing the new
> kernel, but I have to reboot all machine. Theoretically all nodes (both
> coordinators and datanodes) are working on different physical servers or
> VMes. In a perfect scenario I would like to keep the system in production
> while I am restarting the servers one by one. However I am not sure what
> would be the effect of rebooting servers one by one.
If a node is restarted or facing an outage, all the transactions it needs to be involved in will simply fail. In the case of a Coordinator, this has an effect only for DDL. For Datanodes, this has an effect as well for DDL, but also for DML and SELECT if the node is needed for the transaction.

> For purpose of example let me have four datanodes: A,B,C,D
>
> All servers are synced and are operating as expected.
> 1) Upgrade A, reboot A
> 2) INSERT/UPDATE/DELETE queries
> 3) A boots up and is successfully started
> 4) INSERT/UPDATE/DELETE queries
> 5) Upgrade B, reboot B
> ...
>
> As for the "Coordinators" nodes. How are those affected by temporary
> stopping and restarting the postgres-xc related services. What should be
> the load balancer in front of these servers in order to be able to both
> load-balance and fail-over if one of the Coordinators is offline either
> due to failed server or due to rebooting servers.
DDLs won't work. Applications will use one access point. In this case no problems for your application; connect to the other Coordinators to execute queries as long as they are not DDLs.

> I have no problem with relatively heavy operation of full restore of a
> datanode in event of failed server. Such restoration operation can be
> properly scheduled and executed, however I am interested how would
> postgres-xc react to a simple operation of restarting a server due to
> whatever reasons.
As mentioned above, transactions that need it will simply fail. You could always failover a slave for the outage period if necessary.
--
Michael
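The rolling restart itself is done node by node with the XC-flavoured pg_ctl. A minimal sketch, assuming a data directory layout like /opt/pgxc/nodes/dn_A (the paths are assumptions, and, as explained above, transactions touching the node will fail while it is down):

# Stop one Datanode before rebooting its server.
pg_ctl stop -Z datanode -D /opt/pgxc/nodes/dn_A -m fast
# ... install the new kernel and reboot the machine ...
# Start the Datanode again once the server is back.
pg_ctl start -Z datanode -D /opt/pgxc/nodes/dn_A -o "-i"
# Repeat for B, C, D; Coordinators restart the same way with -Z coordinator.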
|
From: Sandeep G. <gup...@gm...> - 2013-10-05 14:14:26
|
Thanks Michael. I understand. The only issue is that we have an update query as

update T set T.a = -1 from A where A.x = T.x

Both A and T are distributed by the x column. The problem is that the coordinator first does the join and then calls update several times at each datanode. This is turning out to be too slow. It would have been better if the entire query were shipped to the datanodes.

Thanks.
Sandeep

On Sat, Oct 5, 2013 at 6:27 AM, Michael Paquier <mic...@gm...> wrote:
> On Sat, Oct 5, 2013 at 2:58 AM, Sandeep Gupta <gup...@gm...> wrote:
> > I understand that the datanodes are read only and that updates/insert can
> > happen at coordinator.
> You got it.
>
> > Also, it does not allow modification of column over which the records
> > are distributed.
> Hum no, 1.1 allows ALTER TABLE that you can use to change the
> distribution type of a table.
>
> > However, in case I know what I am doing, it there anyway possible to
> > modify the values directly at datanodes.
> > The modifications are not over column over which distribution happens.
> If you mean by connecting directly to the Datanodes, no. You would
> break data consistency if table is replicated by the way by doing
> that. Let the Coordinator planner do the job and choose the remote
> nodes for you.
>
> There have been discussion to merge Coordinators and Datanodes
> together though. This would allow what you say, with a simpler cluster
> design.
> --
> Michael
|
From: Stefan L. <ar...@er...> - 2013-10-05 12:27:37
|
Hello,

I'm new to the Postgres-XC project. In fact I am still considering if I should install it in order to try it as a replacement of my current database clusters (those are based around MySQL and its binary_log based replication).

Before actually starting the installation of postgres-xc I would like to know what the procedure is for restarting nodes. I have already read a few documents/mails regarding restoring or resyncing a failed datanode, however these documents do not answer my simple question:

What should be the procedure for rebooting servers? For example I have a kernel update pending (due to security reasons) - I'm installing the new kernel, but I have to reboot all machines. Theoretically all nodes (both coordinators and datanodes) are working on different physical servers or VMs. In a perfect scenario I would like to keep the system in production while I am restarting the servers one by one. However I am not sure what the effect of rebooting servers one by one would be.

For the purpose of example let me have four datanodes: A, B, C, D

All servers are synced and are operating as expected.
1) Upgrade A, reboot A
2) INSERT/UPDATE/DELETE queries
3) A boots up and is successfully started
4) INSERT/UPDATE/DELETE queries
5) Upgrade B, reboot B
...

As for the "Coordinator" nodes: how are those affected by temporarily stopping and restarting the postgres-xc related services? What should the load balancer in front of these servers be in order to be able to both load-balance and fail over if one of the Coordinators is offline, either due to a failed server or due to rebooting servers?

I have no problem with the relatively heavy operation of a full restore of a datanode in the event of a failed server. Such a restoration operation can be properly scheduled and executed; however, I am interested in how postgres-xc would react to a simple operation of restarting a server due to whatever reason.

Kind Regards,
Stefan Lekov
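As one concrete illustration of the load-balancer question above, a minimal TCP-level HAProxy fragment could look like the sketch below. The addresses and the choice of HAProxy itself are assumptions, not something from this thread (the benchmark posts later in this archive used LVS/piranha instead):

listen pgxc_coordinators
    bind *:5432
    mode tcp
    balance leastconn
    option tcp-check
    server coord1 192.0.2.11:5432 check
    server coord2 192.0.2.12:5432 check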
|
From: Michael P. <mic...@gm...> - 2013-10-05 10:27:21
|
On Sat, Oct 5, 2013 at 2:58 AM, Sandeep Gupta <gup...@gm...> wrote:
> I understand that the datanodes are read only and that updates/insert can
> happen at coordinator.
You got it.

> Also, it does not allow modification of column over which the records are distributed.
Hum no, 1.1 allows ALTER TABLE that you can use to change the distribution type of a table.

> However, in case I know what I am doing, it there anyway possible to modify
> the values directly at datanodes.
> The modifications are not over column over which distribution happens.
If you mean by connecting directly to the Datanodes, no. You would break data consistency, by the way, if the table is replicated. Let the Coordinator planner do the job and choose the remote nodes for you.

There have been discussions to merge Coordinators and Datanodes together though. This would allow what you say, with a simpler cluster design.
--
Michael
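A hedged illustration of the 1.1 ALTER TABLE capability mentioned above; the table and column names are made up for the example, and the exact syntax should be verified against the 1.1 documentation:

-- Change the distribution column of a hash-distributed table (hypothetical names).
ALTER TABLE person DISTRIBUTE BY HASH (pid);

-- Or switch the same table to round-robin distribution.
ALTER TABLE person DISTRIBUTE BY ROUNDROBIN;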
|
From: Sandeep G. <gup...@gm...> - 2013-10-04 17:58:31
|
Hi,

I understand that the datanodes are read only and that updates/inserts can happen at the co-ordinator. Also, it does not allow modification of the column over which the records are distributed.

However, in case I know what I am doing, is there any way possible to modify the values directly at the datanodes? The modifications are not over the column over which distribution happens.

Thanks.
Sandeep
|
From: Julian <jul...@gm...> - 2013-10-04 09:59:47
|
Dear Sir,
My cluster has now been configured by pgxc_ctl, but it still failed.
Error message
--------------------------------------------------------------------------------
PGXC add coordinator master coord4 node4 20004 20010 /opt/pgxc/nodes/coord
…...
Actual Command: ssh pgxc@node4 "( pg_ctl start -Z restoremode -D /opt/pgxc/nodes/coord -o -i ) > /tmp/squeeze-10-200_STDOUT_2618_16 2>&1" < /dev/null > /dev/null 2>&1
Bring remote stdout: scp pgxc@node4:/tmp/squeeze-10-200_STDOUT_2618_16 /tmp/STDOUT_2618_17 > /dev/null 2>&1
SET
SET
psql:/tmp/GENERAL_2618_15:12: ERROR: role "pgxc" already exists
ALTER ROLE
REVOKE
REVOKE
GRANT
GRANT
CREATE NODE
CREATE NODE
CREATE NODE
CREATE NODE
CREATE NODE
CREATE NODE
You are now connected to database "postgres" as user "pgxc".
SET
SET
SET
SET
SET
COMMENT
CREATE EXTENSION
COMMENT
SET
SET
SET
psql:/tmp/GENERAL_2618_15:92: connection to server was lost
Actual Command: ssh pgxc@node4 "( pg_ctl stop -Z restoremode -D /opt/pgxc/nodes/coord ) > /tmp/squeeze-10-200_STDOUT_2618_18 2>&1" < /dev/null > /dev/null 2>&1
Bring remote stdout: scp pgxc@node4:/tmp/squeeze-10-200_STDOUT_2618_18 /tmp/STDOUT_2618_19 > /dev/null 2>&1
Starting coordinator master coord4
Done.
CREATE NODE
CREATE NODE
CREATE NODE
ALTER NODE
----------------------------------------------------------------
Error log
--------------------------------------------------------------------
ERROR: role "pgxc" already exists
STATEMENT: CREATE ROLE pgxc;
LOG: server process (PID 3013) was terminated by signal 11: Segmentation fault
DETAIL: Failed process was running: CREATE TABLE user_info_hash (
id integer NOT NULL,
firstname text,
lastname text,
info text
)
DISTRIBUTE BY HASH (id)
TO NODE (datanode1,datanode2,datanode3);
LOG: terminating any other active server processes
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly co
rrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
---------------------------------------------------------------------------
Any comments are appreciated
Best Regards,
--
Julian
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Thursday, October 3, 2013 at 9:31 AM, Koichi Suzuki wrote:
> You cannot add a coordinator in such a way. There're many issued to be resolved internally. You can configure and operate whole cluster with pgxc_ctl to get handy way to add coordinator/datanode.
>
> I understand you have your cluster configured without pgxc_ctl. In this case, adding coordinator manually could be a bit complicated work. Sorry, I've not uploaded the detailed step to do it.
>
> Whole steps will be found in add_coordinatorMaster() function defined in coord_cmd.c of pgxc_ctl source code. It will be found at contrib/pgxc_ctl in the release material.
>
> Please allow a bit of time to find my time to upload this information to XC wiki.
>
> Or, you can backup whole database with pg_dumpall, then reconfigure new xc cluster with additional coordinator, and then restore the backup.
>
> Regards;
>
> ---
> Koichi Suzuki
>
>
>
>
> 2013/10/3 Julian <jul...@gm... (mailto:jul...@gm...)>
> > Dear Sir,
> >
> > I have a cluster with 3 coordinators and 3 datanodes on 3 VM, today i
> > was try to added a new coordinator to the cluster, when i using command
> > "psql postgres -f coordinator.sql -p 5455" to restore the backup file to
> > the new coordinator.
> >
> > Then i got this message :
> >
> > psql:coordinator-dump.sql:105: connection to server was lost
> >
> > In the log file:
> > ----------------------------------------------------------------------------------------------------------------------------------------------------------------
> > 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,2,,2013-10-02
> > 22:59:42 CST,,0,LOG,00000,"server process (PID 29094) was terminated by
> > signal 11: Segmentation fault","Failed process was r
> > unning: CREATE TABLE user_info_hash (
> > id integer NOT NULL,
> > firstname text,
> > lastname text,
> > info text
> > )
> > DISTRIBUTE BY HASH (id)
> > TO NODE (dn2,dn1,dn3);",,,,,,,,""
> > 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,3,,2013-10-02
> > 22:59:42 CST,,0,LOG,00000,"terminating any other active server
> > processes",,,,,,,,,""
> > ------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > refer to
> > http://postgres-xc.sourceforge.net/docs/1_1/add-node-coordinator.html
> >
> > Is there something i doing worng?
> >
> >
> > Thanks for your kindly reply.
> >
> > And sorry for my poor english.
> >
> >
> > Best regards.
> >
> > ------------------------------------------------------------------------------
> > October Webinars: Code for Performance
> > Free Intel webinars can help you accelerate application performance.
> > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most from
> > the latest Intel processors and coprocessors. See abstracts and register >
> > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk
> > _______________________________________________
> > Postgres-xc-general mailing list
> > Pos...@li... (mailto:Pos...@li...)
> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>
|
|
From: Koichi S. <koi...@gm...> - 2013-10-04 02:33:11
|
Sorry, I'm not familiar with Hibernate, but JDBC does support DISTRIBUTE BY REPLICATION. You can issue CREATE TABLE through general JDBC apps, or you can issue ALTER TABLE to change a distributed table into a replicated table outside Hibernate, through psql. There is no influence on other query statements.

Regards;
---
Koichi Suzuki

2013/10/4 Anson Abraham <ans...@gm...>
> So I migrated a pg database (9.1) to postgres-xc 1.1. Required me to
> essentially apply the Distribute By Replication to most of the tables. But
> doing so, apparently this app threw out an error:
>
> Unable to upgrade schema to latest version.
> org.hibernate.exception.GenericJDBCException: ResultSet not positioned properly, perhaps you need to call next.
>     at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
>     at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
>     at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
>     at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:108)
>     at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
>     at $Proxy10.getInt(Unknown Source)
>     at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:212)
>     at com.cloudera.enterprise.dbutil.DbUtil$1SchemaVersionWork.execute(DbUtil.java:159)
>     at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:54)
>     at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1937)
>     at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1934)
>     at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:211)
>     at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1955)
>     at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1941)
>     at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:171)
>     at com.cloudera.enterprise.dbutil.DbUtil.upgradeSchema(DbUtil.java:333)
>     at com.cloudera.cmon.FhDatabaseManager.initialize(FhDatabaseManager.java:68)
>     at com.cloudera.cmon.firehose.Main.main(Main.java:339)
> Caused by: org.postgresql.util.PSQLException: ResultSet not positioned properly, perhaps you need to call next.
>     at org.postgresql.jdbc2.AbstractJdbc2ResultSet.checkResultSet(AbstractJdbc2ResultSet.java:2695)
>     at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:1992)
>     at com.mchange.v2.c3p0.impl.NewProxyResultSet.getInt(NewProxyResultSet.java:2547)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:104)
>     ...
>
> So I'm assuming hibernate does not support Distribute by Replication? Is
> that so, or did I not need to apply distribute by replication, though the
> table has a PK w/ is also reference as FK from another table? If not, is
> there a hack to get around this, w/o having to recompile hibernate objects?
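As an illustration of Koichi's suggestion to change the distribution outside Hibernate through psql, a hedged sketch only; the table name is made up, and the DISTRIBUTE BY syntax should be verified against the 1.1 docs:

-- Connected to a Coordinator with psql, turn a hash-distributed table
-- into a replicated one (hypothetical table name).
ALTER TABLE schema_version DISTRIBUTE BY REPLICATION;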
|
From: Mason S. <ma...@st...> - 2013-10-04 00:49:09
|
On Thu, Oct 3, 2013 at 1:34 PM, Sandeep Gupta <gup...@gm...> wrote:
> Hi,
>
> Setup has two computers. The configuration for gtm, gtm_proxy, datanode,
> and coordinator all have listen_address='*'.
>
> Here is the info about the setup:
>
> postgres=# select * from pgxc_node;
>    node_name    | node_type | node_port | node_host | nodeis_primary | nodeis_preferred |   node_id
> ----------------+-----------+-----------+-----------+----------------+------------------+-------------
>  coord1         | C         |      5432 | localhost | f              | f                |  1885696643
>  datanode_c1_d1 | D         |     45421 | sfx057    | f              | f                | -1199687708
>  datanode_c2_d1 | D         |     45421 | sfx050    | f              | f                |  -294121722
> (3 rows)
>
> select pgxc_pool_reload();
>  pgxc_pool_reload
> ------------------
>  t
> (1 row)
>
> However, when I create a table I get
>
> ERROR: Failed to get pooled connections
Please check if you have a firewall running and open up the necessary ports. Also, edit pg_hba.conf to allow connections from the coordinator to the data nodes.

> Is there anything else apart from listen_address that I am missing?
> Please let me know.
>
> -Sandeep

--
Mason Sharp
StormDB - http://www.stormdb.com
The Database Cloud
Postgres-XC Support and Services
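For illustration, the kind of pg_hba.conf entry Mason refers to might look like the line below on each Datanode; the subnet is a placeholder (not taken from the thread) and the auth method should match your setup:

# Allow the Coordinators' subnet to connect to the Datanode (placeholder network).
host    all    all    192.0.2.0/24    md5

After editing pg_hba.conf, reload the Datanode configuration and, on the Coordinator, refresh the connection pool with the pgxc_pool_reload() call already shown above.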
|
From: Sandeep G. <gup...@gm...> - 2013-10-03 20:34:08
|
Hi,

Setup has two computers. The configuration for gtm, gtm_proxy, datanode, and coordinator all have listen_address='*'.

Here is the info about the setup:

postgres=# select * from pgxc_node;
   node_name    | node_type | node_port | node_host | nodeis_primary | nodeis_preferred |   node_id
----------------+-----------+-----------+-----------+----------------+------------------+-------------
 coord1         | C         |      5432 | localhost | f              | f                |  1885696643
 datanode_c1_d1 | D         |     45421 | sfx057    | f              | f                | -1199687708
 datanode_c2_d1 | D         |     45421 | sfx050    | f              | f                |  -294121722
(3 rows)

select pgxc_pool_reload();
 pgxc_pool_reload
------------------
 t
(1 row)

However, when I create a table I get

ERROR: Failed to get pooled connections

Is there anything else apart from listen_address that I am missing? Please let me know.

-Sandeep
|
From: Anson A. <ans...@gm...> - 2013-10-03 19:48:38
|
So I migrated a pg database (9.1) to postgres-xc 1.1. Required me to essentially apply the Distribute By Replication to most of the tables. But doing so, apparently this app threw out an error:

Unable to upgrade schema to latest version.
org.hibernate.exception.GenericJDBCException: ResultSet not positioned properly, perhaps you need to call next.
    at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
    at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
    at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:108)
    at org.hibernate.engine.jdbc.internal.proxy.AbstractProxyHandler.invoke(AbstractProxyHandler.java:81)
    at $Proxy10.getInt(Unknown Source)
    at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:212)
    at com.cloudera.enterprise.dbutil.DbUtil$1SchemaVersionWork.execute(DbUtil.java:159)
    at org.hibernate.jdbc.WorkExecutor.executeWork(WorkExecutor.java:54)
    at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1937)
    at org.hibernate.internal.SessionImpl$2.accept(SessionImpl.java:1934)
    at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.coordinateWork(JdbcCoordinatorImpl.java:211)
    at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1955)
    at org.hibernate.internal.SessionImpl.doWork(SessionImpl.java:1941)
    at com.cloudera.enterprise.dbutil.DbUtil.getSchemaVersion(DbUtil.java:171)
    at com.cloudera.enterprise.dbutil.DbUtil.upgradeSchema(DbUtil.java:333)
    at com.cloudera.cmon.FhDatabaseManager.initialize(FhDatabaseManager.java:68)
    at com.cloudera.cmon.firehose.Main.main(Main.java:339)
Caused by: org.postgresql.util.PSQLException: ResultSet not positioned properly, perhaps you need to call next.
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.checkResultSet(AbstractJdbc2ResultSet.java:2695)
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getInt(AbstractJdbc2ResultSet.java:1992)
    at com.mchange.v2.c3p0.impl.NewProxyResultSet.getInt(NewProxyResultSet.java:2547)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.hibernate.engine.jdbc.internal.proxy.AbstractResultSetProxyHandler.continueInvocation(AbstractResultSetProxyHandler.java:104)
    ...

So I'm assuming hibernate does not support Distribute by Replication? Is that so, or did I not need to apply distribute by replication, though the table has a PK which is also referenced as an FK from another table? If not, is there a hack to get around this, w/o having to recompile hibernate objects?
|
From: Hector M. J. <hec...@et...> - 2013-10-03 16:21:42
|
Hi all,
First of all, I would like to thank Mr. Koichi Suzuki for his comments on my
previous post. He was right with regard to the max_connections parameter. In
the new configuration each DataNode has a maximum of 800 (three coordinators
with 200 concurrent connections each, plus 200 extra).
Regarding the suggested stress tool (dbt-1), I am studying its characteristics
and the peculiarities of its installation and use. When I get my first results
with it I will publish them in this forum.
I still have details to resolve, such as using the gtmPxyExtraConfig parameter
of pgxc_ctl.conf to include parameters (worker_threads = xx, for example) in
the gtm_proxy settings. This was one of the problems detected during the
initial deployment through the pgxc_ctl tool.
This is the second post about my impressions with pgxc-1.1
In the previous post I laid out the details of the scenario we want to build,
as well as the configuration needed to reach an acceptable state of
functionality and operability.
Once we reached that point, we designed and executed a set of tests (based on
pgbench) in order to measure the performance of our installation and to know
when we had reached our goals: 300 (or more) concurrent connections and an
increased number of transactional operations.
The modifications were made mainly to the DataNode config. These changes were
implemented through the datanodeExtraConfig parameter (pgxc_ctl.conf file) and
were as follows:
# ================================================
# Added to all the DataNode postgresql.conf
# Original: datanodeExtraConfig
log_destination = 'syslog'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 800
work_mem = 100MB
fsync = off
shared_buffers = 5GB
wal_buffers = 1MB
checkpoint_timeout = 5min
effective_cache_size = 12GB
checkpoint_segments = 64
checkpoint_completion_target = 0.9
maintenance_work_mem = 4GB
max_prepared_transactions = 800
synchronous_commit = off
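For reference, settings like these reach the Datanodes through the extra-config hook of pgxc_ctl. A minimal sketch of the corresponding pgxc_ctl.conf lines, assuming the snippets above are saved in separate files (the file names and paths are assumptions):

# Hypothetical pgxc_ctl.conf fragment pointing at the extra-config files.
coordExtraConfig=/home/pgxc/pgxc_ctl/coordExtra.conf
datanodeExtraConfig=/home/pgxc/pgxc_ctl/datanodeExtra.conf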
These modifications allowed us to obtain results with increases between
two and three times (in some cases, more than three times) with respect
to the initial results set (number of transactions and tps).
Our other goal, getting more than 300 concurrent connections, was reached; we
measured up to 355 connections.
During the course of testing, measurements were taken of the CPU and memory
consumption on each of the components of the cluster (dn01, dn02, dn03, gtm).
For these measurements, the sar command was used with the following parameters:
-u  Report CPU utilization
-r  Report memory utilization statistics
200 iterations with an interval of 5 seconds (14 tests with a duration of
60 seconds each, divided by 5 seconds, is roughly 170 iterations).
The averages obtained per server follow.
This command was executed on each server before launching the tests:
sar -u -r 5 200
dn01:
Average:  CPU   %user  %nice  %system  %iowait  %steal  %idle
Average:  all   15.79   0.00    18.20     0.44    0.00  65.57
Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:   12003982    4327934     26.50      44163   1766752   7616567    37.35
dn02:
Average:  CPU   %user  %nice  %system  %iowait  %steal  %idle
Average:  all   14.89   0.00    17.37     0.11    0.00  67.62
Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:   12097661    4234255     25.93      42716   1725394   7609960    37.31
dn03:
Average:  CPU   %user  %nice  %system  %iowait  %steal  %idle
Average:  all   16.67   0.00    19.59     0.57    0.00  63.17
Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:   11603908    4728008     28.95      42955   1708146   7609769    37.31
gtm:
Average:  CPU   %user  %nice  %system  %iowait  %steal  %idle
Average:  all    8.54   0.00    24.80     0.12    0.00  66.54
Average:  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit
Average:    3553938     370626      9.44      42358    120419    723856     9.06
The results obtained in each of the tests:
[root@rhelclient ~]# pgbench -c 16 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 23680
tps = 394.458636 (including connections establishing)
tps = 394.637063 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 16 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 108929
tps = 1815.247714 (including connections establishing)
tps = 1815.947505 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 16 -j 16 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 23953
tps = 399.034541 (including connections establishing)
tps = 399.120451 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 16 -j 16 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 127142
tps = 2118.825088 (including connections establishing)
tps = 2119.318006 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 96 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 95644
tps = 1592.722011 (including connections establishing)
tps = 1595.906611 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 96 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 580728
tps = 9675.754717 (including connections establishing)
tps = 9695.954649 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 64 -j 32 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 72239
tps = 1183.511659 (including connections establishing)
tps = 1184.529232 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 64 -j 32 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 388861
tps = 6479.326642 (including connections establishing)
tps = 6482.532350 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 64 -j 64 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 61663
tps = 1026.636406 (including connections establishing)
tps = 1027.679280 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 64 -j 64 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 369321
tps = 6151.931064 (including connections establishing)
tps = 6155.611035 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 104 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 80479
tps = 1337.396423 (including connections establishing)
tps = 1347.248687 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 104 -j 8 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 587109
tps = 9782.401960 (including connections establishing)
tps = 9805.111450 (excluding connections establishing)
[root@rhelclient ~]# pgbench -c 300 -j 10 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 300
number of threads: 10
duration: 60 s
number of transactions actually processed: 171351
tps = 2849.021939 (including connections establishing)
tps = 2869.345032 (excluding connections establishing)
[root@rhelclient ~]# pgbench -S -c 300 -j 10 -T 60 -h 192.168.97.44 -U
postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 300
number of threads: 10
duration: 60 s
number of transactions actually processed: 1177464
tps = 19613.592584 (including connections establishing)
tps = 19716.537285 (excluding connections establishing)
[root@rhelclient ~]#
Our new goal is to give our pgxc cluster High Availability characteristics by
adding slave servers for the DataNodes (and perhaps for the coordinator
servers).
When we have this environment, we will run the same set of tests as above
again, plus new ones, such as fault-recovery tests simulating the loss of
cluster components, etc.
Ok, that's all for now.
Thank you very much. This is a great project and I'm very glad I found it.
Thanks again,
Hector M. Jacas |
|
From: Koichi S. <koi...@gm...> - 2013-10-03 01:31:26
|
You cannot add a coordinator in such a way. There are many issues to be resolved internally. You can configure and operate the whole cluster with pgxc_ctl to get a handy way to add a coordinator/datanode.

I understand you have your cluster configured without pgxc_ctl. In this case, adding a coordinator manually could be a bit of complicated work. Sorry, I've not uploaded the detailed steps to do it.

The whole set of steps will be found in the add_coordinatorMaster() function defined in coord_cmd.c of the pgxc_ctl source code. It will be found at contrib/pgxc_ctl in the release material.

Please allow a bit of time for me to find the time to upload this information to the XC wiki.

Or, you can back up the whole database with pg_dumpall, then reconfigure a new XC cluster with the additional coordinator, and then restore the backup.

Regards;
---
Koichi Suzuki

2013/10/3 Julian <jul...@gm...>
> Dear Sir,
>
> I have a cluster with 3 coordinators and 3 datanodes on 3 VM, today i
> was try to added a new coordinator to the cluster, when i using command
> "psql postgres -f coordinator.sql -p 5455" to restore the backup file to
> the new coordinator.
>
> Then i got this message :
>
> psql:coordinator-dump.sql:105: connection to server was lost
>
> In the log file:
> 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,2,,2013-10-02 22:59:42 CST,,0,LOG,00000,"server process (PID 29094) was terminated by signal 11: Segmentation fault","Failed process was running: CREATE TABLE user_info_hash (
>     id integer NOT NULL,
>     firstname text,
>     lastname text,
>     info text
> )
> DISTRIBUTE BY HASH (id)
> TO NODE (dn2,dn1,dn3);",,,,,,,,""
> 2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,3,,2013-10-02 22:59:42 CST,,0,LOG,00000,"terminating any other active server processes",,,,,,,,,""
>
> refer to
> http://postgres-xc.sourceforge.net/docs/1_1/add-node-coordinator.html
>
> Is there something i doing worng?
>
> Thanks for your kindly reply.
>
> And sorry for my poor english.
>
> Best regards.
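A hedged sketch of the pg_dumpall route Koichi mentions; host names, ports, and the pgxc_ctl invocation are placeholders, not values from the thread:

# Take a full logical backup from one of the existing Coordinators.
pg_dumpall -h old_coordinator -p 5432 > whole_cluster.sql
# ...re-deploy the cluster with the extra Coordinator in its configuration,
#    e.g. via pgxc_ctl's "init all" after editing pgxc_ctl.conf...
# Restore the backup through one Coordinator of the new cluster.
psql -h new_coordinator -p 5432 -f whole_cluster.sql postgres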
|
From: Julian <jul...@gm...> - 2013-10-02 15:53:22
|
Dear Sir,
I have a cluster with 3 coordinators and 3 datanodes on 3 VMs. Today I
tried to add a new coordinator to the cluster. When I used the command
"psql postgres -f coordinator.sql -p 5455" to restore the backup file to
the new coordinator, I got this message:
psql:coordinator-dump.sql:105: connection to server was lost
In the log file:
----------------------------------------------------------------------------------------------------------------------------------------------------------------
2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,2,,2013-10-02
22:59:42 CST,,0,LOG,00000,"server process (PID 29094) was terminated by
signal 11: Segmentation fault","Failed process was r
unning: CREATE TABLE user_info_hash (
id integer NOT NULL,
firstname text,
lastname text,
info text
)
DISTRIBUTE BY HASH (id)
TO NODE (dn2,dn1,dn3);",,,,,,,,""
2013-10-02 23:00:09.811 CST,,,29061,,524c34de.7185,3,,2013-10-02
22:59:42 CST,,0,LOG,00000,"terminating any other active server
processes",,,,,,,,,""
------------------------------------------------------------------------------------------------------------------------------------------------------------------
refer to
http://postgres-xc.sourceforge.net/docs/1_1/add-node-coordinator.html
Is there something I am doing wrong?
Thanks for your kindly reply.
And sorry for my poor english.
Best regards.
|
|
From: Koichi S. <koi...@gm...> - 2013-10-02 03:01:33
|
This is a quick comment to your configuration. 1. It's better to configure gtm_proxy for each VM. gtm_proxy groups request/response between coordinator/datanode and gtm to reduce the network workload. 2. You may not need this many max_connection of each datanode. Your max connection to each coordinator is 200 so max connection to each datanode will be 200 x 4 = 800. Just in case, I suppose 1000 is sufficient. 3. Unfortunately, pgbench is not fullly tuned for XC. If you have specific application in mind, you may want to run it. Or, you can try DBT-1 benchmark, which is better tuned for XC, installation will be a bit complicated though. It is available from Postgres-XC development page, https://sourceforge.net/projects/postgres-xc/ You will find dbt-1 repo at git tag of the page. Good luck. --- Koichi Suzuki 2013/10/2 Hector M. Jacas <hec...@et...> > > Dear Sir, > > Below I discuss the first results of the tests performed with version 1.1 > of pgxc official. > > During the rest of the week I will be making adjustments to the settings > in order to get better performance and up to 300 concurrent connections > (now I can only reach up to 150 concurrent connections). > > If someone with experience in this solution can provide it will be welcome. > > The scenario I have is as follows: > > In a cluster VMware ESXi 5.1 Update 1 with 3 esx host HPBL685c G7 427.9 GB > of ram one vApp with 4 virtual machines. > > 3 of these virtual machines with 16 vCPU and 1GB of ram per vCPU. Each > virtual machine acts of gtm_proxy / coordinator / data node. > > The fourth and last virtual machine (4 vCPUs and 1GB of ram per vCPU) acts > of GTM. > > Operating System: RHELv6.4 (Kernel 2.6.32-358.18.1.el6.x86_64) > Installation Type: Basic > Additional Packages: Development Kit > Filesystem type for the data area: XFS without additional adjustments > > This cluster of postgres-xc is balanced with pirahna (LVS) with a virtual > IP (192.168.97.44) and port 5432 using the LC method (less connections) > > dn01/192.168.97.40:(gtm_proxy p:6666/coordinator p:5432/DataNode p:5433) > dn02/192.168.97.41:(gtm_proxy p:6666/coordinator p:5432/DataNode p:5433) > dn03/192.168.97.42:(gtm_proxy p:6666/coordinator p:5432/DataNode p:5433) > gtm /192.168.97.43: (gtm p:6666) > > The deployment method used is pgxc_ctl with some additional configuration > parameters through coordExtraConfig and datanodeExtraConfig and whose > contents will describe: > > coordExtraConfig: > > # ==============================**================== > # Added to all the coordinator postgresql.conf > # Original: coordExtraConfig > log_destination = 'syslog' > logging_collector = on > log_directory = 'pg_log' > listen_addresses = '*' > max_connections = 200 > > datanodeExtraConfig: > # ==============================**================== > # Added to all the coordinator postgresql.conf > # Original: datanodeExtraConfig > log_destination = 'syslog' > logging_collector = on > log_directory = 'pg_log' > listen_addresses = '*' > max_connections = 4096 > fsync = off > shared_buffers = 4GB > wal_buffers = 1MB > checkpoint_timeout = 5min > checkpoint_segments = 16 > maintenance_work_mem = 4GB > max_prepared_transactions = 4096 > synchronous_commit = off > > NOTE: I should point out that in the case of gtm_proxy definition, I was > unable to add through gtmPxyExtraConfig parameter, additional parameters > such as worker_threads = 15. 
> > In this particular case, invariably reflected in the final configuration > worker_threads = 1 > > Ok, the results: > > [root@rhelclient ~]# createdb -h 192.168.97.44 -U postgres pgbench > [root@rhelclient ~]# pgbench -i -s 100 -h 192.168.97.44 -U postgres > pgbench > NOTICE: table "pgbench_branches" does not exist, skipping > NOTICE: table "pgbench_tellers" does not exist, skipping > NOTICE: table "pgbench_accounts" does not exist, skipping > NOTICE: table "pgbench_history" does not exist, skipping > creating tables... > 10000 tuples done. > 20000 tuples done. > 30000 tuples done. > 40000 tuples done. > : > : > : > 9950000 tuples done. > 9960000 tuples done. > 9970000 tuples done. > 9980000 tuples done. > 9990000 tuples done. > 10000000 tuples done. > set primary key... > NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index > "pgbench_branches_pkey" for table "pgbench_branches" > NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index > "pgbench_tellers_pkey" for table "pgbench_tellers" > NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index > "pgbench_accounts_pkey" for table "pgbench_accounts" > vacuum...done. > > [root@rhelclient ~]# pgbench -c 16 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 16 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 10217 > tps = 170.186523 (including connections establishing) > tps = 170.279171 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 16 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 16 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 62484 > tps = 1041.168271 (including connections establishing) > tps = 1041.592065 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 16 -j 16 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 16 > number of threads: 16 > duration: 60 s > number of transactions actually processed: 14001 > tps = 232.131191 (including connections establishing) > tps = 232.374844 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 16 -j 16 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 16 > number of threads: 16 > duration: 60 s > number of transactions actually processed: 59981 > tps = 998.525044 (including connections establishing) > tps = 998.773480 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 96 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 96 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 25577 > tps = 423.132610 (including connections establishing) > tps = 424.137946 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 96 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. 
> transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 96 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 115994 > tps = 1929.694756 (including connections establishing) > tps = 1939.253901 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 64 -j 32 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 64 > number of threads: 32 > duration: 60 s > number of transactions actually processed: 14087 > tps = 234.278526 (including connections establishing) > tps = 234.472208 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 64 -j 32 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 64 > number of threads: 32 > duration: 60 s > number of transactions actually processed: 124367 > tps = 2072.136619 (including connections establishing) > tps = 2073.475197 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 64 -j 64 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 64 > number of threads: 64 > duration: 60 s > number of transactions actually processed: 14239 > tps = 232.935903 (including connections establishing) > tps = 236.194708 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 64 -j 64 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 64 > number of threads: 64 > duration: 60 s > number of transactions actually processed: 95342 > tps = 1530.270930 (including connections establishing) > tps = 1531.320634 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -c 104 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: TPC-B (sort of) > scaling factor: 100 > query mode: simple > number of clients: 104 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 33543 > tps = 542.183967 (including connections establishing) > tps = 543.542901 (excluding connections establishing) > > [root@rhelclient ~]# pgbench -S -c 104 -j 8 -T 60 -h 192.168.97.44 -U > postgres pgbench > starting vacuum...end. > transaction type: SELECT only > scaling factor: 100 > query mode: simple > number of clients: 104 > number of threads: 8 > duration: 60 s > number of transactions actually processed: 286291 > tps = 4770.397054 (including connections establishing) > tps = 4790.285324 (excluding connections establishing) > [root@rhelclient ~]# > --- > This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE > running at host imx3.etecsa.cu > Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. 
See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
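A rough sketch of the sizing rule from point 2, assuming the three-coordinator setup described in the quoted configuration (the values here only illustrate the arithmetic):

    #!/bin/bash
    # Minimal sizing sketch: every coordinator can open up to its max_connections
    # sessions against each datanode, so the datanode ceiling follows from that.
    COORDINATORS=3              # coordinators in this cluster
    COORD_MAX_CONNECTIONS=200   # max_connections on each coordinator
    DATANODE_MAX=$(( COORDINATORS * COORD_MAX_CONNECTIONS ))
    # Rounding up (for example to 1000) leaves headroom for maintenance sessions.
    echo "datanode max_connections should be at least ${DATANODE_MAX}"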
|
From: Koichi S. <koi...@gm...> - 2013-10-02 01:56:12
|
The PostgreSQL community adopted commit fests to keep their release cycle as predictable as possible, based on feedback from the 8.3 release work. Yes, as Michael mentioned, I think it may take a bit more time before a commit fest becomes necessary for XC. Anybody can submit a patch to the XC community through the developers mailing list, and it will be reviewed by some of the community members. The process is currently very flexible. Please visit the XC web page, postgres-xc.sourceforge.net, for information.

Regards;
---
Koichi Suzuki

2013/10/2 Michael Paquier <mic...@gm...> > On Wed, Oct 2, 2013 at 1:19 AM, Aris Setyawan <ari...@gm...> wrote: > > Hi, > > > > I'm a PHP web developer. I'm interested in contributing commit-fest > > system for posgre-xc, though, recently, I have no plan or time or > > resource to offer. > > > > I have some questions, about this topic. > > > > 1. Is XC need a commit-fest app? > I honestly don't think that XC, which has a more limited community > than Postgres, is at a state where it would need a commit fest > application. We also do not have a website provider that could host > it. > > > 2. Is Postgresql commit-fest app available in opensource? > Not sure about that. The code of postgresql.org is for example released > here: > https://github.com/postgres/pgweb > I am pretty sure that if it is available it would be under the > PostgreSQL license. If you cannot google it, why not contacting those > guys directly? > > > 3. How commit-fest working? Eg: How to submit a patch, reviewing, > committing. > Have a look here: > https://wiki.postgresql.org/wiki/Submitting_a_Patch > https://wiki.postgresql.org/wiki/CommitFest > > Regards, > -- > Michael > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > |
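For reference, a minimal sketch of preparing such a patch for the developers mailing list; the branch and file names are only placeholders:

    # Work on a topic branch, then export the commits as a patch to attach to a mail.
    git checkout -b my-xc-fix master
    # ... edit, test, git commit ...
    git format-patch master --stdout > my-xc-fix.patch
    # send my-xc-fix.patch to the postgres-xc developers mailing list for review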
|
From: Michael P. <mic...@gm...> - 2013-10-01 23:53:58
|
On Wed, Oct 2, 2013 at 1:19 AM, Aris Setyawan <ari...@gm...> wrote:
> Hi,
>
> I'm a PHP web developer. I'm interested in contributing commit-fest
> system for posgre-xc, though, recently, I have no plan or time or
> resource to offer.
>
> I have some questions, about this topic.
>
> 1. Is XC need a commit-fest app?
I honestly don't think that XC, which has a more limited community than Postgres, is at a stage where it would need a commit fest application. We also do not have a website provider that could host it.

> 2. Is Postgresql commit-fest app available in opensource?
Not sure about that. The code of postgresql.org is, for example, released here:
https://github.com/postgres/pgweb
I am pretty sure that if it is available it would be under the PostgreSQL license. If you cannot google it, why not contact those guys directly?

> 3. How commit-fest working? Eg: How to submit a patch, reviewing, committing.
Have a look here:
https://wiki.postgresql.org/wiki/Submitting_a_Patch
https://wiki.postgresql.org/wiki/CommitFest

Regards,
--
Michael |
|
From: Hector M. J. <hec...@et...> - 2013-10-01 20:41:58
|
Dear Sir,

Below I discuss the first results of the tests performed with version 1.1 of the official pgxc.

During the rest of the week I will be making adjustments to the settings in order to get better performance and reach up to 300 concurrent connections (right now I can only reach up to 150 concurrent connections). If someone with experience in this solution can provide advice, it will be welcome.

The scenario I have is as follows:

A VMware ESXi 5.1 Update 1 cluster with 3 ESX hosts (HP BL685c G7, 427.9 GB of RAM) running one vApp with 4 virtual machines.

3 of these virtual machines have 16 vCPUs and 1 GB of RAM per vCPU. Each of them acts as gtm_proxy / coordinator / datanode.

The fourth and last virtual machine (4 vCPUs and 1 GB of RAM per vCPU) acts as the GTM.

Operating System: RHELv6.4 (Kernel 2.6.32-358.18.1.el6.x86_64)
Installation Type: Basic
Additional Packages: Development Kit
Filesystem type for the data area: XFS without additional adjustments

This postgres-xc cluster is load balanced with Piranha (LVS) behind a virtual IP (192.168.97.44) on port 5432, using the LC (least connections) method.

dn01/192.168.97.40:(gtm_proxy p:6666/coordinator p:5432/DataNode p:5433)
dn02/192.168.97.41:(gtm_proxy p:6666/coordinator p:5432/DataNode p:5433)
dn03/192.168.97.42:(gtm_proxy p:6666/coordinator p:5432/DataNode p:5433)
gtm /192.168.97.43: (gtm p:6666)

The deployment method used is pgxc_ctl with some additional configuration parameters through coordExtraConfig and datanodeExtraConfig, whose contents I describe below:

coordExtraConfig:

# ================================================
# Added to all the coordinator postgresql.conf
# Original: coordExtraConfig
log_destination = 'syslog'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 200

datanodeExtraConfig:
# ================================================
# Added to all the coordinator postgresql.conf
# Original: datanodeExtraConfig
log_destination = 'syslog'
logging_collector = on
log_directory = 'pg_log'
listen_addresses = '*'
max_connections = 4096
fsync = off
shared_buffers = 4GB
wal_buffers = 1MB
checkpoint_timeout = 5min
checkpoint_segments = 16
maintenance_work_mem = 4GB
max_prepared_transactions = 4096
synchronous_commit = off

NOTE: I should point out that, in the case of the gtm_proxy definition, I was unable to add extra parameters such as worker_threads = 15 through the gtmPxyExtraConfig parameter. In this particular case, the final configuration invariably ended up with worker_threads = 1.

Ok, the results:

[root@rhelclient ~]# createdb -h 192.168.97.44 -U postgres pgbench
[root@rhelclient ~]# pgbench -i -s 100 -h 192.168.97.44 -U postgres pgbench
NOTICE: table "pgbench_branches" does not exist, skipping
NOTICE: table "pgbench_tellers" does not exist, skipping
NOTICE: table "pgbench_accounts" does not exist, skipping
NOTICE: table "pgbench_history" does not exist, skipping
creating tables...
10000 tuples done.
20000 tuples done.
30000 tuples done.
40000 tuples done.
:
:
:
9950000 tuples done.
9960000 tuples done.
9970000 tuples done.
9980000 tuples done.
9990000 tuples done.
10000000 tuples done.
set primary key...
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_branches_pkey" for table "pgbench_branches"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_tellers_pkey" for table "pgbench_tellers"
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pgbench_accounts_pkey" for table "pgbench_accounts"
vacuum...done.
[root@rhelclient ~]# pgbench -c 16 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 10217
tps = 170.186523 (including connections establishing)
tps = 170.279171 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 16 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 8
duration: 60 s
number of transactions actually processed: 62484
tps = 1041.168271 (including connections establishing)
tps = 1041.592065 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 16 -j 16 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 14001
tps = 232.131191 (including connections establishing)
tps = 232.374844 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 16 -j 16 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 16
number of threads: 16
duration: 60 s
number of transactions actually processed: 59981
tps = 998.525044 (including connections establishing)
tps = 998.773480 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 96 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 25577
tps = 423.132610 (including connections establishing)
tps = 424.137946 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 96 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 96
number of threads: 8
duration: 60 s
number of transactions actually processed: 115994
tps = 1929.694756 (including connections establishing)
tps = 1939.253901 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 64 -j 32 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 14087
tps = 234.278526 (including connections establishing)
tps = 234.472208 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 64 -j 32 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 32
duration: 60 s
number of transactions actually processed: 124367
tps = 2072.136619 (including connections establishing)
tps = 2073.475197 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 64 -j 64 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 14239
tps = 232.935903 (including connections establishing)
tps = 236.194708 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 64 -j 64 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 64
number of threads: 64
duration: 60 s
number of transactions actually processed: 95342
tps = 1530.270930 (including connections establishing)
tps = 1531.320634 (excluding connections establishing)

[root@rhelclient ~]# pgbench -c 104 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 33543
tps = 542.183967 (including connections establishing)
tps = 543.542901 (excluding connections establishing)

[root@rhelclient ~]# pgbench -S -c 104 -j 8 -T 60 -h 192.168.97.44 -U postgres pgbench
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 104
number of threads: 8
duration: 60 s
number of transactions actually processed: 286291
tps = 4770.397054 (including connections establishing)
tps = 4790.285324 (excluding connections establishing)

[root@rhelclient ~]# |
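A small sketch of how a sweep like the one above can be scripted; the host, user, and client counts are simply the values used in this test:

    #!/bin/bash
    # Run the TPC-B style and the SELECT-only pgbench tests at several client counts.
    HOST=192.168.97.44
    USER=postgres
    DB=pgbench
    for CLIENTS in 16 64 96 104; do
        pgbench    -c "$CLIENTS" -j 8 -T 60 -h "$HOST" -U "$USER" "$DB"
        pgbench -S -c "$CLIENTS" -j 8 -T 60 -h "$HOST" -U "$USER" "$DB"
    done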
|
From: Aris S. <ari...@gm...> - 2013-10-01 16:19:56
|
Hi,

I'm a PHP web developer. I'm interested in contributing a commit-fest system for postgres-xc, although at the moment I have no plan, time, or resources to offer.

I have some questions about this topic.

1. Does XC need a commit-fest app?
2. Is the PostgreSQL commit-fest app available as open source?
3. How does a commit fest work? E.g., how to submit a patch, review it, and commit it.

Aris. |
|
From: Koichi S. <koi...@gm...> - 2013-10-01 02:13:36
|
Thanks for the info. I'm afraid this could be operating-system specific and I might have to run the debugger in the same environment to see exactly what's going on. Because I don't have Fedora, it might take a while. I would appreciate it if you could run the debugger against gtm_proxy and see when it exits (you can start gtm_proxy as you're doing and attach gdb to the gtm_proxy process).

BTW, monitor is a pgxc_ctl command. Please issue "monitor all" at the pgxc_ctl prompt. You will see which components are running and which are not.

Regards;
---
Koichi Suzuki

2013/10/1 Sandeep Gupta <gup...@gm...> > Hi Koichi, > > Thanks for looking into this. I did some more debugging. The gtm_proxy > runs as long as the datanodes/co-ordinator have not started. > Once I launch the co-ordinator (or the data-nodes, not sure since they all > get invoked by the same script) the gtm_proxy exits. > Unfortunately, the logfile is empty as well. So not sure what is going on. > It is something to do with my system only. > > Is monitor is postgres/pgxc related command. Otherwise I can see the via > ps command that proxy exited. > > -Sandeep > > > > > On Mon, Sep 30, 2013 at 9:04 PM, 鈴木 幸市 <ko...@in...> wrote: > >> I've not seen such phenomena yet. Does gtm_proxy exits silently just >> after it is launched? Could you issue monitor command to see if it is >> running? Pgxc_ctl uses many system(3) and popen(3) and there might be a >> slight chance to hit OS dependencies.sd >> >> Regards; >> --- >> Koichi Suzuki >> >> On 2013/10/01, at 8:09, Sandeep Gupta <gup...@gm...> wrote: >> >> > Hi, >> > >> > I have been working with pgxc for a couple of months on a old machine. >> Today I installed pgxc (v1.1) on >> > a new machine. All the ports, gtm_port, gtm_proxy port, pooler port >> are set to different values than the default. >> > >> > The OS is fedora on core i7. The problem I am facing is that the >> gtm_proxy exits silently. Nothing gets written in the log >> > file as well. However, if i fire the proxy within the gdb debugger >> things work fine (this I didn't double check but I think it is happening). >> > >> > The pgxc launch scripts works fine on other machine. I am not if I have >> messed up some other system parameters etc shmem size etc. >> > Please let me know. Any ideas/suggestions are welcome. >> > >> > -Sandeep >> > >> > >> ------------------------------------------------------------------------------ >> > October Webinars: Code for Performance >> > Free Intel webinars can help you accelerate application performance. >> > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the >> most from >> > the latest Intel processors and coprocessors. See abstracts and >> register > >> > >> http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk_______________________________________________ >> > Postgres-xc-general mailing list >> > Pos...@li... >> > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
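A quick sketch of attaching the debugger as suggested above, assuming gdb and pgrep are available on the machine:

    #!/bin/bash
    # Attach gdb to the running gtm_proxy process to catch the moment it exits.
    GTM_PROXY_PID=$(pgrep -o gtm_proxy)   # oldest matching gtm_proxy process
    gdb -p "$GTM_PROXY_PID"
    # Inside gdb: "continue", then start the coordinator/datanodes from another
    # shell and observe where gtm_proxy exits or which signal it receives.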
|
From: Sandeep G. <gup...@gm...> - 2013-10-01 01:59:19
|
Hi Koichi,

Thanks for looking into this. I did some more debugging. The gtm_proxy runs as long as the datanodes/coordinator have not started. Once I launch the coordinator (or the datanodes; not sure which, since they all get invoked by the same script), the gtm_proxy exits. Unfortunately, the logfile is empty as well, so I am not sure what is going on. It is probably something specific to my system.

Is monitor a postgres/pgxc related command? Otherwise, I can see via the ps command that the proxy exited.

-Sandeep

On Mon, Sep 30, 2013 at 9:04 PM, 鈴木 幸市 <ko...@in...> wrote: > I've not seen such phenomena yet. Does gtm_proxy exits silently just > after it is launched? Could you issue monitor command to see if it is > running? Pgxc_ctl uses many system(3) and popen(3) and there might be a > slight chance to hit OS dependencies.sd > > Regards; > --- > Koichi Suzuki > > On 2013/10/01, at 8:09, Sandeep Gupta <gup...@gm...> wrote: > > > Hi, > > > > I have been working with pgxc for a couple of months on a old machine. > Today I installed pgxc (v1.1) on > > a new machine. All the ports, gtm_port, gtm_proxy port, pooler port are > set to different values than the default. > > > > The OS is fedora on core i7. The problem I am facing is that the > gtm_proxy exits silently. Nothing gets written in the log > > file as well. However, if i fire the proxy within the gdb debugger > things work fine (this I didn't double check but I think it is happening). > > > > The pgxc launch scripts works fine on other machine. I am not if I have > messed up some other system parameters etc shmem size etc. > > Please let me know. Any ideas/suggestions are welcome. > > > > -Sandeep > > > > > ------------------------------------------------------------------------------ > > October Webinars: Code for Performance > > Free Intel webinars can help you accelerate application performance. > > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > > the latest Intel processors and coprocessors. See abstracts and register > > > > > http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk_______________________________________________ > > Postgres-xc-general mailing list > > Pos...@li... > > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
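A minimal sketch of the kind of check mentioned above; the specific kernel and shell limits inspected here are only a guess at the "system parameters" that might differ between the two machines:

    #!/bin/bash
    # Verify whether gtm_proxy is still alive, then look at limits that commonly
    # differ between machines (shared memory, open file descriptors).
    ps -ef | grep '[g]tm_proxy' || echo "gtm_proxy is not running"
    sysctl kernel.shmmax kernel.shmall   # kernel shared memory limits
    ulimit -n                            # open file descriptor limit for this shell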