From: Amit K. <ami...@en...> - 2013-10-17 05:52:38
|
I think Aris's expectation is that by using FDW support we can create a cluster of heterogeneous nodes. This is not going to happen just by supporting foreign data wrappers and foreign tables. Note that allowing a foreign table/server to be created in Postgres-XC means a foreign table will be created on a machine which is completely outside the Postgres-XC cluster. Making that machine a part of the cluster is a different thing, and that does not require FDWs.

_______________________________________________
Postgres-xc-general mailing list
Pos...@li...
https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
|
|
From: 鈴木 幸市 <ko...@in...> - 2013-10-17 05:18:30
|
So far, CREATE FOREIGN DATA WRAPPER, CREATE SERVER and CREATE USER MAPPING are blocked. As Michael suggests, yes, it would be nice to connect to foreign data through coordinators or even datanodes.

It's welcome if more people are involved in the testing, not just development. Contribution of code is more than welcome.

Unfortunately, nobody has done this work, mainly due to resource constraints. It would be wonderful if anybody could join and contribute the code. There is no reason XC should not support FDW.

Best;
---
Koichi Suzuki
|
|
From: Ashutosh B. <ash...@en...> - 2013-10-17 04:12:14
|
Sandeep, It would be nice if you mentioned the version of XC in your mail. Sort push-down is available from 1.1 onwards. If you do not see the sort getting pushed down in 1.1, please report the detailed definitions of the tables, the query, and the EXPLAIN output.

-- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company
|
|
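As an illustration of the kind of report asked for above — table definition, query, and EXPLAIN output — here is a minimal sketch; the table and column names are hypothetical, and the DISTRIBUTE BY clause is Postgres-XC-specific syntax:

```sql
-- Hypothetical distributed table; each datanode keeps it clustered by timeo.
CREATE TABLE events (
    id    bigint,
    timeo timestamp,
    val   numeric
) DISTRIBUTE BY HASH (id);

-- The aggregated-and-ordered query in question; with sort push-down
-- (XC 1.1+) the ORDER BY should appear under the remote part of the plan.
EXPLAIN VERBOSE
SELECT timeo, count(*)
FROM events
GROUP BY timeo
ORDER BY timeo;
```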
From: Michael P. <mic...@gm...> - 2013-10-17 04:11:20
|
On Thu, Oct 17, 2013 at 12:43 PM, Aris Setyawan <ari...@gm...> wrote:
> Can XC be used with [write-able] foreign-data wrapper? What I mean
> here are push down optimization and data distribution.

XC does not itself support FDW, but I don't see why there would be problems having a Postgres server with postgres_fdw connect to Coordinators for read/write operations, or even to Datanodes for read-only operations, as the communication interface is the same as vanilla. Take care to use at least XC 1.1~ for the latter though.

> I imagine, If yes, we can have a cluster of not just postgresql node.
> But we can have oracle or mysql or redis or unlimited cluster.

Yep. Supporting FDW in XC would be fun as well. Patches welcome.
--
Michael
|
|
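The setup described above — a vanilla Postgres server using postgres_fdw to reach an XC Coordinator — could be sketched roughly as follows; the host, port, database, user, and table names are all hypothetical:

```sql
-- On the plain PostgreSQL server (9.3+), not on an XC node:
CREATE EXTENSION postgres_fdw;

-- The XC Coordinator is addressed like any other Postgres server.
CREATE SERVER xc_coord
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'coord1.example.com', port '5432', dbname 'appdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER xc_coord
    OPTIONS (user 'appuser', password 'secret');

-- A foreign table mirroring a table defined inside the XC cluster.
CREATE FOREIGN TABLE remote_events (
    id  bigint,
    val text
) SERVER xc_coord OPTIONS (table_name 'events');

-- Reads (and, against a Coordinator, writes) go through the FDW.
SELECT count(*) FROM remote_events;
```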
From: Aris S. <ari...@gm...> - 2013-10-17 03:43:49
|
Hi, Can XC be used with [write-able] foreign-data wrapper? What I mean here are push down optimization and data distribution. I imagine, If yes, we can have a cluster of not just postgresql node. But we can have oracle or mysql or redis or unlimited cluster. -Aris |
|
From: Sandeep G. <gup...@gm...> - 2013-10-16 19:39:26
|
Hi,

In another query that requires the result to be aggregated and ordered by a field (let's say timeo), the query planner currently pulls the results and then performs a sort with a hash aggregate.

The tables at the datanodes are clustered by timeo. I was wondering if it is possible for the query planner to push down the ORDER BY clause to the datanodes and then perform a sort-merge aggregate at the coordinator. Surely, that would be a better query plan.

We have tried enable_sort=off etc., but that doesn't work.

Thanks.
Sandeep
|
|
From: Koichi S. <koi...@gm...> - 2013-10-15 02:58:13
|
Sorry I did not respond for a while. Please take a look at my comments inline.

Regards;
---
Koichi Suzuki

2013/10/8 Yehezkel Horowitz <hor...@ch...>

> >> My goal - I have an application that needs SQL DB and must always be
> >> up (I have a backup machine for this purpose).
> > Have you thought about PostgreSQL itself for your solution. Is there any
> > reason you'd need XC? Do you have an amount of data that forces you to
> > use multi-master architecture or perhaps PG itself could handle it?
>
> I need multi-master capability, as clients might connect to both machines
> at the same time; yes - my tables will be replicated.
>
> > Yep, this is doable. If all your data is replicated you would be able to
> > do that. However you need to keep in mind that you will not be able to
> > write new data to node B if node A is not accessible. If your data is
> > replicated and you need to update a table, both nodes need to work.
>
> This is a surprise for me; this wasn't clear in the documentation I read
> nor in some PG-XC presentations I looked at on the internet.
> Isn't this point one of the conditions for High-Availability of a DB -
> allowing work to continue even if one of the machines failed?

Postgres-XC assumes any table may be replicated or distributed, so XC does not have an operation interface that assumes all tables are replicated. It always assumes some tables could be distributed and some replicated. On the other hand, Postgres-XC's most important feature is to maintain cluster-wide data integrity. XC's replication is not only for HA, but also provides scalability by proxying as many statements as possible to a local datanode and increasing parallelism. So, when you issue DML against a replicated table, Postgres-XC tries to propagate it to all the nodes it is defined over. If any node is not available, Postgres-XC determines it cannot maintain cluster-wide data integrity. We provide a couple of means to deal with this:

1. ALTER TABLE to change the table's replication. You can delete any node. Because this change has to go to all the other nodes for cluster-wide data integrity, all the datanodes must be working.

2. Configure slaves for each master. When one of them fails, it can be failed over to its slave. Typically, you can configure the slaves on each other's datanode servers. After failover occurs (you may want to integrate with an automatic failover system such as Pacemaker and Corosync/Heartbeat) and you feel the failed node is no longer needed, you can issue ALTER TABLE to delete the failed node from your cluster, issue DROP NODE as well, and then stop the slave and release its resources.

> > Or if you want B to be still writable, you could update the node
> > information inside it, make it workable alone, and when server A is up
> > again recreate a new XC node from scratch and add it again to the
> > cluster.
>
> What is the correct procedure for doing that? Is there a pgxc_ctl command
> for doing that?

Hope the above helps.

> >> My questions:
> >>
> >> 1. In your docs, you always put the GTM in a dedicated machine.
> >> a. Is this a requirement, just an easy-to-understand topology, or
> >> best practice?
> > GTM consumes a certain amount of CPU and does not need much RAM, while
> > for your nodes you might prioritize the opposite.
> >> b. In case of best practice, what is the expected penalty in case the
> >> GTM is deployed on the same machine with coordinator and datanode?
> > CPU resource consumption and reduction of performance if your queries
> > need some CPU, for example for internal sort operations among other
> > things.
>
> O.K got it; for now I'm trying to make it work, afterwards I'll take care
> of making it work faster.
>
> >> 2. What should I do after Machine A is back to life if I want:
> >> a. Make it act as a new slave?
> >> b. Make it become the master again?
> > There is no principle of master/slave in XC like in Postgres (well you
> > could create a slave node for an individual Coordinator/Datanode).
> > But basically in your configuration machines A and B have the same
> > state. Only GTM is a slave.
>
> Sorry, I meant in the context of GTM - how should I make MachineA a new
> GTM-slave or make it a GTM-master again?

You need to configure gtm_proxy for this purpose. gtm_ctl provides a failover option for the gtm slave to become the new gtm master. It also provides a reconnect option for gtm_proxy to connect to the new gtm master. pgxc_ctl provides these as corresponding commands. Please take a look at http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html
|
|
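The removal sequence outlined above (ALTER TABLE to drop the failed node from each replicated table, then DROP NODE) might look roughly like this when run on a Coordinator; the node and table names are hypothetical:

```sql
-- Stop replicating this table onto the failed datanode.
ALTER TABLE my_replicated_table DELETE NODE (datanode_02);

-- Once no table references the node any more, drop its definition
-- and refresh the Coordinator's connection pool.
DROP NODE datanode_02;
SELECT pgxc_pool_reload();
```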
From: Michael P. <mic...@gm...> - 2013-10-15 01:23:03
|
On Mon, Oct 14, 2013 at 4:05 PM, admin <ad...@75...> wrote:
> Hello, I'm a newbie trying to evaluate pgxc for a project.
> I have to say pgxc is an interesting clustering solution.
> In the evaluation I found some questions; if somebody can answer them I
> will be very glad.
>
> Environment:
> pgxc 1.1, centos 6 32bit in virtualbox
> ServerA:
> gtm
> coord_01
> datanode_01
> ServerB:
> datanode_02
>
> Question:
> a. Is the connection secure between coordinator and datanode, or
> coordinator and gtm?
> I tried: if the datanode only allows password authentication, it is not
> accessible from the coordinator,
> and gtm has no such config like pg_hba.conf
No there is no option to use SSL between those connections due to the
internal connection pooling implementation. This is a TODO item.
> b. What does the warning "Do not have a GTM snapshot available" mean?
This warning means that you got a problem when trying to request a
global snapshot when issuing a query on a node. This was, for example,
easily reproducible by running a query directly on a Datanode without
going through a Coordinator.
> c. If gtm_proxy is started while gtm is not yet up, it exits directly; is that a problem?
> Example if I'm going to write a loader, step cannot be
> gtm_ctl -Z gtm -D gtm start
> gtm_ctl -Z gtm_proxy -D gtm_proxy start
> but must be
> gtm_ctl -Z gtm -D gtm start
> [wait the port use by gtm open]
> gtm_ctl -Z gtm_proxy -D gtm_proxy start
Do you mean if gtm_proxy is started before gtm? Perhaps others
(Pavan?) will correct me, but isn't gtm_proxy supposed to not exit
directly, and instead wait for the port to open?
> d. The check that only one node is primary seems to have a bug, for example:
> select * from pgxc_node;
> "coord_01";"C";5432;"localhost";f;f;1975432854
> "datanode_02";"D";15432;"192.168.8.184";f;f;-1414354208
> "datanode_01";"D";15432;"192.168.8.183";t;t;-1746477557
>
> select pgxc_pool_reload();
> (success)
> alter node datanode_01 with (port=15433);
> ERROR: PGXC node datanode_01: two nodes cannot be primary
> alter node datanode_01 with (primary=true);
> ERROR: PGXC node datanode_01: two nodes cannot be primary
> alter node datanode_01 with (primary=false);
> (success)
Definitely looks like a bug as you are describing it.
> These commands were all run in the same session opened from pgadmin.
> e. I can't set a primary key on a table if a remote node is added; is it not
> supported yet?
> alter table test add primary key (id);
> ERROR: Cannot create index whose evaluation cannot be enforced to remote
> nodes
You need either to make the table test replicated, or hashed using id
as a key in this case. AFAIK pgadmin does not provide support for the
XC-specific query extensions, but you can always run raw SQLs.
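The two options suggested here can be written as plain SQL; the DISTRIBUTE BY clause is Postgres-XC-specific, and the column list is made up for illustration:

```sql
-- Option 1: replicate the table, so the unique constraint can be
-- enforced locally on every node.
CREATE TABLE test (id int PRIMARY KEY, val text)
    DISTRIBUTE BY REPLICATION;

-- Option 2 (alternative to option 1): hash-distribute on the key
-- column itself, so each id lives on exactly one datanode and
-- uniqueness remains enforceable.
CREATE TABLE test (id int PRIMARY KEY, val text)
    DISTRIBUTE BY HASH (id);
```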
> f. I can't modify the value of a sequence; is it not supported yet?
> select setval('public.test_01_id_seq', 123, true);
> ERROR: GTM error, could not obtain sequence value
This should be supported.
> g. I found that sometimes a field name and its type get exchanged,
> like "name with type text" will sometimes turn into "text with type name",
> and type "name" is not valid.
> I have no way to reproduce it yet, but is it a known bug?
Can you provide a test case here?
> h. I found that sometimes a sequence from serial breaks and then I can't
> insert any data,
> select nextval('public.test_01_id_seq');
> ERROR:
> Status: XX00
> I can upload database files if you need.
Not sure I am following you here.
> i. Is there some way to get which nodes are used by a table?
> These commands can manage the nodes of a table:
> alter table tablename add node ...
> alter table tablename delete node ...
> alter table tablename to node ...
> but I don't know which command can list the nodes used by a table.
pgxc_class contains a list of node OIDS to know where the table data
is located referring to the nodes in pgxc_node.
> j. Is this project ready for production use?
Some do.
--
Michael
|
|
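The pgxc_class lookup mentioned for question i can be sketched as a catalog query on a Coordinator; this assumes the 1.x catalog layout, where pgxc_class.nodeoids holds the OIDs of the nodes storing the table:

```sql
-- List the datanodes holding data for a given table,
-- joining pgxc_class's node-OID vector against pgxc_node.
SELECT c.relname, n.node_name
FROM pgxc_class x
JOIN pg_class  c ON c.oid = x.pcrelid
JOIN pgxc_node n ON n.oid = ANY (x.nodeoids)
WHERE c.relname = 'test';
```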
From: admin <ad...@75...> - 2013-10-14 08:02:09
|
Hello, I'm a newbie trying to evaluate pgxc for a project.
I have to say pgxc is an interesting clustering solution.
In the evaluation I found some questions; if somebody can answer them I
will be very glad.
Environment:
pgxc 1.1, centos 6 32bit in virtualbox
ServerA:
gtm
coord_01
datanode_01
ServerB:
datanode_02
Question:
a. Is the connection secure between coordinator and datanode, or
coordinator and gtm?
I tried: if the datanode only allows password authentication, it is not
accessible from the coordinator,
and gtm has no such config like pg_hba.conf
b. What does the warning "Do not have a GTM snapshot available" mean?
c. If gtm_proxy is started while gtm is not yet up, it exits directly; is that a problem?
Example if I'm going to write a loader, step cannot be
gtm_ctl -Z gtm -D gtm start
gtm_ctl -Z gtm_proxy -D gtm_proxy start
but must be
gtm_ctl -Z gtm -D gtm start
[wait the port use by gtm open]
gtm_ctl -Z gtm_proxy -D gtm_proxy start
d. The check that only one node is primary seems to have a bug, for example:
select * from pgxc_node;
"coord_01";"C";5432;"localhost";f;f;1975432854
"datanode_02";"D";15432;"192.168.8.184";f;f;-1414354208
"datanode_01";"D";15432;"192.168.8.183";t;t;-1746477557
select pgxc_pool_reload();
(success)
alter node datanode_01 with (port=15433);
ERROR: PGXC node datanode_01: two nodes cannot be primary
alter node datanode_01 with (primary=true);
ERROR: PGXC node datanode_01: two nodes cannot be primary
alter node datanode_01 with (primary=false);
(success)
select pgxc_pool_reload();
(success)
select * from pgxc_node;
"coord_01";"C";5432;"localhost";f;f;1975432854
"datanode_02";"D";15432;"192.168.8.184";f;f;-1414354208
"datanode_01";"D";15432;"192.168.8.183";f;t;-1746477557
alter node datanode_01 with (primary=true);
ERROR: PGXC node datanode_01: two nodes cannot be primary
These commands were all run in the same session opened from pgadmin.
e. I can't set a primary key on a table if a remote node is added; is it not
supported yet?
alter table test add primary key (id);
ERROR: Cannot create index whose evaluation cannot be enforced to remote
nodes
f. I can't modify the value of a sequence; is it not supported yet?
select setval('public.test_01_id_seq', 123, true);
ERROR: GTM error, could not obtain sequence value
g. I found that sometimes a field name and its type get exchanged,
like "name with type text" will sometimes turn into "text with type name",
and type "name" is not valid.
I have no way to reproduce it yet, but is it a known bug?
h. I found that sometimes a sequence from serial breaks and then I can't
insert any data,
select nextval('public.test_01_id_seq');
ERROR:
Status: XX00
I can upload database files if you need.
i. Is there some way to get which nodes are used by a table?
These commands can manage the nodes of a table:
alter table tablename add node ...
alter table tablename delete node ...
alter table tablename to node ...
but I don't know which command can list the nodes used by a table.
j. Is this project ready for production use?
|
|
From: Yehezkel H. <hor...@ch...> - 2013-10-14 07:17:56
|
2nd try. Can you please answer my questions below?

TIA
Yehezkel Horowitz

-----Original Message-----
From: Yehezkel Horowitz
Sent: Tuesday, October 08, 2013 2:30 PM
To: 'Michael Paquier'; <pos...@li...>
Subject: RE: [Postgres-xc-general] Some questions about postgres-XC

>> My goal - I have an application that needs SQL DB and must always be
>> up (I have a backup machine for this purpose).
> Have you thought about PostgreSQL itself for your solution. Is there any
> reason you'd need XC? Do you have an amount of data that forces you to use
> multi-master architecture or perhaps PG itself could handle it?

I need multi-master capability, as clients might connect to both machines at the same time; yes - my tables will be replicated.

> Yep, this is doable. If all your data is replicated you would be able to
> do that. However you need to keep in mind that you will not be able to
> write new data to node B if node A is not accessible. If your data is
> replicated and you need to update a table, both nodes need to work.

This is a surprise for me; this wasn't clear in the documentation I read nor in some PG-XC presentations I looked at on the internet. Isn't this point one of the conditions for High-Availability of a DB - allowing work to continue even if one of the machines failed?

> Or if you want B to be still writable, you could update the node
> information inside it, make it workable alone, and when server A is up
> again recreate a new XC node from scratch and add it again to the cluster.

What is the correct procedure for doing that? Are there pgxc_ctl commands for doing that?

>> My questions:
>>
>> 1. In your docs, you always put the GTM in a dedicated machine.
>> a. Is this a requirement, just an easy-to-understand topology, or best
>> practice?
> GTM consumes a certain amount of CPU and does not need much RAM, while
> for your nodes you might prioritize the opposite.
>> b. In case of best practice, what is the expected penalty in case the
>> GTM is deployed on the same machine with coordinator and datanode?
> CPU resource consumption and reduction of performance if your queries
> need some CPU, for example for internal sort operations among other things.

O.K got it; for now I'm trying to make it work, afterwards I'll take care of making it work faster.

>> 2. What should I do after Machine A is back to life if I want:
>> a. Make it act as a new slave?
>> b. Make it become the master again?
> There is no principle of master/slave in XC like in Postgres (well you
> could create a slave node for an individual Coordinator/Datanode).
> But basically in your configuration machines A and B have the same state.
> Only GTM is a slave.

Sorry, I meant in the context of GTM - how should I make MachineA a new GTM-slave or make it a GTM-master again? |
|
From: Stefan L. <ar...@er...> - 2013-10-10 05:55:07
|
On 10/5/2013 5:19 PM, Michael Paquier wrote:
> On Sat, Oct 5, 2013 at 9:00 PM, Stefan Lekov <ar...@er...> wrote:
>> Hello, I'm new to the Postgres-XC project. In fact I am still
>> considering if I should install it in order to try it as a
>> replacement of my current database clusters (those are based around
>> MySQL and its binary_log based replication).
> Have you considered PostgreSQL as a potential solution before
> Postgres-XC. Why do you especially need XC?

I have used PostgreSQL in the past, I am using it at the moment (for other projects) and I'd like to continue using it in the future. My current requirements include having a multi-master replicated database cluster. These requirements are related to redundancy and possible scalability. While one PostgreSQL server will cope with any load that I can throw at it for the near future, that might not be the case in a year or two. As for the redundancy part - I am familiar with PostgreSQL's warm standby capabilities, however I am looking for something more robust. Because of the "multi-master" requirement, I am investigating the capabilities of Postgres-XC and pgpool2 to deliver such a system. I could migrate to a single PostgreSQL server, however I am not really keen on solving the replication dilemma on-the-fly when the system is already running with Postgres - I prefer having something that already works as expected right from the start.

>> Before actually starting the installation of postgres-xc I would like
>> to know what is the procedure for restarting nodes. I have already
>> read a few documents/mails regarding restoring or resyncing a failed
>> datanode, however these documents do not answer my simple question:
>> What should be the procedure for rebooting servers? For example I
>> have a kernel update pending (due to security reasons) - I'm
>> installing the new kernel, but I have to reboot the whole machine.
>> Theoretically all nodes (both coordinators and datanodes) are working
>> on different physical servers or VMs. In a perfect scenario I would
>> like to keep the system in production while I am restarting the
>> servers one by one. However I am not sure what would be the effect of
>> rebooting servers one by one.
> If a node is restarted or facing an outage, all the transactions it
> needs to be involved in will simply fail. In the case of Coordinator,
> this has effect only for DDL. For Datanodes, this has effect as well
> for DDL, but also for DML and SELECT if the node is needed for the
> transaction.

There would be no DDL during these operations. I can limit the queries to DML only.

>> For purpose of example let me have four datanodes: A, B, C, D. All
>> servers are synced and are operating as expected. 1) Upgrade A,
>> reboot A 2) INSERT/UPDATE/DELETE queries 3) A boots up and is
>> successfully started 4) INSERT/UPDATE/DELETE queries 5) Upgrade B,
>> reboot B ... As for the "Coordinator" nodes: how are those
>> affected by temporarily stopping and restarting the postgres-xc related
>> services? What should be the load balancer in front of these servers
>> in order to be able to both load-balance and fail over if one of the
>> Coordinators is offline, either due to a failed server or due to
>> rebooting servers?
> DDLs won't work. Applications will use one access point. In this case
> no problems for your application, connect to the other Coordinators to
> execute queries as long as they are not DDLs.

What system, application or method would you recommend for performing the load-balancing/fail-over of connections to the Coordinators?

>> I have no problem with the relatively heavy operation of a full restore
>> of a datanode in the event of a failed server. Such a restoration
>> operation can be properly scheduled and executed, however I am
>> interested in how postgres-xc would react to the simple operation of
>> restarting a server due to whatever reason.
> As mentioned above, transactions that will need it will simply fail.
> You could always failover a slave for the outage period if necessary.

Correct me if I'm wrong: all data (read: databases, schemas, tables, etc.) would be replicated to all datanodes. So before host A goes down, all servers would have the same dataset. This way no transaction should fail due to the missing datanode A. While A has been booting up, several transactions have passed (since such a restart is an operation I can schedule, I'm doing it during a time when we have low to no load on our systems, thus the transaction count is relatively low). My question is how to bring A back to having "the same dataset" as the rest of the datanodes before I can continue with the next host/datanode?

Regards,
Stefan Lekov
|
|
From: Ashutosh B. <ash...@en...> - 2013-10-10 04:50:15
|
On Wed, Oct 9, 2013 at 9:54 PM, Hector M. Jacas <hec...@et...> wrote:
> Hi all,
>
> First I must apologize because obviously I was reading the document
> oriented to the bash version of pgxc_ctl.
>
> In the documentation of the binary version there is no reference to these
> facilities (dropdb/dropuser). This is the one I should have read.
>
> My mistake and my apologies.
>
> I beg your patience and indulgence when, from the user role of this
> great project, I take the liberty to comment on the answer to my post.
>
> I do not share the view of Mr. Ashutosh Bapat when he says "pgxc_ctl is
> not an interface for dropping database or user", reason: "It's just a
> cluster management utility".
>
> I think there is an inconsistency in that statement, because for the same
> reason that dropdb and dropuser commands were not included, createdb and
> createuser would not be valid either.

There is a difference between what is supported as a requirement and what is supported because it fits well in the utility. You may compare pgxc_ctl with pg_ctl, which basically allows controlling the life of a server. pgxc_ctl, being made for XC, has to support the life of a cluster and allows controlling individual servers. On top of that, it allows creating a cluster (which is not required in pg_ctl; initdb does it). This particular functionality needs Createdb and Createuser, so it supports those. But a user should not look to pgxc_ctl for managing individual databases. The server is more than capable of doing it, and that functionality can be accessed through connectors or utilities like create* or drop*.

> You as project developers (or contributors) decide the philosophy with
> which your product works, and I, in my role as user, should be able to
> reconcile my working methods with the philosophy of the tools I have
> selected.
>
> Postgres-XC is a great project because it solves big problems.
>
> pgxc_ctl is another great project because it simplifies the deployment and
> management of Postgres-XC, and if you add shortcuts to frequently used
> commands (and perhaps some security features) this project could become a
> kind of central command for Postgres-XC.
>
> Thank you very much for your answers,
>
> Hector M. Jacas
>
> On 10/09/2013 12:12 AM, Ashutosh Bapat wrote:
> > Hector,
> > AFAIK, pgxc_ctl is not an interface for dropping database or user. It's
> > just a cluster management utility. You should use the corresponding
> > binaries or SQL commands for that purpose.
> >
> > On Tue, Oct 8, 2013 at 9:32 PM, Hector M. Jacas <hec...@et...> wrote:
> >> Hi all,
> >>
> >> Among the features described in:
> >> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/manual.txt
> >> are deleting databases (Dropdb) and users (Dropuser), and when I try to
> >> make use of these commands pgxc_ctl answers: command not found
> >>
> >> PGXC Createdb testdb
> >> Selected coord2.
> >> PGXC Dropdb testdb
> >> sh: Dropdb: command not found
> >> PGXC Createuser usertest1
> >> Selected coord1.
> >> PGXC Dropuser usertest1
> >> sh: Dropuser: command not found
> >> PGXC
> >>
> >> I carefully reviewed the source code and found that in the folder
> >> postgres-xc/contrib/pgxc_ctl there is a file (do_command.c) which
> >> references and executes Createdb (line 2339) and Createuser (line 2369).
> >>
> >> In this file there is no reference whatsoever to Dropdb or Dropuser.
> >>
> >> There is another file (in the same directory) called pgxc_ctl.bash, in
> >> which the corresponding commands for Createdb, Dropdb, Createuser and
> >> Dropuser are referenced and run.
> >>
> >> I do not remember reading, during pgxc compilation and deployment (or
> >> pgxc_ctl in the contrib area), anything regarding how to handle this
> >> situation.
> >>
> >> How to resolve this issue?
> >>
> >> Does pgxc_ctl in its binary version lack the Dropdb and Dropuser
> >> commands? Must I choose between the binary version and the bash version?
> >> What would be the impact of this change?
> >>
> >> Can anyone guide me please
> >>
> >> Thanks in advance,
> >>
> >> Hector M. Jacas

-- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
|
From: Koichi S. <koi...@gm...> - 2013-10-10 04:02:57
|
Please do not worry about it. It is more than happy to hear any requirements/good to have things. Pgxc_ctl is not a complicated product and you can submit patches. Regards; --- Koichi Suzuki 2013/10/10 Hector M. Jacas <hec...@et...> > Hi all, > > First I must apologize because obviously I was reading the document > oriented to bash version of pgxc_ctl. > > In the documentation of the binary version there is no reference to these > facilities (dropdb/dropuser). This is the one I should have read . > > My mistake and my apologies . > > I beg your patience and condescension when from the user role of this > great project I take the liberty to comment the answer to my post . > > I do not share the view of Mr. Ashutosh Bapat when he says " is not an > interface pgxc_ctl for dropping database or user. " reason: "It's just a > cluster management utility " > > I think there is an inconsistency in that statement because the same > reason for not including dropdb and dropuser commands are perfectly valid > createdb and createuser . > > You as project developers ( or contribution ) decide the philosophy with > which your product works and I in my role as user I should be able to > reconcile my working methods with the philosophy of the tools I have > selected. > > POSTGRESXC is a great project because it solves big problems. > > PGXC_CTL is another great project because it simplifies the deployment and > management of postgresxc and if you add shortcuts to frequently used > commands (and perhaps, some security features) this project could become a > kind of Central Command for POSTGRESXC . > > Thank you very much for your answers , > > Hector M. Jacas > > > > > On 10/09/2013 12:12 AM, Ashutosh Bapat wrote: > > Hector, > AFAIK, pgxc_ctl is not an interface for dropping database or user. It's > just a cluster management utility. You should use corresponding binaries or > SQL commands for that purpose. > > > On Tue, Oct 8, 2013 at 9:32 PM, Hector M. 
Jacas <hec...@et...>wrote: > >> >> Hi all, >> >> Among the features described in: >> https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/manual.txtis deleting the databases (Dropdb) and users (Dropuser) and when I try make >> use of these commands pgxc_ctl answers: command not found >> >> PGXC Createdb testdb >> Selected coord2. >> PGXC Dropdb testdb >> sh: Dropdb: command not found >> PGXC Createuser usertest1 >> Selected coord1. >> PGXC Dropuser usertest1 >> sh: Dropuser: command not found >> PGXC >> >> Carefully review the source code and found that in the folder: >> postgres-xc/contrib/pgxc_ctl , there is a file (do_command.c) in which >> reference is made and performed the execution of Createdb (line 2339) and >> Createuser (line 2369). >> >> In this file there is no reference whatsoever to Dropdb or Dropuser . >> >> There is another file (in the same directory) called: pgxc_ctl.bash, in >> which reference is made and run the corresponding command to Createdb, >> Dropdb, Createuser and Dropuser. >> >> Do not remember reading during pgxc compliacion and deployment (or >> pgxc_ctl in the area of contributions ) anything regarding how to handle >> this situation. >> >> How to resolve this issue? >> >> The pgxc_ctl in its binary version lacks Dropdb and Dropuser commands? >> I must choose between the binary version and the version bash? What would >> be the impact of this change ? >> >> Can anyone guide me please >> >> Thanks in advance, >> >> Hector M. Jacas >> >> --- >> This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE >> running at host imx3.etecsa.cu >> Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com >> > >> >> >> ------------------------------------------------------------------------------ >> October Webinars: Code for Performance >> Free Intel webinars can help you accelerate application performance. 
>> Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most >> from >> the latest Intel processors and coprocessors. See abstracts and register > >> >> http://pubads.g.doubleclick.net/gampad/clk?id=60134071&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-general mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >> >> > > > -- > Best Wishes, > Ashutosh Bapat > EnterpriseDB Corporation > The Postgres Database Company > > > --- > This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE running at host imx2.etecsa.cu > > Visit our web-site: <http://www.kaspersky.com> <http://www.kaspersky.com>, <http://www.viruslist.com> <http://www.viruslist.com> > > > > --- > This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE > running at host imx3.etecsa.cu > Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134071&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > |
|
From: Hector M. J. <hec...@et...> - 2013-10-09 16:25:04
|
--- This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE running at host imx3.etecsa.cu Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> |
|
From: Javier H. <jhe...@ce...> - 2013-10-09 11:28:26
|
Hi Ashutosh,
The current code (v1.1) is:
        case T_ArrayRef:
            /*
             * When multiple values of an array are updated at once
             * FQS planner cannot yet handle SQL representation correctly.
             * So disable FQS in this case and let standard planner manage it.
             */
        case T_FieldStore:
            /*
             * PostgreSQL deparsing logic does not handle the FieldStore
             * for more than one field (see processIndirection()). So, let's
             * handle it through standard planner, where whole row will be
             * constructed.
             */
        case T_SetToDefault:
            /*
             * PGXCTODO: we should actually check whether the default value to
             * be substituted is shippable to the Datanode. Some cases like
             * nextval() of a sequence can not be shipped to the Datanode, hence
             * for now default values can not be shipped to the Datanodes
             */
            pgxc_set_shippability_reason(sc_context, SS_UNSUPPORTED_EXPR);
            pgxc_set_exprtype_shippability(exprType(node), sc_context);
            break;
Sorry, next time I will send a patch.
On 09/10/13 13:24, Ashutosh Bapat wrote:
> Hi Javier,
> What's your change? Can you please provide a patch?
>
>
> On Wed, Oct 9, 2013 at 4:53 PM, Javier Hernandez <jhe...@ce...
> <mailto:jhe...@ce...>> wrote:
>
> Hello,
>
> I am not sure if this is the right mailing list for sending
> Feature request, if not, sorry for the inconvenience.
>
> I know you are working on improvements to the planner (Feature
> Request #95). One important for my project is the ability to use
> FQS for queries that reference elements in an array column. One
> example:
>
>     CREATE TABLE TestArray (id SERIAL, flux FLOAT4[], error FLOAT[]);
>
> SELECT flux, error FROM TestArray WHERE id = 12; -- Planner
> uses FQS
> SELECT flux[1], error[1] FROM TestArray WHERE id = 12; --
> Planner uses Remote query and completes in coordinator
>
> I am beginning to dig in the code and I think I have a
> temporary solution:
>
>
> src/backend/optimizer/util/pgxcship.c:709
>
>         case T_ArrayRef:
>             /*
>              * When multiple values of an array are updated at once
>              * FQS planner cannot yet handle SQL representation correctly.
>              * So disable FQS in this case and let standard planner manage it.
>              */
>             if (sc_context->sc_query != NULL &&
>                 sc_context->sc_query->commandType == CMD_SELECT) {
>                 pgxc_set_exprtype_shippability(exprType(node), sc_context);
>             } else {
>                 pgxc_set_shippability_reason(sc_context, SS_UNSUPPORTED_EXPR);
>                 pgxc_set_exprtype_shippability(exprType(node), sc_context);
>             }
>             break;
>
>
> The original code is always marking the query as
> SS_UNSUPPORTED_EXPR, but according to the existing comment I think
> my change is safe for Select. Isn't it?
>
>
> Thank you very much,
>
> --
> Javier Hernández
> Ingeniero de Bases de Datos
> Centro de Estudios de Fisica del Cosmos de Aragon (ceFca)
> http://www.cefca.es
> Plza San Juan Nº 1, Planta 2ª
> E-44001 Teruel (Spain)
> Phone: +34 978 221266 Ext.1105
> Fax: +34 978 611801
>
>
> ------------------------------------------------------------------------------
> October Webinars: Code for Performance
> Free Intel webinars can help you accelerate application performance.
> Explore tips for MPI, OpenMP, advanced profiling, and more. Get
> the most from
> the latest Intel processors and coprocessors. See abstracts and
> register >
> http://pubads.g.doubleclick.net/gampad/clk?id=60134071&iu=/4140/ostg.clktrk
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> <mailto:Pos...@li...>
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>
>
>
>
> --
> Best Wishes,
> Ashutosh Bapat
> EnterpriseDB Corporation
> The Postgres Database Company
|
|
From: Ashutosh B. <ash...@en...> - 2013-10-09 11:24:12
|
Hi Javier,
What's your change? Can you please provide a patch?
On Wed, Oct 9, 2013 at 4:53 PM, Javier Hernandez <jhe...@ce...>wrote:
> Hello,
>
> I am not sure if this is the right mailing list for sending Feature
> request, if not, sorry for the inconvenience.
>
> I know you are working on improvements to the planner (Feature Request
> #95). One important for my project is the ability to use FQS for queries
> that reference elements in an array column. One example:
>
> CREATE TABLE TestArray (id SERIAL, flux FLOAT4[], error FLOAT[]);
>
> SELECT flux, error FROM TestArray WHERE id = 12; -- Planner uses FQS
> SELECT flux[1], error[1] FROM TestArray WHERE id = 12; -- Planner uses
> Remote query and completes in coordinator
>
> I am beginning to dig in the code and I think I have a temporary
> solution:
>
>
> src/backend/optimizer/util/pgxcship.c:709
>
>         case T_ArrayRef:
>             /*
>              * When multiple values of an array are updated at once
>              * FQS planner cannot yet handle SQL representation correctly.
>              * So disable FQS in this case and let standard planner manage it.
>              */
>             if (sc_context->sc_query != NULL &&
>                 sc_context->sc_query->commandType == CMD_SELECT) {
>                 pgxc_set_exprtype_shippability(exprType(node), sc_context);
>             } else {
>                 pgxc_set_shippability_reason(sc_context, SS_UNSUPPORTED_EXPR);
>                 pgxc_set_exprtype_shippability(exprType(node), sc_context);
>             }
>             break;
>
>
> The original code is always marking the query as SS_UNSUPPORTED_EXPR,
> but according to the existing comment I think my change is safe for Select.
> Isn't it?
>
>
> Thank you very much,
>
> --
> Javier Hernández
> Ingeniero de Bases de Datos
> Centro de Estudios de Fisica del Cosmos de Aragon (ceFca) http://www.cefca.es
> Plza San Juan Nº 1, Planta 2ª
> E-44001 Teruel (Spain)
> Phone: +34 978 221266 Ext.1105
> Fax: +34 978 611801
>
>
>
> ------------------------------------------------------------------------------
> October Webinars: Code for Performance
> Free Intel webinars can help you accelerate application performance.
> Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most
> from
> the latest Intel processors and coprocessors. See abstracts and register >
> http://pubads.g.doubleclick.net/gampad/clk?id=60134071&iu=/4140/ostg.clktrk
> _______________________________________________
> Postgres-xc-general mailing list
> Pos...@li...
> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general
>
>
--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
|
|
From: Javier H. <jhe...@ce...> - 2013-10-09 11:20:31
|
Hello,
I am not sure if this is the right mailing list for sending Feature
request, if not, sorry for the inconvenience.
I know you are working on improvements to the planner (Feature
Request #95). One feature important for my project is the ability to use FQS
for queries that reference elements in an array column. One example:
CREATE TABLE TestArray (id SERIAL, flux FLOAT4[], error FLOAT[]);
SELECT flux, error FROM TestArray WHERE id = 12; -- Planner uses FQS
SELECT flux[1], error[1] FROM TestArray WHERE id = 12; -- Planner
uses Remote query and completes in coordinator
I am beginning to dig in the code and I think I have a temporary
solution:
src/backend/optimizer/util/pgxcship.c:709
        case T_ArrayRef:
            /*
             * When multiple values of an array are updated at once
             * FQS planner cannot yet handle SQL representation correctly.
             * So disable FQS in this case and let standard planner manage it.
             */
            if (sc_context->sc_query != NULL &&
                sc_context->sc_query->commandType == CMD_SELECT) {
                pgxc_set_exprtype_shippability(exprType(node), sc_context);
            } else {
                pgxc_set_shippability_reason(sc_context, SS_UNSUPPORTED_EXPR);
                pgxc_set_exprtype_shippability(exprType(node), sc_context);
            }
            break;
    The original code always marks the query as SS_UNSUPPORTED_EXPR,
but according to the existing comment I think my change is safe for
SELECT queries. Isn't it?
Thank you very much,
--
Javier Hernández
Ingeniero de Bases de Datos
Centro de Estudios de Fisica del Cosmos de Aragon (ceFca)
http://www.cefca.es
Plza San Juan Nº 1, Planta 2ª
E-44001 Teruel (Spain)
Phone: +34 978 221266 Ext.1105
Fax: +34 978 611801
|
|
From: 鈴木 幸市 <ko...@in...> - 2013-10-09 05:32:17
|
Yeah, but it does provide some command shortcuts such as Createdb, Createuser and Psql, and yes, the bash version used to have Dropdb and Dropuser, although it did not run clean_connection, it was the user's responsibility. Maybe I can create a separate pgxc_ctl repo to accommodate such requirements. So pgxc_ctl will be a kind of separate project, just like many PG contrib modules. Of course, you can issue separate dropdb and dropuser with appropriate -h and -p options. Dropdb and Dropuser are just shortcuts for this. I think Psql is useful because it is used very often. I don't like to type -h/-p each time; rather, I'd like to specify the coordinator name if needed. Any more inputs? --- Koichi Suzuki On 2013/10/09, at 13:12, Ashutosh Bapat <ash...@en...<mailto:ash...@en...>> wrote: Hector, AFAIK, pgxc_ctl is not an interface for dropping database or user. It's just a cluster management utility. You should use corresponding binaries or SQL commands for that purpose. On Tue, Oct 8, 2013 at 9:32 PM, Hector M. Jacas <hec...@et...<mailto:hec...@et...>> wrote: Hi all, Among the features described in: https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/manual.txt is deleting the databases (Dropdb) and users (Dropuser) and when I try make use of these commands pgxc_ctl answers: command not found PGXC Createdb testdb Selected coord2. PGXC Dropdb testdb sh: Dropdb: command not found PGXC Createuser usertest1 Selected coord1. PGXC Dropuser usertest1 sh: Dropuser: command not found PGXC Carefully review the source code and found that in the folder: postgres-xc/contrib/pgxc_ctl , there is a file (do_command.c) in which reference is made and performed the execution of Createdb (line 2339) and Createuser (line 2369). In this file there is no reference whatsoever to Dropdb or Dropuser . There is another file (in the same directory) called: pgxc_ctl.bash, in which reference is made and run the corresponding command to Createdb, Dropdb, Createuser and Dropuser. 
Do not remember reading during pgxc compliacion and deployment (or pgxc_ctl in the area of contributions ) anything regarding how to handle this situation. How to resolve this issue? The pgxc_ctl in its binary version lacks Dropdb and Dropuser commands? I must choose between the binary version and the version bash? What would be the impact of this change ? Can anyone guide me please Thanks in advance, Hector M. Jacas --- This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE running at host imx3.etecsa.cu<http://imx3.etecsa.cu/> Visit our web-site: <http://www.kaspersky.com<http://www.kaspersky.com/>>, <http://www.viruslist.com<http://www.viruslist.com/>> ------------------------------------------------------------------------------ October Webinars: Code for Performance Free Intel webinars can help you accelerate application performance. Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most from the latest Intel processors and coprocessors. See abstracts and register > http://pubads.g.doubleclick.net/gampad/clk?id=60134071&iu=/4140/ostg.clktrk _______________________________________________ Postgres-xc-general mailing list Pos...@li...<mailto:Pos...@li...> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company ------------------------------------------------------------------------------ October Webinars: Code for Performance Free Intel webinars can help you accelerate application performance. Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most from the latest Intel processors and coprocessors. 
See abstracts and register > http://pubads.g.doubleclick.net/gampad/clk?id=60134071&iu=/4140/ostg.clktrk_______________________________________________ Postgres-xc-general mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-general |
|
From: Ashutosh B. <ash...@en...> - 2013-10-09 04:12:53
|
Hector, AFAIK, pgxc_ctl is not an interface for dropping database or user. It's just a cluster management utility. You should use corresponding binaries or SQL commands for that purpose. On Tue, Oct 8, 2013 at 9:32 PM, Hector M. Jacas <hec...@et...>wrote: > > Hi all, > > Among the features described in: https://github.com/koichi-szk/** > PGXC-Tools/blob/master/pgxc_**ctl/manual.txt<https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/manual.txt>is deleting the databases (Dropdb) and users (Dropuser) and when I try make > use of these commands pgxc_ctl answers: command not found > > PGXC Createdb testdb > Selected coord2. > PGXC Dropdb testdb > sh: Dropdb: command not found > PGXC Createuser usertest1 > Selected coord1. > PGXC Dropuser usertest1 > sh: Dropuser: command not found > PGXC > > Carefully review the source code and found that in the folder: > postgres-xc/contrib/pgxc_ctl , there is a file (do_command.c) in which > reference is made and performed the execution of Createdb (line 2339) and > Createuser (line 2369). > > In this file there is no reference whatsoever to Dropdb or Dropuser . > > There is another file (in the same directory) called: pgxc_ctl.bash, in > which reference is made and run the corresponding command to Createdb, > Dropdb, Createuser and Dropuser. > > Do not remember reading during pgxc compliacion and deployment (or > pgxc_ctl in the area of contributions ) anything regarding how to handle > this situation. > > How to resolve this issue? > > The pgxc_ctl in its binary version lacks Dropdb and Dropuser commands? > I must choose between the binary version and the version bash? What would > be the impact of this change ? > > Can anyone guide me please > > Thanks in advance, > > Hector M. 
Jacas > > --- > This message was processed by Kaspersky Mail Gateway 5.6.28/RELEASE > running at host imx3.etecsa.cu > Visit our web-site: <http://www.kaspersky.com>, <http://www.viruslist.com> > > > ------------------------------------------------------------------------------ > October Webinars: Code for Performance > Free Intel webinars can help you accelerate application performance. > Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most > from > the latest Intel processors and coprocessors. See abstracts and register > > http://pubads.g.doubleclick.net/gampad/clk?id=60134071&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-general mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-general > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
|
From: Hector M. J. <hec...@et...> - 2013-10-08 17:33:13
|
Hi all, Among the features described in: https://github.com/koichi-szk/PGXC-Tools/blob/master/pgxc_ctl/manual.txt is deleting the databases (Dropdb) and users (Dropuser) and when I try make use of these commands pgxc_ctl answers: command not found PGXC Createdb testdb Selected coord2. PGXC Dropdb testdb sh: Dropdb: command not found PGXC Createuser usertest1 Selected coord1. PGXC Dropuser usertest1 sh: Dropuser: command not found PGXC Carefully review the source code and found that in the folder: postgres-xc/contrib/pgxc_ctl , there is a file (do_command.c) in which reference is made and performed the execution of Createdb (line 2339) and Createuser (line 2369). In this file there is no reference whatsoever to Dropdb or Dropuser . There is another file (in the same directory) called: pgxc_ctl.bash, in which reference is made and run the corresponding command to Createdb, Dropdb, Createuser and Dropuser. Do not remember reading during pgxc compliacion and deployment (or pgxc_ctl in the area of contributions ) anything regarding how to handle this situation. How to resolve this issue? The pgxc_ctl in its binary version lacks Dropdb and Dropuser commands? I must choose between the binary version and the version bash? What would be the impact of this change ? Can anyone guide me please Thanks in advance, Hector M. Jacas |
|
From: Yehezkel H. <hor...@ch...> - 2013-10-08 11:29:55
|
>> My goal - I have an application that needs SQL DB and must always be >> up (I have a backup machine for this purpose). >Have you thought about PostgreSQL itself for your solution? Is there any reason you'd need XC? Do you have an amount of data that >forces you to use multi-master architecture or perhaps PG itself could handle it? I need multi-master capability, as clients might connect to both machines at the same time; yes - my tables will be replicated. >Yep, this is doable. If all your data is replicated you would be able to do that. However you need to keep in mind that you will not be able to write new data to node B if node A is not accessible. If your data is replicated and you need to update a table, both nodes need to work. This is a surprise for me; this wasn't clear in the documentation I read nor in some PG-XC presentations I looked at on the internet. Isn't this point one of the conditions for High-Availability of a DB - allowing work to continue even if one of the machines fails? >Or if you want B to be still writable, you could update the node information inside it, make it workable alone, and when server A is up again recreate a new XC node from scratch and add it again to the cluster. What is the correct procedure for doing that? Are there pgxc_ctl commands for doing that? >> My questions: >> >> 1. In your docs, you always put the GTM in a dedicated machine. >> a. Is this a requirement, just an easy to understand topology or best >> practice? >GTM consumes a certain amount of CPU and does not need much RAM, while for your nodes you might prioritize the opposite. >> b. In case of best practice, what is the expected penalty in case the >> GTM is deployed on the same machine with coordinator and datanode? >CPU resource consumption and reduction of performance if your queries need some CPU with for example internal sort operations among other things. O.K., got it; for now I'm trying to make it work, afterwards I'll take care of making it work faster. 
>> 2. What should I do after Machine A is back to life if I want: >> a. Make it act as a new slave? >> b. Make it become the master again? >There is no principle of master/slave in XC like in Postgres (well you could create a slave node for an individual Coordinator/Datanode). >But basically in your configuration machine A and B have the same state. >Only GTM is a slave. Sorry, I meant in the context of GTM - how should I make MachineA a new GTM-slave or make it a GTM-master again? |
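A rough outline of the GTM failover/recovery cycle in pgxc_ctl terms (hedged sketch: `failover gtm` is a documented pgxc_ctl command, but verify the exact arguments for re-adding a slave against the pgxc_ctl manual, section F.32 of the 1.1 docs):

```
PGXC$ failover gtm          # promote the GTM slave (machine B) to GTM master
                            # ... machine A comes back online ...
PGXC$ add gtm slave ...     # re-register machine A as the new GTM slave
                            # (argument list per the manual; omitted here)
```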
|
From: Ashutosh B. <ash...@en...> - 2013-10-08 04:04:25
|
The changes have to be in pgxc_make_modifytable() to construct a shippable DML out of the scan plan and the modification plan. Changes will be needed in ExecUpdate/Delete/Insert for execution of this shippable DML. Also, there will be changes needed in pgxc_shippability_walker() to improve FQS technique for the same. On Mon, Oct 7, 2013 at 9:10 PM, Sandeep Gupta <gup...@gm...>wrote: > Hi Ashutosh, > > Thanks for the note. I cannot commit right away. However, whenever you > have some time can you mention relevant portions in the codebase where > changes have to be made. I have a general understanding of the execution > engine. I will take a look see if it feasible for me. > > Thanks. > Sandeep > > > > On Sun, Oct 6, 2013 at 11:53 PM, Ashutosh Bapat < > ash...@en...> wrote: > >> >> >> >> On Sat, Oct 5, 2013 at 7:44 PM, Sandeep Gupta <gup...@gm...>wrote: >> >>> Thanks Michael. I understand. The only issue is that we have an update >>> query as >>> >>> update T set T.a = -1 from A where A.x = T.x >>> >>> >>> Both A and T and distributed by x column. The problem is that >>> coordinator first does the join and then >>> calls update several times at each datanode. This is turning out to be >>> too slow. Would have >>> been better if the entire query was shipped to the datanodes. >>> >>> >> Right now there is no way to ship a DML with more than one relation >> involved there. But that's something, I have been thinking about. If you >> have developer resources and can produce a patch. I can help. >> >> >>> Thanks. >>> Sandeep >>> >>> >>> >>> On Sat, Oct 5, 2013 at 6:27 AM, Michael Paquier < >>> mic...@gm...> wrote: >>> >>>> On Sat, Oct 5, 2013 at 2:58 AM, Sandeep Gupta <gup...@gm...> >>>> wrote: >>>> > I understand that the datanodes are read only and that >>>> updates/insert can >>>> > happen at coordinator. >>>> You got it. >>>> >>>> > Also, it does not allow modification of column over which the records >>>> are distributed. 
>>>> Hum no, 1.1 allows ALTER TABLE that you can use to change the >>>> distribution type of a table. >>>> >>>> > However, in case I know what I am doing, it there anyway possible to >>>> modify >>>> > the values directly at datanodes. >>>> > The modifications are not over column over which distribution happens. >>>> If you mean by connecting directly to the Datanodes, no. You would >>>> break data consistency if table is replicated by the way by doing >>>> that. Let the Coordinator planner do the job and choose the remote >>>> nodes for you. >>>> >>>> There have been discussion to merge Coordinators and Datanodes >>>> together though. This would allow what you say, with a simpler cluster >>>> design. >>>> -- >>>> Michael >>>> >>> >>> >>> >>> ------------------------------------------------------------------------------ >>> October Webinars: Code for Performance >>> Free Intel webinars can help you accelerate application performance. >>> Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most >>> from >>> the latest Intel processors and coprocessors. See abstracts and register >>> > >>> >>> http://pubads.g.doubleclick.net/gampad/clk?id=60134791&iu=/4140/ostg.clktrk >>> _______________________________________________ >>> Postgres-xc-general mailing list >>> Pos...@li... >>> https://lists.sourceforge.net/lists/listinfo/postgres-xc-general >>> >>> >> >> >> -- >> Best Wishes, >> Ashutosh Bapat >> EnterpriseDB Corporation >> The Postgres Database Company >> > > -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company |
|
From: 鈴木 幸市 <ko...@in...> - 2013-10-08 01:08:54
|
On 2013/10/07, at 23:27, Julian <jul...@gm...> wrote:

I cannot just issue "add coordinator master"; it always asks me for more. Here are my steps:

# Deploy binaries
PGXC$ deploy all
# Initialize everything: one GTM master, three GTM proxies, three coordinators, three datanodes
PGXC$ init all

These commands are for the initial setup. I think you've already done that.

.....
PGXC$ Createdb pgxc
.....
PGXC$ Psql
pgxc=# select * from pgxc_node;
node_name | node_type | node_port | node_host | nodeis_primary | nodeis_preferred | node_id
.....
pgxc=# create table user_info_hash(id int primary key, firstname text, lastname text, info text) distribute by hash(id);
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "user_info_hash_pkey" for table "user_info_hash"
CREATE TABLE
....
PGXC$ deploy node4
PGXC$ add datanode master datanode4
ERROR: please specify the host for the datanode masetr
PGXC$ add datanode master datanode4 node4 20008 /opt/pgxc/nodes/dn_master
ERROR: sorry found some inconflicts in datanode master configuration.

The "add" command syntax is documented at http://postgres-xc.sourceforge.net/docs/1_1/pgxc-ctl.html. Please take a look at section F.32.12.
--------------------------------------------------------------
2. I also found that when the dump file contains a table such as

CREATE TABLE user_info_hash (
    id integer NOT NULL,
    firstname text,
    lastname text,
    info text
)
DISTRIBUTE BY HASH (id)
TO NODE (datanode1,datanode2,datanode3);

adding a new coordinator always fails, but with this

CREATE TABLE user_id (
    id integer NOT NULL
)
DISTRIBUTE BY HASH (id)
TO NODE (dn2,dn1,dn3);

adding the new coordinator to the cluster succeeds.

No. To add a coordinator, you must give the correct resources in the "add" command. The error message above indicates a conflict between the resources you specified and those of another node. You must specify a unique port number and work directory within the server.

To add a coordinator, you should issue the following command:

PGXC$ add coordinator master name host port pooler dir

When this command succeeds, your pgxc_ctl.conf will have an additional line for the new node. Please take a look at it. If the command fails and you would like to restore the previous configuration, you can edit pgxc_ctl.conf yourself. At this point, pgxc_ctl is still "primitive".

Regards;
---
Koichi Suzuki

On 7/10/2013 at 18:18, Koichi Suzuki wrote:

I found that your configuration, in terms of owner and user, is the same as my demonstration scenario. Then what you should do is:

1. Log in to your operating system as pgxcUser,
2. Initialize the Postgres-XC cluster with the "init all" command of pgxc_ctl.

Then the database user $pgxcOwner should have been created and you don't have to worry about it. When you add a coordinator, simply issue the "add coordinator master" command. It should work. If you have any other issues, please let me know.

My pgxc_ctl demonstration scenario can be found at https://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=PGOpen2013_Postgres_Open_2013

The configuration is with full slaves, which you can disable. The demo adds a datanode, not a coordinator. I believe there are no significant differences.

Regards;
---
Koichi Suzuki

2013/10/7 Koichi Suzuki <koi...@gm...>:

I understood the situation. Could you let me know whether pgxc is the operating system user name you are using? If it is not, I will run the test in this situation and see what happens. It may be a bug (not in the Postgres-XC core, but in pgxc_ctl). If pgxc is the operating system user you are using, then there could be another cause. Also, please let me know your setting of pgxcUser in pgxc_ctl.conf.

Best;
---
Koichi Suzuki

2013/10/7 Julian <jul...@gm...>:

Sorry, I found that I have already configured pgxcOwner=pgxc in pgxc_ctl.conf.

And I forgot to mention there is something strange about adding a new coordinator. I can't add coord4 with

"add coordinator master coord4 node4 20004 20010 /opt/pgxc/nodes/coord"

It shows me

ERROR: sorry found some inconflicts in coordinator master configuration.

unless I first add coord4 to pgxc_ctl.conf and then remove it in the PGXC shell:

PGXC$ remove coordinator master coord4
ERROR: PGXC Node coord4: object not defined
ERROR: PGXC Node coord4: object not defined
ERROR: PGXC Node coord4: object not defined

Is that normal? The attached file is my pgxc config.

Julian
Sent with Sparrow

On Monday, October 7, 2013 at 5:31 PM, Koichi Suzuki wrote:

Okay. I'm wondering why the internal pg_restore complains that pgxc already exists. If you did not specify pgxc in pgxc_ctl and created it manually, the newly-created node should not contain the role pgxc, though we would get a different message. Could you let me know if you have any idea?

Best;
---
Koichi Suzuki

2013/10/7 Julian <jul...@gm...>:

I did not specify pgxc as a database owner, but I created the user pgxc to do these jobs.

--
Julian
Sent with Sparrow

On Monday, October 7, 2013 at 4:44 PM, Koichi Suzuki wrote:

Did you specify pgxc as a database owner?
|
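[Editor's note] Pulling together Koichi's answer above: a successful add needs all five resources on the command line, with a port, pooler port, and working directory that do not collide with any node already defined on that server. An illustrative invocation (the ports and path below are made-up examples, not values verified against the thread's cluster):

```
# Syntax: add coordinator master <name> <host> <port> <pooler_port> <work_dir>
# Each port and the work directory must be unique on the target server.
PGXC$ add coordinator master coord4 node4 20005 20011 /opt/pgxc/nodes/coord_master4
```

On success, pgxc_ctl appends the matching lines to pgxc_ctl.conf; on failure, the previous configuration can be restored by editing pgxc_ctl.conf by hand.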
|
From: Sandeep G. <gup...@gm...> - 2013-10-07 19:35:28
|
Hi,

Sorry, I was using a version of pgxc that is a good 4 months old. The latest seems to be working fine and all those issues no longer appear. Sorry for the trouble.

-Sandeep

On Mon, Oct 7, 2013 at 2:00 PM, Sandeep Gupta <gup...@gm...> wrote:
> Hi,
>
> I dug a little deeper into this case. Something is amiss and I was
> wondering if I could get some help. The problem occurs when I launch
> multiple clients simultaneously, with each client performing a COPY on
> the same table.
>
> First, what looks off is that gtm_proxy shows very high activity.
> The ps command shows 428% CPU and 17% memory on a 64 GB RAM machine:
>
> 6795 sandeep 20 0 8468m 8.1g 928 S 428 17.1 39:00.17 gtm_proxy
>
> The memory usage jumps to 40%, which I think is very high.
>
> Second, the individual postgres processes (both coordinator and datanodes)
> show no CPU activity. Their CPU usage is 0.
>
> Third, I get a warning, which by itself is fine, but I feel it makes the
> system unstable:
>
> WARNING: worker took too long to start; canceled
> WARNING: worker took too long to start; canceled
> FATAL: Can not register Datanode on GTM
>
> In summary, something is amiss. I would have expected the COPY command
> to work fine because, looking at the code, it doesn't seem to use the
> GTM heavily. Any fixes or ideas would be greatly welcomed.
>
> -Sandeep
>
> On Mon, Oct 7, 2013 at 11:51 AM, Sandeep Gupta <gup...@gm...> wrote:
>> Hi,
>>
>> I have a short query. In my setup I have one GTM, one gtm_proxy, one
>> coordinator, and multiple datanodes.
>>
>> It seems that for COPY the gtm_proxy is a bottleneck. Is there any way I
>> can have multiple gtm_proxies? If so, how can I use them in my setup?
>> Just point me to the documentation.
>>
>> Thanks.
>> Sandeep
|
|
From: Sandeep G. <gup...@gm...> - 2013-10-07 18:00:35
|
Hi,

I dug a little deeper into this case. Something is amiss and I was wondering if I could get some help. The problem occurs when I launch multiple clients simultaneously, with each client performing a COPY on the same table.

First, what looks off is that gtm_proxy shows very high activity. The ps command shows 428% CPU and 17% memory on a 64 GB RAM machine:

6795 sandeep 20 0 8468m 8.1g 928 S 428 17.1 39:00.17 gtm_proxy

The memory usage jumps to 40%, which I think is very high.

Second, the individual postgres processes (both coordinator and datanodes) show no CPU activity. Their CPU usage is 0.

Third, I get a warning, which by itself is fine, but I feel it makes the system unstable:

WARNING: worker took too long to start; canceled
WARNING: worker took too long to start; canceled
FATAL: Can not register Datanode on GTM

In summary, something is amiss. I would have expected the COPY command to work fine because, looking at the code, it doesn't seem to use the GTM heavily. Any fixes or ideas would be greatly welcomed.

-Sandeep

On Mon, Oct 7, 2013 at 11:51 AM, Sandeep Gupta <gup...@gm...> wrote:
> Hi,
>
> I have a short query. In my setup I have one GTM, one gtm_proxy, one
> coordinator, and multiple datanodes.
>
> It seems that for COPY the gtm_proxy is a bottleneck. Is there any way I
> can have multiple gtm_proxies? If so, how can I use them in my setup?
> Just point me to the documentation.
>
> Thanks.
> Sandeep
|
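[Editor's note] On the question of multiple GTM proxies: pgxc_ctl drives proxy deployment from the gtmProxy* arrays in pgxc_ctl.conf, and the usual layout is one proxy per server so that the local coordinators and datanodes register through it. A sketch of the relevant fragment (variable names as in the pgxc_ctl.conf template; the hosts, ports, and directories here are invented, so check them against your installed template):

```
# pgxc_ctl.conf fragment (illustrative values)
gtmProxy=y                                   # deploy GTM proxies
gtmProxyNames=(gtm_pxy1 gtm_pxy2 gtm_pxy3)   # one proxy per server
gtmProxyServers=(node1 node2 node3)
gtmProxyPorts=(20001 20001 20001)
gtmProxyDirs=($HOME/pgxc/gtm_pxy1 $HOME/pgxc/gtm_pxy2 $HOME/pgxc/gtm_pxy3)
```

Each coordinator and datanode then points its gtm_host/gtm_port at the proxy on its own server, which pgxc_ctl sets up when the cluster is initialized.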