From: Masataka S. <pg...@gm...> - 2013-11-14 08:14:35
|
Hello, all. I'm testing complex SQL statements on Postgres-XC 1.1, and I found a statement that XC doesn't accept. The SQL is from answer 2 of chapter 17 in the SQL puzzle book by Joe Celko. I simplified it so that everyone can reproduce it:

db=# CREATE TABLE t1(id INT, a INT);
CREATE TABLE
db=# CREATE TABLE t2(id INT, b INT);
CREATE TABLE
db=# SELECT DISTINCT t2.b FROM t1 JOIN t2 ON t1.id = t2.id GROUP BY b;
ERROR: ORDER/GROUP BY expression not found in targetlist

This SQL is accepted by PostgreSQL 9.2.4. Can anyone solve or explain this issue? Regards. |
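A possible workaround for the error above, untested against the XC 1.1 planner, is to push the join into a subquery so that the grouped column is a plain output column rather than a join-qualified reference; GROUP BY already returns each value of b once, so DISTINCT can be dropped. The table definitions are the ones from the report:

SELECT b
  FROM (SELECT t2.b FROM t1 JOIN t2 ON t1.id = t2.id) AS s
 GROUP BY b;

If this still trips the same error, it at least narrows the problem to how the qualified reference t2.b is matched against the GROUP BY item.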
From: Michael P. <mic...@gm...> - 2013-11-12 11:00:02
|
On Mon, Nov 11, 2013 at 10:47 AM, Tomonari Katsumata <kat...@po...> wrote:
> "EXECUTE DIRECT" requires literal strings for the query,
> but the current document doesn't say it clearly.
> This is not user-friendly.
> Attached file is a document patch for execute_direct.sgmlin
> to say it needs literal strings explicitly.
Thanks, committed. I suppose that it doesn't hurt. -- Michael |
From: Tomonari K. <kat...@po...> - 2013-11-11 01:47:28
|
Hi, "EXECUTE DIRECT" requires literal strings for the query, but the current document doesn't say it clearly. This is not user-friendly. Attached file is a document patch for execute_direct.sgmlin to say it needs literal strings explicitly. regards, ------------ NTT Software Corporation Tomonari Katsumata |
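To make the documented restriction concrete: the query handed to EXECUTE DIRECT must appear as a string literal in the statement itself, not as a parameter or a variable. A minimal sketch, assuming dn1 is a hypothetical node name registered in the cluster:

-- accepted: the query is written as a string literal
EXECUTE DIRECT ON (dn1) 'SELECT count(*) FROM pg_class';

-- not accepted: passing the query through a prepared-statement parameter
-- or a PL/pgSQL variable fails to parse, which is what this patch spells out

The ON (node_name) form above follows the execute_direct reference page being patched here; see that page for the full node-list syntax.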
From: Amit K. <ami...@en...> - 2013-11-08 05:05:53
|
On 7 November 2013 19:50, Mason Sharp <ms...@tr...> wrote: > > > > On Thu, Nov 7, 2013 at 12:45 AM, 鈴木 幸市 <ko...@in...> wrote: > >> Yes, we need to focus on such general solution for replicated tuple >> identification. >> >> I'm afraid it may take much more research and implementation work. I >> believe the submitted patch handles tuple replica based on the primary key >> or other equivalents if available. If not, the code falls into the >> current case, local CTID. The latter could lead to inconsistent replica >> but it is far better than the current situation. >> >> For short-term solution, I think Mason's code looks reasonable if I >> understand the patch correctly. >> >> Mason, do you have any more thoughts/comments? >> > > I don't think it is unreasonable if a primary/unique key is required to > handle this case. > > I did some testing, but it would be nice if someone else gave it a test > as well. I enabled statement logging to make sure it was doing the right > thing. > > I see someone mentioned a patch out there to only get needed columns. At > the moment extra columns may be used, but at least the data remains > consistent across the cluster, which is most important here. Unfortunately, > I do not have time at the moment to improve this further. If someone else > has time, that would be great, or done as a separate commit. > > Anyway, since this is a critical issue, I think it should get committed to > STABLE_1_1 once reviewed. > > > > >> --- >> Koichi Suzuki >> >> On 2013/11/07, at 14:21, Amit Khandekar <ami...@en... >> > >> wrote: >> >> >> >> >> On 6 November 2013 18:31, Michael Paquier <mic...@gm...>wrote: >> >>> On Wed, Nov 6, 2013 at 3:28 PM, Amit Khandekar >>> <ami...@en...> wrote: >>> > What exactly does the PostgreSQL FDW doc say about updates and primary >>> key ? >>> By having a look here: >>> >>> http://www.postgresql.org/docs/9.3/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE >>> It is recommended to use a kind of row ID or the primary key columns. >>> In the case of XC row ID = CTID, and its uniqueness is not guaranteed >>> except if coupled with a node ID, which I think it has... Using a CTID >>> + node ID combination makes the analysis of tuple uniqueness >>> impossible for replicated tables either way, so a primary key would be >>> better IMO. >>> >>> > How does the postgres_fdw update a table that has no primary or unique >>> key ? >>> It uses the CTID when scanning remote tuples for UPDATE/DELETE, thing >>> guarantying that tuples are unique in this case as the FDW deals with >>> a single server, here is for example the case of 2 nodes listening >>> ports 5432 and 5433. 
>>> $ psql -p 5433 -c "CREATE TABLE aa (a int, b int);" >>> CREATE TABLE >>> >>> On server with port 5432: >>> =# CREATE EXTENSION postgres_fdw; >>> CREATE EXTENSION >>> =# CREATE SERVER postgres_server FOREIGN DATA WRAPPER postgres_fdw >>> OPTIONS (host 'localhost', port '5432', dbname 'ioltas'); >>> CREATE SERVER >>> =# CREATE USER MAPPING FOR PUBLIC SERVER postgres_server OPTIONS >>> (password ''); >>> CREATE USER MAPPING >>> =# CREATE FOREIGN TABLE aa_foreign (a int, b int) SERVER >>> postgres_server OPTIONS (table_name 'aa'); >>> CREATE FOREIGN TABLE >>> =# explain verbose update aa_foreign set a = 1, b=2 where a = 1; >>> QUERY PLAN >>> >>> -------------------------------------------------------------------------------- >>> Update on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) >>> Remote SQL: UPDATE public.aa SET a = $2, b = $3 WHERE ctid = $1 >>> -> Foreign Scan on public.aa_foreign (cost=100.00..144.40 rows=14 >>> width=6) >>> Output: 1, 2, ctid >>> Remote SQL: SELECT ctid FROM public.aa WHERE ((a = 1)) FOR >>> UPDATE >>> (5 rows) >>> And ctid is used for scanning... >>> >>> > In the patch, what do we do when the replicated table has no >>> unique/primary >>> > key ? >>> I didn't look at the patch, but I think that replicated tables should >>> also need a primary key. Let's imagine something like that with >>> sessions S1 and S2 for a replication table, and 2 datanodes (1 session >>> runs in common on 1 Coordinator and each Datanode): >>> S1: INSERT VALUES foo in Dn1 >>> S2: INSERT VALUES foo2 in Dn1 >>> S2: INSERT VALUES foo2 in Dn2 >>> S1: INSERT VALUES foo in Dn2 >>> This will imply that those tuples have a different CTID, so a primary >>> key would be necessary as I think that this is possible. >>> >> >> If the patch does not handle the case of replicated table without >> unique key, I think we should have a common solution which takes care of >> this case also. Or else, if this solution can be extended to handle >> no-unique-key case, then that would be good. But I think we would end up in >> having two different implementations, one for unique-key method, and >> another for the other method, which does not seem good. >> >> The method I had in mind was : >> In the scan plan, fetch ctid, node_id from all the datanodes. Use UPDATE >> where ctd = ? , but use nodeid-based method to generate the ExecNodes at >> execute-time (enhance ExecNodes->en_expr evaluation so as to use the >> nodeid from source plan, as against the distribution column that it >> currently uses for distributed tables) . >> >> > Would that work in all cases? What if 2 tuples on each node fulfill the > criteria and a sequence is value being assigned? Might the tuples be > processed in a different order on each node the data ends up being > inconsistent (tuple A gets the value 101, B gets the value 102 on node 1, > and B gets 101, A gets 102 on node 2). I am not sure it is worth trying to > handle the case, and just require a primary key or unique index. > Yes, the tuples need to be fetched in the same order. May be using order by 1, 2, 3, ... . (I hope that if one column is found to be having unique values for a set of rows, sorting is not attempted again for the same set of rows using the remaining columns). Another approach can be considered where we would always have some kind of a hidden (or system) column for replicated tables Its type can be serial type, or an int column with default nextval('sequence_type') so that it will always be executed on coordinator. 
And use this one as against the primary key. Or may be create table with oids. But not sure if OIDs get incremented/wrapped around exactly the same way regardless of anything. Also, they are documented as having deprecated. These are just some thoughts for a long term solution. I myself don't have the bandwidth to work on XC at this moment, so won't be able to review the patch, so I don't want to fall into a situation where the patch is reworked and I won't review it after all. But these are just points to be thought of in case the would-be reviewer or the submitter feels like extending/modifying this patch for a long term solution taking care of tables without primary key. > >> But this method will not work as-is in case of non-shippable row >> triggers. Because trigger needs to be fired only once per row, and we are >> going to execute UPDATE for all of the ctids of a given row corresponding >> to all of the datanodes. So somehow we should fire triggers only once. This >> method will also hit performance, because currently we fetch *all* columns >> and not just ctid, so it's better to first do that optimization of fetching >> only reqd columns (there's one pending patch submitted in the mailing list, >> which fixes this). >> >> This is just one approach, there might be better approaches.. >> >> Overall, I think if we decide to get this issue solved (and I think we >> should really, this is a serious issue), sufficient resource time needs to >> be given to think over and have discussions before we finalize the approach. >> >> >> -- >>> Michael >>> >> >> >> ------------------------------------------------------------------------------ >> November Webinars for C, C++, Fortran Developers >> Accelerate application performance with scalable programming models. >> Explore >> techniques for threading, error checking, porting, and tuning. Get the >> most >> from the latest Intel processors and coprocessors. See abstracts and >> register >> >> http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk_______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> >> ------------------------------------------------------------------------------ >> November Webinars for C, C++, Fortran Developers >> Accelerate application performance with scalable programming models. >> Explore >> techniques for threading, error checking, porting, and tuning. Get the >> most >> from the latest Intel processors and coprocessors. See abstracts and >> register >> >> http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > Mason Sharp > > TransLattice - http://www.translattice.com > Distributed and Clustered Database Solutions > > > |
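One user-level approximation of the hidden-column idea mentioned above is to give a replicated table an explicit surrogate key, so a coordinator-evaluated UPDATE/DELETE has something other than CTID to match on. A sketch with hypothetical names; whether the sequence default is always evaluated once (so every replica receives the same value) is exactly the open question in this thread, so this is not offered as a guaranteed fix:

-- DISTRIBUTE BY REPLICATION is the XC clause for replicated tables
CREATE TABLE r (
    rid bigserial PRIMARY KEY,   -- surrogate identifier, intended to be identical on all replicas
    payload text
) DISTRIBUTE BY REPLICATION;

-- a coordinator-evaluated update can then qualify rows by rid instead of ctid
UPDATE r SET payload = 'x' WHERE rid = 42;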
From: Mason S. <ms...@tr...> - 2013-11-07 14:20:20
|
On Thu, Nov 7, 2013 at 12:45 AM, 鈴木 幸市 <ko...@in...> wrote: > Yes, we need to focus on such general solution for replicated tuple > identification. > > I'm afraid it may take much more research and implementation work. I > believe the submitted patch handles tuple replica based on the primary key > or other equivalents if available. If not, the code falls into the > current case, local CTID. The latter could lead to inconsistent replica > but it is far better than the current situation. > > For short-term solution, I think Mason's code looks reasonable if I > understand the patch correctly. > > Mason, do you have any more thoughts/comments? > I don't think it is unreasonable if a primary/unique key is required to handle this case. I did some testing, but it would be nice if someone else gave it a test as well. I enabled statement logging to make sure it was doing the right thing. I see someone mentioned a patch out there to only get needed columns. At the moment extra columns may be used, but at least the data remains consistent across the cluster, which is most important here. Unfortunately, I do not have time at the moment to improve this further. If someone else has time, that would be great, or done as a separate commit. Anyway, since this is a critical issue, I think it should get committed to STABLE_1_1 once reviewed. > --- > Koichi Suzuki > > On 2013/11/07, at 14:21, Amit Khandekar <ami...@en...> > wrote: > > > > > On 6 November 2013 18:31, Michael Paquier <mic...@gm...>wrote: > >> On Wed, Nov 6, 2013 at 3:28 PM, Amit Khandekar >> <ami...@en...> wrote: >> > What exactly does the PostgreSQL FDW doc say about updates and primary >> key ? >> By having a look here: >> >> http://www.postgresql.org/docs/9.3/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE >> It is recommended to use a kind of row ID or the primary key columns. >> In the case of XC row ID = CTID, and its uniqueness is not guaranteed >> except if coupled with a node ID, which I think it has... Using a CTID >> + node ID combination makes the analysis of tuple uniqueness >> impossible for replicated tables either way, so a primary key would be >> better IMO. >> >> > How does the postgres_fdw update a table that has no primary or unique >> key ? >> It uses the CTID when scanning remote tuples for UPDATE/DELETE, thing >> guarantying that tuples are unique in this case as the FDW deals with >> a single server, here is for example the case of 2 nodes listening >> ports 5432 and 5433. 
>> $ psql -p 5433 -c "CREATE TABLE aa (a int, b int);" >> CREATE TABLE >> >> On server with port 5432: >> =# CREATE EXTENSION postgres_fdw; >> CREATE EXTENSION >> =# CREATE SERVER postgres_server FOREIGN DATA WRAPPER postgres_fdw >> OPTIONS (host 'localhost', port '5432', dbname 'ioltas'); >> CREATE SERVER >> =# CREATE USER MAPPING FOR PUBLIC SERVER postgres_server OPTIONS >> (password ''); >> CREATE USER MAPPING >> =# CREATE FOREIGN TABLE aa_foreign (a int, b int) SERVER >> postgres_server OPTIONS (table_name 'aa'); >> CREATE FOREIGN TABLE >> =# explain verbose update aa_foreign set a = 1, b=2 where a = 1; >> QUERY PLAN >> >> -------------------------------------------------------------------------------- >> Update on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) >> Remote SQL: UPDATE public.aa SET a = $2, b = $3 WHERE ctid = $1 >> -> Foreign Scan on public.aa_foreign (cost=100.00..144.40 rows=14 >> width=6) >> Output: 1, 2, ctid >> Remote SQL: SELECT ctid FROM public.aa WHERE ((a = 1)) FOR UPDATE >> (5 rows) >> And ctid is used for scanning... >> >> > In the patch, what do we do when the replicated table has no >> unique/primary >> > key ? >> I didn't look at the patch, but I think that replicated tables should >> also need a primary key. Let's imagine something like that with >> sessions S1 and S2 for a replication table, and 2 datanodes (1 session >> runs in common on 1 Coordinator and each Datanode): >> S1: INSERT VALUES foo in Dn1 >> S2: INSERT VALUES foo2 in Dn1 >> S2: INSERT VALUES foo2 in Dn2 >> S1: INSERT VALUES foo in Dn2 >> This will imply that those tuples have a different CTID, so a primary >> key would be necessary as I think that this is possible. >> > > If the patch does not handle the case of replicated table without unique > key, I think we should have a common solution which takes care of this case > also. Or else, if this solution can be extended to handle no-unique-key > case, then that would be good. But I think we would end up in having two > different implementations, one for unique-key method, and another for the > other method, which does not seem good. > > The method I had in mind was : > In the scan plan, fetch ctid, node_id from all the datanodes. Use UPDATE > where ctd = ? , but use nodeid-based method to generate the ExecNodes at > execute-time (enhance ExecNodes->en_expr evaluation so as to use the > nodeid from source plan, as against the distribution column that it > currently uses for distributed tables) . > > Would that work in all cases? What if 2 tuples on each node fulfill the criteria and a sequence is value being assigned? Might the tuples be processed in a different order on each node the data ends up being inconsistent (tuple A gets the value 101, B gets the value 102 on node 1, and B gets 101, A gets 102 on node 2). I am not sure it is worth trying to handle the case, and just require a primary key or unique index. > But this method will not work as-is in case of non-shippable row triggers. > Because trigger needs to be fired only once per row, and we are going to > execute UPDATE for all of the ctids of a given row corresponding to all of > the datanodes. So somehow we should fire triggers only once. This method > will also hit performance, because currently we fetch *all* columns and not > just ctid, so it's better to first do that optimization of fetching only > reqd columns (there's one pending patch submitted in the mailing list, > which fixes this). > > This is just one approach, there might be better approaches.. 
> > Overall, I think if we decide to get this issue solved (and I think we > should really, this is a serious issue), sufficient resource time needs to > be given to think over and have discussions before we finalize the approach. > > > -- >> Michael >> > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the > most > from the latest Intel processors and coprocessors. See abstracts and > register > > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the most > from the latest Intel processors and coprocessors. See abstracts and > register > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- Mason Sharp TransLattice - http://www.translattice.com Distributed and Clustered Database Solutions |
From: Koichi S. <koi...@gm...> - 2013-11-07 06:07:36
|
This could be an option if such incompatibility is acceptable. I understand many such replication system assumes primary keys in all the tables so the restriction itself is acceptable. Unique key cannot handle NULL valued key correctly so primary or unique not null (equivalent to primary key) constraint will be better. Regards; --- Koichi Suzuki 2013/11/7 Nikhil Sontakke <ni...@st...> > Isn't it fair to error out if there are no primary/unique keys on the > replicated table? IMO, Mason's approach handles things pretty ok for tables > with unique type of keys. > > Regards, > Nikhils > > > On Thu, Nov 7, 2013 at 11:15 AM, 鈴木 幸市 <ko...@in...> wrote: > >> Yes, we need to focus on such general solution for replicated tuple >> identification. >> >> I'm afraid it may take much more research and implementation work. I >> believe the submitted patch handles tuple replica based on the primary key >> or other equivalents if available. If not, the code falls into the >> current case, local CTID. The latter could lead to inconsistent replica >> but it is far better than the current situation. >> >> For short-term solution, I think Mason's code looks reasonable if I >> understand the patch correctly. >> >> Mason, do you have any more thoughts/comments? >> --- >> Koichi Suzuki >> >> On 2013/11/07, at 14:21, Amit Khandekar <ami...@en... >> > >> wrote: >> >> >> >> >> On 6 November 2013 18:31, Michael Paquier <mic...@gm...>wrote: >> >>> On Wed, Nov 6, 2013 at 3:28 PM, Amit Khandekar >>> <ami...@en...> wrote: >>> > What exactly does the PostgreSQL FDW doc say about updates and primary >>> key ? >>> By having a look here: >>> >>> http://www.postgresql.org/docs/9.3/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE >>> It is recommended to use a kind of row ID or the primary key columns. >>> In the case of XC row ID = CTID, and its uniqueness is not guaranteed >>> except if coupled with a node ID, which I think it has... Using a CTID >>> + node ID combination makes the analysis of tuple uniqueness >>> impossible for replicated tables either way, so a primary key would be >>> better IMO. >>> >>> > How does the postgres_fdw update a table that has no primary or unique >>> key ? >>> It uses the CTID when scanning remote tuples for UPDATE/DELETE, thing >>> guarantying that tuples are unique in this case as the FDW deals with >>> a single server, here is for example the case of 2 nodes listening >>> ports 5432 and 5433. 
>>> $ psql -p 5433 -c "CREATE TABLE aa (a int, b int);" >>> CREATE TABLE >>> >>> On server with port 5432: >>> =# CREATE EXTENSION postgres_fdw; >>> CREATE EXTENSION >>> =# CREATE SERVER postgres_server FOREIGN DATA WRAPPER postgres_fdw >>> OPTIONS (host 'localhost', port '5432', dbname 'ioltas'); >>> CREATE SERVER >>> =# CREATE USER MAPPING FOR PUBLIC SERVER postgres_server OPTIONS >>> (password ''); >>> CREATE USER MAPPING >>> =# CREATE FOREIGN TABLE aa_foreign (a int, b int) SERVER >>> postgres_server OPTIONS (table_name 'aa'); >>> CREATE FOREIGN TABLE >>> =# explain verbose update aa_foreign set a = 1, b=2 where a = 1; >>> QUERY PLAN >>> >>> -------------------------------------------------------------------------------- >>> Update on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) >>> Remote SQL: UPDATE public.aa SET a = $2, b = $3 WHERE ctid = $1 >>> -> Foreign Scan on public.aa_foreign (cost=100.00..144.40 rows=14 >>> width=6) >>> Output: 1, 2, ctid >>> Remote SQL: SELECT ctid FROM public.aa WHERE ((a = 1)) FOR >>> UPDATE >>> (5 rows) >>> And ctid is used for scanning... >>> >>> > In the patch, what do we do when the replicated table has no >>> unique/primary >>> > key ? >>> I didn't look at the patch, but I think that replicated tables should >>> also need a primary key. Let's imagine something like that with >>> sessions S1 and S2 for a replication table, and 2 datanodes (1 session >>> runs in common on 1 Coordinator and each Datanode): >>> S1: INSERT VALUES foo in Dn1 >>> S2: INSERT VALUES foo2 in Dn1 >>> S2: INSERT VALUES foo2 in Dn2 >>> S1: INSERT VALUES foo in Dn2 >>> This will imply that those tuples have a different CTID, so a primary >>> key would be necessary as I think that this is possible. >>> >> >> If the patch does not handle the case of replicated table without >> unique key, I think we should have a common solution which takes care of >> this case also. Or else, if this solution can be extended to handle >> no-unique-key case, then that would be good. But I think we would end up in >> having two different implementations, one for unique-key method, and >> another for the other method, which does not seem good. >> >> The method I had in mind was : >> In the scan plan, fetch ctid, node_id from all the datanodes. Use UPDATE >> where ctd = ? , but use nodeid-based method to generate the ExecNodes at >> execute-time (enhance ExecNodes->en_expr evaluation so as to use the >> nodeid from source plan, as against the distribution column that it >> currently uses for distributed tables) . >> But this method will not work as-is in case of non-shippable row >> triggers. Because trigger needs to be fired only once per row, and we are >> going to execute UPDATE for all of the ctids of a given row corresponding >> to all of the datanodes. So somehow we should fire triggers only once. This >> method will also hit performance, because currently we fetch *all* columns >> and not just ctid, so it's better to first do that optimization of fetching >> only reqd columns (there's one pending patch submitted in the mailing list, >> which fixes this). >> >> This is just one approach, there might be better approaches.. >> >> Overall, I think if we decide to get this issue solved (and I think we >> should really, this is a serious issue), sufficient resource time needs to >> be given to think over and have discussions before we finalize the approach. 
>> >> >> -- >>> Michael >>> >> >> >> ------------------------------------------------------------------------------ >> November Webinars for C, C++, Fortran Developers >> Accelerate application performance with scalable programming models. >> Explore >> techniques for threading, error checking, porting, and tuning. Get the >> most >> from the latest Intel processors and coprocessors. See abstracts and >> register >> >> http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk_______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> >> >> >> ------------------------------------------------------------------------------ >> November Webinars for C, C++, Fortran Developers >> Accelerate application performance with scalable programming models. >> Explore >> techniques for threading, error checking, porting, and tuning. Get the >> most >> from the latest Intel processors and coprocessors. See abstracts and >> register >> >> http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk >> _______________________________________________ >> Postgres-xc-developers mailing list >> Pos...@li... >> https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers >> >> > > > -- > StormDB - http://www.stormdb.com > The Database Cloud > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the most > from the latest Intel processors and coprocessors. See abstracts and > register > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > |
From: Nikhil S. <ni...@st...> - 2013-11-07 05:57:43
|
Isn't it fair to error out if there are no primary/unique keys on the replicated table? IMO, Mason's approach handles things pretty ok for tables with unique type of keys. Regards, Nikhils On Thu, Nov 7, 2013 at 11:15 AM, 鈴木 幸市 <ko...@in...> wrote: > Yes, we need to focus on such general solution for replicated tuple > identification. > > I'm afraid it may take much more research and implementation work. I > believe the submitted patch handles tuple replica based on the primary key > or other equivalents if available. If not, the code falls into the > current case, local CTID. The latter could lead to inconsistent replica > but it is far better than the current situation. > > For short-term solution, I think Mason's code looks reasonable if I > understand the patch correctly. > > Mason, do you have any more thoughts/comments? > --- > Koichi Suzuki > > On 2013/11/07, at 14:21, Amit Khandekar <ami...@en...> > wrote: > > > > > On 6 November 2013 18:31, Michael Paquier <mic...@gm...>wrote: > >> On Wed, Nov 6, 2013 at 3:28 PM, Amit Khandekar >> <ami...@en...> wrote: >> > What exactly does the PostgreSQL FDW doc say about updates and primary >> key ? >> By having a look here: >> >> http://www.postgresql.org/docs/9.3/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE >> It is recommended to use a kind of row ID or the primary key columns. >> In the case of XC row ID = CTID, and its uniqueness is not guaranteed >> except if coupled with a node ID, which I think it has... Using a CTID >> + node ID combination makes the analysis of tuple uniqueness >> impossible for replicated tables either way, so a primary key would be >> better IMO. >> >> > How does the postgres_fdw update a table that has no primary or unique >> key ? >> It uses the CTID when scanning remote tuples for UPDATE/DELETE, thing >> guarantying that tuples are unique in this case as the FDW deals with >> a single server, here is for example the case of 2 nodes listening >> ports 5432 and 5433. >> $ psql -p 5433 -c "CREATE TABLE aa (a int, b int);" >> CREATE TABLE >> >> On server with port 5432: >> =# CREATE EXTENSION postgres_fdw; >> CREATE EXTENSION >> =# CREATE SERVER postgres_server FOREIGN DATA WRAPPER postgres_fdw >> OPTIONS (host 'localhost', port '5432', dbname 'ioltas'); >> CREATE SERVER >> =# CREATE USER MAPPING FOR PUBLIC SERVER postgres_server OPTIONS >> (password ''); >> CREATE USER MAPPING >> =# CREATE FOREIGN TABLE aa_foreign (a int, b int) SERVER >> postgres_server OPTIONS (table_name 'aa'); >> CREATE FOREIGN TABLE >> =# explain verbose update aa_foreign set a = 1, b=2 where a = 1; >> QUERY PLAN >> >> -------------------------------------------------------------------------------- >> Update on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) >> Remote SQL: UPDATE public.aa SET a = $2, b = $3 WHERE ctid = $1 >> -> Foreign Scan on public.aa_foreign (cost=100.00..144.40 rows=14 >> width=6) >> Output: 1, 2, ctid >> Remote SQL: SELECT ctid FROM public.aa WHERE ((a = 1)) FOR UPDATE >> (5 rows) >> And ctid is used for scanning... >> >> > In the patch, what do we do when the replicated table has no >> unique/primary >> > key ? >> I didn't look at the patch, but I think that replicated tables should >> also need a primary key. 
Let's imagine something like that with >> sessions S1 and S2 for a replication table, and 2 datanodes (1 session >> runs in common on 1 Coordinator and each Datanode): >> S1: INSERT VALUES foo in Dn1 >> S2: INSERT VALUES foo2 in Dn1 >> S2: INSERT VALUES foo2 in Dn2 >> S1: INSERT VALUES foo in Dn2 >> This will imply that those tuples have a different CTID, so a primary >> key would be necessary as I think that this is possible. >> > > If the patch does not handle the case of replicated table without unique > key, I think we should have a common solution which takes care of this case > also. Or else, if this solution can be extended to handle no-unique-key > case, then that would be good. But I think we would end up in having two > different implementations, one for unique-key method, and another for the > other method, which does not seem good. > > The method I had in mind was : > In the scan plan, fetch ctid, node_id from all the datanodes. Use UPDATE > where ctd = ? , but use nodeid-based method to generate the ExecNodes at > execute-time (enhance ExecNodes->en_expr evaluation so as to use the > nodeid from source plan, as against the distribution column that it > currently uses for distributed tables) . > But this method will not work as-is in case of non-shippable row triggers. > Because trigger needs to be fired only once per row, and we are going to > execute UPDATE for all of the ctids of a given row corresponding to all of > the datanodes. So somehow we should fire triggers only once. This method > will also hit performance, because currently we fetch *all* columns and not > just ctid, so it's better to first do that optimization of fetching only > reqd columns (there's one pending patch submitted in the mailing list, > which fixes this). > > This is just one approach, there might be better approaches.. > > Overall, I think if we decide to get this issue solved (and I think we > should really, this is a serious issue), sufficient resource time needs to > be given to think over and have discussions before we finalize the approach. > > > -- >> Michael >> > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the > most > from the latest Intel processors and coprocessors. See abstracts and > register > > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk_______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the most > from the latest Intel processors and coprocessors. See abstracts and > register > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > > -- StormDB - http://www.stormdb.com The Database Cloud |
From: 鈴木 幸市 <ko...@in...> - 2013-11-07 05:45:32
|
Yes, we need to focus on such general solution for replicated tuple identification. I'm afraid it may take much more research and implementation work. I believe the submitted patch handles tuple replica based on the primary key or other equivalents if available. If not, the code falls into the current case, local CTID. The latter could lead to inconsistent replica but it is far better than the current situation. For short-term solution, I think Mason's code looks reasonable if I understand the patch correctly. Mason, do you have any more thoughts/comments? --- Koichi Suzuki On 2013/11/07, at 14:21, Amit Khandekar <ami...@en...<mailto:ami...@en...>> wrote: On 6 November 2013 18:31, Michael Paquier <mic...@gm...<mailto:mic...@gm...>> wrote: On Wed, Nov 6, 2013 at 3:28 PM, Amit Khandekar <ami...@en...<mailto:ami...@en...>> wrote: > What exactly does the PostgreSQL FDW doc say about updates and primary key ? By having a look here: http://www.postgresql.org/docs/9.3/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE It is recommended to use a kind of row ID or the primary key columns. In the case of XC row ID = CTID, and its uniqueness is not guaranteed except if coupled with a node ID, which I think it has... Using a CTID + node ID combination makes the analysis of tuple uniqueness impossible for replicated tables either way, so a primary key would be better IMO. > How does the postgres_fdw update a table that has no primary or unique key ? It uses the CTID when scanning remote tuples for UPDATE/DELETE, thing guarantying that tuples are unique in this case as the FDW deals with a single server, here is for example the case of 2 nodes listening ports 5432 and 5433. $ psql -p 5433 -c "CREATE TABLE aa (a int, b int);" CREATE TABLE On server with port 5432: =# CREATE EXTENSION postgres_fdw; CREATE EXTENSION =# CREATE SERVER postgres_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'localhost', port '5432', dbname 'ioltas'); CREATE SERVER =# CREATE USER MAPPING FOR PUBLIC SERVER postgres_server OPTIONS (password ''); CREATE USER MAPPING =# CREATE FOREIGN TABLE aa_foreign (a int, b int) SERVER postgres_server OPTIONS (table_name 'aa'); CREATE FOREIGN TABLE =# explain verbose update aa_foreign set a = 1, b=2 where a = 1; QUERY PLAN -------------------------------------------------------------------------------- Update on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) Remote SQL: UPDATE public.aa SET a = $2, b = $3 WHERE ctid = $1 -> Foreign Scan on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) Output: 1, 2, ctid Remote SQL: SELECT ctid FROM public.aa WHERE ((a = 1)) FOR UPDATE (5 rows) And ctid is used for scanning... > In the patch, what do we do when the replicated table has no unique/primary > key ? I didn't look at the patch, but I think that replicated tables should also need a primary key. Let's imagine something like that with sessions S1 and S2 for a replication table, and 2 datanodes (1 session runs in common on 1 Coordinator and each Datanode): S1: INSERT VALUES foo in Dn1 S2: INSERT VALUES foo2 in Dn1 S2: INSERT VALUES foo2 in Dn2 S1: INSERT VALUES foo in Dn2 This will imply that those tuples have a different CTID, so a primary key would be necessary as I think that this is possible. If the patch does not handle the case of replicated table without unique key, I think we should have a common solution which takes care of this case also. Or else, if this solution can be extended to handle no-unique-key case, then that would be good. 
But I think we would end up in having two different implementations, one for unique-key method, and another for the other method, which does not seem good. The method I had in mind was : In the scan plan, fetch ctid, node_id from all the datanodes. Use UPDATE where ctd = ? , but use nodeid-based method to generate the ExecNodes at execute-time (enhance ExecNodes->en_expr evaluation so as to use the nodeid from source plan, as against the distribution column that it currently uses for distributed tables) . But this method will not work as-is in case of non-shippable row triggers. Because trigger needs to be fired only once per row, and we are going to execute UPDATE for all of the ctids of a given row corresponding to all of the datanodes. So somehow we should fire triggers only once. This method will also hit performance, because currently we fetch *all* columns and not just ctid, so it's better to first do that optimization of fetching only reqd columns (there's one pending patch submitted in the mailing list, which fixes this). This is just one approach, there might be better approaches. Overall, I think if we decide to get this issue solved (and I think we should really, this is a serious issue), sufficient resource time needs to be given to think over and have discussions before we finalize the approach. -- Michael ------------------------------------------------------------------------------ November Webinars for C, C++, Fortran Developers Accelerate application performance with scalable programming models. Explore techniques for threading, error checking, porting, and tuning. Get the most from the latest Intel processors and coprocessors. See abstracts and register http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk_______________________________________________ Postgres-xc-developers mailing list Pos...@li... https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers |
From: Amit K. <ami...@en...> - 2013-11-07 05:22:33
|
On 6 November 2013 18:31, Michael Paquier <mic...@gm...> wrote: > On Wed, Nov 6, 2013 at 3:28 PM, Amit Khandekar > <ami...@en...> wrote: > > What exactly does the PostgreSQL FDW doc say about updates and primary > key ? > By having a look here: > > http://www.postgresql.org/docs/9.3/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE > It is recommended to use a kind of row ID or the primary key columns. > In the case of XC row ID = CTID, and its uniqueness is not guaranteed > except if coupled with a node ID, which I think it has... Using a CTID > + node ID combination makes the analysis of tuple uniqueness > impossible for replicated tables either way, so a primary key would be > better IMO. > > > How does the postgres_fdw update a table that has no primary or unique > key ? > It uses the CTID when scanning remote tuples for UPDATE/DELETE, thing > guarantying that tuples are unique in this case as the FDW deals with > a single server, here is for example the case of 2 nodes listening > ports 5432 and 5433. > $ psql -p 5433 -c "CREATE TABLE aa (a int, b int);" > CREATE TABLE > > On server with port 5432: > =# CREATE EXTENSION postgres_fdw; > CREATE EXTENSION > =# CREATE SERVER postgres_server FOREIGN DATA WRAPPER postgres_fdw > OPTIONS (host 'localhost', port '5432', dbname 'ioltas'); > CREATE SERVER > =# CREATE USER MAPPING FOR PUBLIC SERVER postgres_server OPTIONS (password > ''); > CREATE USER MAPPING > =# CREATE FOREIGN TABLE aa_foreign (a int, b int) SERVER > postgres_server OPTIONS (table_name 'aa'); > CREATE FOREIGN TABLE > =# explain verbose update aa_foreign set a = 1, b=2 where a = 1; > QUERY PLAN > > -------------------------------------------------------------------------------- > Update on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) > Remote SQL: UPDATE public.aa SET a = $2, b = $3 WHERE ctid = $1 > -> Foreign Scan on public.aa_foreign (cost=100.00..144.40 rows=14 > width=6) > Output: 1, 2, ctid > Remote SQL: SELECT ctid FROM public.aa WHERE ((a = 1)) FOR UPDATE > (5 rows) > And ctid is used for scanning... > > > In the patch, what do we do when the replicated table has no > unique/primary > > key ? > I didn't look at the patch, but I think that replicated tables should > also need a primary key. Let's imagine something like that with > sessions S1 and S2 for a replication table, and 2 datanodes (1 session > runs in common on 1 Coordinator and each Datanode): > S1: INSERT VALUES foo in Dn1 > S2: INSERT VALUES foo2 in Dn1 > S2: INSERT VALUES foo2 in Dn2 > S1: INSERT VALUES foo in Dn2 > This will imply that those tuples have a different CTID, so a primary > key would be necessary as I think that this is possible. > If the patch does not handle the case of replicated table without unique key, I think we should have a common solution which takes care of this case also. Or else, if this solution can be extended to handle no-unique-key case, then that would be good. But I think we would end up in having two different implementations, one for unique-key method, and another for the other method, which does not seem good. The method I had in mind was : In the scan plan, fetch ctid, node_id from all the datanodes. Use UPDATE where ctd = ? , but use nodeid-based method to generate the ExecNodes at execute-time (enhance ExecNodes->en_expr evaluation so as to use the nodeid from source plan, as against the distribution column that it currently uses for distributed tables) . But this method will not work as-is in case of non-shippable row triggers. 
Because trigger needs to be fired only once per row, and we are going to execute UPDATE for all of the ctids of a given row corresponding to all of the datanodes. So somehow we should fire triggers only once. This method will also hit performance, because currently we fetch *all* columns and not just ctid, so it's better to first do that optimization of fetching only reqd columns (there's one pending patch submitted in the mailing list, which fixes this). This is just one approach, there might be better approaches. Overall, I think if we decide to get this issue solved (and I think we should really, this is a serious issue), sufficient resource time needs to be given to think over and have discussions before we finalize the approach. -- > Michael > |
From: Michael P. <mic...@gm...> - 2013-11-06 13:01:11
|
On Wed, Nov 6, 2013 at 3:28 PM, Amit Khandekar <ami...@en...> wrote: > What exactly does the PostgreSQL FDW doc say about updates and primary key ? By having a look here: http://www.postgresql.org/docs/9.3/static/fdw-callbacks.html#FDW-CALLBACKS-UPDATE It is recommended to use a kind of row ID or the primary key columns. In the case of XC row ID = CTID, and its uniqueness is not guaranteed except if coupled with a node ID, which I think it has... Using a CTID + node ID combination makes the analysis of tuple uniqueness impossible for replicated tables either way, so a primary key would be better IMO. > How does the postgres_fdw update a table that has no primary or unique key ? It uses the CTID when scanning remote tuples for UPDATE/DELETE, thing guarantying that tuples are unique in this case as the FDW deals with a single server, here is for example the case of 2 nodes listening ports 5432 and 5433. $ psql -p 5433 -c "CREATE TABLE aa (a int, b int);" CREATE TABLE On server with port 5432: =# CREATE EXTENSION postgres_fdw; CREATE EXTENSION =# CREATE SERVER postgres_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'localhost', port '5432', dbname 'ioltas'); CREATE SERVER =# CREATE USER MAPPING FOR PUBLIC SERVER postgres_server OPTIONS (password ''); CREATE USER MAPPING =# CREATE FOREIGN TABLE aa_foreign (a int, b int) SERVER postgres_server OPTIONS (table_name 'aa'); CREATE FOREIGN TABLE =# explain verbose update aa_foreign set a = 1, b=2 where a = 1; QUERY PLAN -------------------------------------------------------------------------------- Update on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) Remote SQL: UPDATE public.aa SET a = $2, b = $3 WHERE ctid = $1 -> Foreign Scan on public.aa_foreign (cost=100.00..144.40 rows=14 width=6) Output: 1, 2, ctid Remote SQL: SELECT ctid FROM public.aa WHERE ((a = 1)) FOR UPDATE (5 rows) And ctid is used for scanning... > In the patch, what do we do when the replicated table has no unique/primary > key ? I didn't look at the patch, but I think that replicated tables should also need a primary key. Let's imagine something like that with sessions S1 and S2 for a replication table, and 2 datanodes (1 session runs in common on 1 Coordinator and each Datanode): S1: INSERT VALUES foo in Dn1 S2: INSERT VALUES foo2 in Dn1 S2: INSERT VALUES foo2 in Dn2 S1: INSERT VALUES foo in Dn2 This will imply that those tuples have a different CTID, so a primary key would be necessary as I think that this is possible. -- Michael |
From: Amit K. <ami...@en...> - 2013-11-06 06:34:56
|
On 6 November 2013 09:55, Michael Paquier <mic...@gm...> wrote: > On Wed, Nov 6, 2013 at 11:00 AM, Koichi Suzuki <koi...@gm...> > wrote: > > I'm thinking to correct trigger implementation to use primary key or > "unique > > and not null" columns. As you suggested, this could be a serious > problem. > > I found that CTID can be different very easily in replicated tables and > it > > is very dangerous to use current CTID to identify tuples. , > This looks like an approach close to what is recommended for FDWs in > the Postgres docs, where a primary key should be used to scan foreign > tuples, primarily for writable APIs. So yes it might be the way to > go... > I won't have bandwidth to review the patch itself. But just a few questions : What exactly does the PostgreSQL FDW doc say about updates and primary key ? How does the postgres_fdw update a table that has no primary or unique key ? In the patch, what do we do when the replicated table has no unique/primary key ? -- > Michael > > > ------------------------------------------------------------------------------ > November Webinars for C, C++, Fortran Developers > Accelerate application performance with scalable programming models. > Explore > techniques for threading, error checking, porting, and tuning. Get the most > from the latest Intel processors and coprocessors. See abstracts and > register > http://pubads.g.doubleclick.net/gampad/clk?id=60136231&iu=/4140/ostg.clktrk > _______________________________________________ > Postgres-xc-developers mailing list > Pos...@li... > https://lists.sourceforge.net/lists/listinfo/postgres-xc-developers > |
From: Michael P. <mic...@gm...> - 2013-11-06 04:25:59
|
On Wed, Nov 6, 2013 at 11:00 AM, Koichi Suzuki <koi...@gm...> wrote:
> I'm thinking to correct trigger implementation to use primary key or "unique
> and not null" columns. As you suggested, this could be a serious problem.
> I found that CTID can be different very easily in replicated tables and it
> is very dangerous to use current CTID to identify tuples.
This looks like an approach close to what is recommended for FDWs in the Postgres docs, where a primary key should be used to scan foreign tuples, primarily for writable APIs. So yes it might be the way to go... -- Michael |
From: Koichi S. <koi...@gm...> - 2013-11-06 02:00:23
|
I'm thinking to correct trigger implementation to use primary key or "unique and not null" columns. As you suggested, this could be a serious problem. I found that CTID can be different very easily in replicated tables and it is very dangerous to use current CTID to identify tuples.

CTID of replicated tables can be different very easily as follows:

1. Start at the status where a block B1 of the table S can accommodate only one more tuple both in the datanodes D1 and D2. D1 is the primary node so every transaction updating S comes to D1 first.
2. Transactions T1 and T2 are trying to insert new tuples R1 and R2 respectively. Suppose they are not conflicting.
3. T1 comes first to D1 and inserts R1, which goes to B1.
4. Then T2 comes to D1 and inserts R2, which goes to a new block B2.
5. The coordinator running T1 is very busy, so T2 comes first to D2 and inserts R2, which goes to B1.
6. Then T1 comes to D2 and inserts R1, which goes to a new block B2.

So the CTID of R1 points to B1 at D1 and B2 at D2, while the CTID of R2 points to B2 at D1 and B1 at D2. This can happen very easily but is not easy to reproduce systematically. It is simple to do a similar hypothetical experiment inserting rows even into the same block. Any more thoughts? Regards; --- Koichi Suzuki

2013/11/6 Mason Sharp <ms...@tr...> > > > > On Mon, Nov 4, 2013 at 8:17 PM, Koichi Suzuki <koi...@gm...>wrote: > >> Mason; >> >> Thanks for this big patch. I understand the intention and would like >> other members to look at this patch too. >> > > Sure. > > >> >> I'd like to know your idea if this can be a part of the TRIGGER >> improvement? >> > > What did you have in mind? Yes, this impacts triggers as well. > > Regards, > > Mason > > >> >> >> 2013/11/2 Mason Sharp <ms...@tr...> >> >>> Please see attached patch that tries to address the issue of XC using >>> CTID for replicated updates and deletes when it is evaluated at a >>> coordinator instead of being pushed down. >>> >>> The problem here is that CTID could be referring to a different tuple >>> altogether on a different data node, which is what happened for one of our >>> Postgres-XC support customers, leading to data issues. >>> >>> Instead, the patch looks for a primary key or unique index (with the >>> primary key preferred) and uses those values instead of CTID. >>> >>> The patch could be improved further. Extra parameters are set even if >>> not used in the execution of the prepared statement sent down to the data >>> nodes. >>> >>> >>> > -- > Mason Sharp > > TransLattice - http://www.translattice.com > Distributed and Clustered Database Solutions > > > |
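The divergence described above can be observed directly by asking each datanode for its local ctids, for example with EXECUTE DIRECT from a coordinator. The node names dn1 and dn2 and the replicated table s are hypothetical; the actual ctid values depend entirely on the insert history:

-- each statement is shipped verbatim to a single datanode
EXECUTE DIRECT ON (dn1) 'SELECT ctid, * FROM s';
EXECUTE DIRECT ON (dn2) 'SELECT ctid, * FROM s';

-- if the same logical row shows different ctids on dn1 and dn2, a
-- coordinator-side UPDATE ... WHERE ctid = ... cannot target both replicas safely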
From: Mason S. <ms...@tr...> - 2013-11-05 18:35:37
|
On Mon, Nov 4, 2013 at 8:17 PM, Koichi Suzuki <koi...@gm...> wrote: > Mason; > > Thanks for this big patch. I understand the intention and would like > other members to look at this patch too. > Sure. > > I'd like to know your idea if this can be a part of the TRIGGER > improvement? > What did you have in mind? Yes, this impacts triggers as well. Regards, Mason > > > 2013/11/2 Mason Sharp <ms...@tr...> > >> Please see attached patch that tries to address the issue of XC using >> CTID for replicated updates and deletes when it is evaluated at a >> coordinator instead of being pushed down. >> >> The problem here is that CTID could be referring to a different tuple >> altogether on a different data node, which is what happened for one of our >> Postgres-XC support customers, leading to data issues. >> >> Instead, the patch looks for a primary key or unique index (with the >> primary key preferred) and uses those values instead of CTID. >> >> The patch could be improved further. Extra parameters are set even if >> not used in the execution of the prepared statement sent down to the data >> nodes. >> >> >> -- Mason Sharp TransLattice - http://www.translattice.com Distributed and Clustered Database Solutions |
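To make the intent of the patch concrete: for a coordinator-evaluated UPDATE on a replicated table, the remote statement should qualify rows by primary-key (or unique-index) columns instead of by ctid. The two statements below are a hypothetical rendering for an imaginary table r with primary key rid, written in the same parameterized form as the EXPLAIN output quoted earlier in the thread; they are not literal output from the patch:

-- ctid-qualified remote update: may hit a different physical row on each datanode
UPDATE public.r SET payload = $1 WHERE ctid = $2;

-- key-qualified remote update: targets the same logical row on every datanode
UPDATE public.r SET payload = $1 WHERE rid = $2;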
From: 鈴木 幸市 <ko...@in...> - 2013-11-05 07:17:48
|
There could be several different cause of the problem. First of all, to ensure that there's no more outstanding 2PCs, could you visit all the nodes (coordinators and datanodes) directly with psql and see if there's no running 2PCs in any of the nodes? If you find any, you should rollback all of them using the same technique. If not, could you restart the whole cluster? I advise to use -m immediate or -m fast option to stop them with pg_ctl. Then restart the cluster and see what's going on. I believe only 2PC-repated resources are carried over when nodes are restarted and all the others are cleared up. I do hope you a allowed to do so. Regards; --- Koichi Suzuki On 2013/11/05, at 12:15, "West, William" <ww...@uc...<mailto:ww...@uc...>> wrote: Koichi, Yes it is related. I used your solution and it did rollback the long running prepared statement. I assumed this might have been causing an unregistered block on the table I was attempting to run queries against or vacuum. Unfortunately though this did not appear to alleviate the situation below. I have a single table affected by this. The other object in the database seem to work as expected. If I try to run a query against this table or vacuum it creates 2 pids like so: 505149 19636 0.0 0.0 192248 6824 ? Ss 18:26 0:00 postgres: bigonc_prd bigonc 72.220.234.184(49302) VACUUM 505149 19889 0.0 0.0 191872 4992 ? Ss 19:03 0:00 postgres: bigonc_prd bigonc 127.0.0.1(57250) VACUUM waiting To me this looks like a process is pawned from the client while another process is started on the server. The client process seems to be causing a blocking like against the server pid. This is the output from PG_LOCKS for the client process: “virtualxid" "3/324” "3/324” 19636 “ExclusiveLock” t t “transactionid" “” 63030 "3/324” 19636 “ExclusiveLock” t f Thanks for your help with the other issue as well, Bill West From: 鈴木 幸市 <ko...@in...<mailto:ko...@in...>> Date: Monday, November 4, 2013 at 6:36 PM To: William West <ww...@uc...<mailto:ww...@uc...>> Cc: "pos...@li...<mailto:pos...@li...>" <pos...@li...<mailto:pos...@li...>>, "pos...@li...<mailto:pos...@li...>" <pos...@li...<mailto:pos...@li...>> Subject: Re: [Postgres-xc-developers] Repairing Corrupt Objects Is this related to the outstanding 2PC three days ago, which I responded with a solution? If not, could you identify corrupt database object? Regards; --- Koichi Suzuki On 2013/11/02, at 7:34, "West, William" <ww...@uc...<mailto:ww...@uc...>> wrote: All, Sorry for the multiple posting but the first emails seemed to get garbled. Hopefully this will post with the full text. Please ignore the other posts with this topic. I have a database set up but it seems fairly unstable. I have a number of objects in the running database but I have one object that is corrupt. I know this because doing any query or dml statement on this table stalls the client. I also notice that when I run statements that encounter the invalid object. It spawns 2 process ID like below: 505149 19373 0.0 0.0 191768 5908 ? Ss 10:30 0:00 postgres: postgresql bigonc [local] VACUUM 505149 19379 1.2 0.0 191972 24776 ? Ss 10:30 0:00 postgres: postgresql bigonc 127.0.0.1(46305) VACUUM waiting These processes never complete and seem like they are locking one another. However there is nothing showing in pg_locks. Does anyone recognize this behavior. Is there anyway to repair or drop a corrupt object in the database? 
Thanks,
Bill West
 |
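As a concrete illustration of the "visit all the nodes directly with psql" step suggested above, a minimal sketch follows; the GID shown is a placeholder, and the queries assume you connect to each coordinator and datanode in turn:

-- List any outstanding prepared (2PC) transactions on this node.
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- Once you are sure the distributed transaction is no longer alive,
-- roll back the leftover entry (substitute the real GID).
ROLLBACK PREPARED 'leftover_gid';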
From: West, W. <ww...@uc...> - 2013-11-05 03:15:40
|
Koichi,

Yes it is related. I used your solution and it did roll back the long-running prepared statement. I assumed this might have been causing an unregistered block on the table I was attempting to run queries against or vacuum. Unfortunately, this did not appear to alleviate the situation below.

I have a single table affected by this. The other objects in the database seem to work as expected. If I try to run a query against this table or vacuum it, two PIDs are created, like so:

505149 19636 0.0 0.0 192248 6824 ? Ss 18:26 0:00 postgres: bigonc_prd bigonc 72.220.234.184(49302) VACUUM
505149 19889 0.0 0.0 191872 4992 ? Ss 19:03 0:00 postgres: bigonc_prd bigonc 127.0.0.1(57250) VACUUM waiting

To me this looks like one process is spawned from the client while another process is started on the server. The client process seems to be holding a blocking lock against the server PID. This is the output from pg_locks for the client process:

"virtualxid"     "3/324"          "3/324"  19636  "ExclusiveLock"  t  t
"transactionid"  ""        63030  "3/324"  19636  "ExclusiveLock"  t  f

Thanks for your help with the other issue as well,

Bill West

From: 鈴木 幸市 <ko...@in...>
Date: Monday, November 4, 2013 at 6:36 PM
To: William West <ww...@uc...>
Cc: "pos...@li..." <pos...@li...>, "pos...@li..." <pos...@li...>
Subject: Re: [Postgres-xc-developers] Repairing Corrupt Objects

Is this related to the outstanding 2PC three days ago, to which I responded with a solution? If not, could you identify the corrupt database object?

Regards;
---
Koichi Suzuki

On 2013/11/02, at 7:34, "West, William" <ww...@uc...> wrote:

All,

Sorry for the multiple postings, but the first emails seemed to get garbled. Hopefully this one will post with the full text. Please ignore the other posts with this topic.

I have a database set up, but it seems fairly unstable. I have a number of objects in the running database, but one object is corrupt. I know this because any query or DML statement on this table stalls the client. I also notice that when I run statements that touch the invalid object, two process IDs are spawned, like below:

505149 19373 0.0 0.0 191768 5908 ? Ss 10:30 0:00 postgres: postgresql bigonc [local] VACUUM
505149 19379 1.2 0.0 191972 24776 ? Ss 10:30 0:00 postgres: postgresql bigonc 127.0.0.1(46305) VACUUM waiting

These processes never complete and seem to be locking one another. However, there is nothing showing in pg_locks. Does anyone recognize this behavior? Is there any way to repair or drop a corrupt object in the database?

Thanks,
Bill West
 |
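Since the second VACUUM backend is shown as "waiting", one generic way to see which backend holds the conflicting lock is a pg_locks self-join. This is only a simplified sketch (it ignores some lock-identity columns such as classid and objid) and is not specific to Postgres-XC:

-- Pair each ungranted lock request with the granted lock it conflicts with.
SELECT w.pid      AS waiting_pid,
       w.mode     AS wanted_mode,
       b.pid      AS blocking_pid,
       b.mode     AS held_mode,
       w.locktype
FROM pg_locks w
JOIN pg_locks b
  ON  w.locktype      = b.locktype
  AND w.database      IS NOT DISTINCT FROM b.database
  AND w.relation      IS NOT DISTINCT FROM b.relation
  AND w.transactionid IS NOT DISTINCT FROM b.transactionid
  AND w.virtualxid    IS NOT DISTINCT FROM b.virtualxid
  AND w.pid <> b.pid
WHERE NOT w.granted
  AND b.granted;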
From: 鈴木 幸市 <ko...@in...> - 2013-11-05 01:36:36
|
Is this related to the outstanding 2PC three days ago, to which I responded with a solution? If not, could you identify the corrupt database object?

Regards;
---
Koichi Suzuki

On 2013/11/02, at 7:34, "West, William" <ww...@uc...> wrote:

All,

Sorry for the multiple postings, but the first emails seemed to get garbled. Hopefully this one will post with the full text. Please ignore the other posts with this topic.

I have a database set up, but it seems fairly unstable. I have a number of objects in the running database, but one object is corrupt. I know this because any query or DML statement on this table stalls the client. I also notice that when I run statements that touch the invalid object, two process IDs are spawned, like below:

505149 19373 0.0 0.0 191768 5908 ? Ss 10:30 0:00 postgres: postgresql bigonc [local] VACUUM
505149 19379 1.2 0.0 191972 24776 ? Ss 10:30 0:00 postgres: postgresql bigonc 127.0.0.1(46305) VACUUM waiting

These processes never complete and seem to be locking one another. However, there is nothing showing in pg_locks. Does anyone recognize this behavior? Is there any way to repair or drop a corrupt object in the database?

Thanks,
Bill West
 |
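One way to help identify which node the corrupt object lives on is to probe the suspect table on each datanode individually from a coordinator. This is only a sketch under assumptions: dn1 and my_suspect_table are placeholders for the real node and table names, and the usual Postgres-XC EXECUTE DIRECT syntax is assumed:

-- Run the probe against one named datanode at a time; a node where the
-- statement hangs or errors is a good candidate for closer inspection.
EXECUTE DIRECT ON (dn1) 'SELECT count(*) FROM my_suspect_table';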
From: Koichi S. <koi...@gm...> - 2013-11-05 01:17:28
|
Mason;

Thanks for this big patch. I understand the intention and would like other members to look at this patch too.

I'd like to know your idea if this can be a part of the TRIGGER improvement?

Regards;
---
Koichi Suzuki

2013/11/2 Mason Sharp <ms...@tr...>

> Please see attached patch that tries to address the issue of XC using CTID
> for replicated updates and deletes when it is evaluated at a coordinator
> instead of being pushed down.
>
> The problem here is that CTID could be referring to a different tuple
> altogether on a different data node, which is what happened for one of our
> Postgres-XC support customers, leading to data issues.
>
> Instead, the patch looks for a primary key or unique index (with the
> primary key preferred) and uses those values instead of CTID.
>
> The patch could be improved further. Extra parameters are set even if not
> used in the execution of the prepared statement sent down to the data nodes.
>
> Regards,
>
> --
> Mason Sharp
> TransLattice - http://www.translattice.com
> Distributed and Clustered Database Solutions
 |
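As a rough illustration of the kind of catalog lookup the patch's approach implies (prefer the primary key, otherwise fall back to a unique index), a query along these lines would find a candidate index for a given table. my_table is a placeholder and this is not the patch's actual code:

-- Pick the primary key if one exists, otherwise any valid unique index.
SELECT i.indexrelid::regclass AS chosen_index,
       i.indisprimary,
       i.indisunique
FROM pg_index i
WHERE i.indrelid = 'my_table'::regclass
  AND i.indisvalid
  AND (i.indisprimary OR i.indisunique)
ORDER BY i.indisprimary DESC
LIMIT 1;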
From: West, W. <ww...@uc...> - 2013-11-01 22:38:26
|
All,

Sorry for the multiple postings, but the first emails seemed to get garbled. Hopefully this one will post with the full text. Please ignore the other posts with this topic.

I have a database set up, but it seems fairly unstable. I have a number of objects in the running database, but one object is corrupt. I know this because any query or DML statement on this table stalls the client. I also notice that when I run statements that touch the invalid object, two process IDs are spawned, like below:

505149 19373 0.0 0.0 191768 5908 ? Ss 10:30 0:00 postgres: postgresql bigonc [local] VACUUM
505149 19379 1.2 0.0 191972 24776 ? Ss 10:30 0:00 postgres: postgresql bigonc 127.0.0.1(46305) VACUUM waiting

These processes never complete and seem to be locking one another. However, there is nothing showing in pg_locks. Does anyone recognize this behavior? Is there any way to repair or drop a corrupt object in the database?

Thanks,
Bill West
 |