From: Mason <ma...@us...> - 2011-07-29 12:39:27

On Fri, Jul 29, 2011 at 4:57 AM, Ashutosh Bapat <ash...@us...> wrote:
> Project "Postgres-XC".
>
> The branch, master has been updated
>       via  01f9d0376f95f08f352f96f0f1bdda8c56b610a8 (commit)
>      from  8ed66ed82f879918a16688e0f0138066afb7ca26 (commit)
>
> - Log -----------------------------------------------------------------
> http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commitdiff;h=
>
> commit 01f9d0376f95f08f352f96f0f1bdda8c56b610a8
> Author: Ashutosh Bapat <ash...@en...>
> Date:   Fri Jul 29 13:50:49 2011 +0530
>
>     Allow commands PREPARE and EXECUTE to prepare a statement and execute it resp.
>     DEALLOCATE is already allowed.
>
>     Adds support for parameterised queries. While setting the statement names in
>     RemoteQuery node, we also set the parameter types. These are used while sending
>     Parse message to data nodes. The infrastructure to bind the parameter values was
>     already there.

Great. Is this a general solution for multi-step queries, too?

Prepared statements are also related to stored function support. Is there any
impact on support for that?

Thanks,

Mason
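For readers following along, the user-visible feature discussed here is the standard PostgreSQL prepared-statement interface, now usable through an XC Coordinator. A minimal session sketch (the table and parameter values are hypothetical, not from the commit):

```sql
-- Prepare once: per the commit, the Coordinator now also records the
-- parameter types and sends them in the Parse message to the data nodes.
PREPARE get_order(int) AS
    SELECT * FROM orders WHERE order_id = $1;

-- Execute with a bound parameter value (the binding infrastructure
-- already existed before this commit).
EXECUTE get_order(42);

-- DEALLOCATE was already supported before this commit.
DEALLOCATE get_order;
```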
From: Ashutosh B. <ash...@us...> - 2011-07-29 08:57:30

Project "Postgres-XC".

The branch, master has been updated
      via  01f9d0376f95f08f352f96f0f1bdda8c56b610a8 (commit)
     from  8ed66ed82f879918a16688e0f0138066afb7ca26 (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commitdiff;h=

commit 01f9d0376f95f08f352f96f0f1bdda8c56b610a8
Author: Ashutosh Bapat <ash...@en...>
Date:   Fri Jul 29 13:50:49 2011 +0530

    Allow commands PREPARE and EXECUTE to prepare a statement and execute it resp.
    DEALLOCATE is already allowed.

    Adds support for parameterised queries. While setting the statement names in
    RemoteQuery node, we also set the parameter types. These are used while sending
    Parse message to data nodes. The infrastructure to bind the parameter values was
    already there.

M src/backend/commands/prepare.c
M src/backend/nodes/copyfuncs.c
M src/backend/pgxc/pool/execRemote.c
M src/backend/pgxc/pool/pgxcnode.c
M src/backend/tcop/utility.c
M src/include/pgxc/pgxcnode.h
M src/include/pgxc/planner.h
M src/test/regress/expected/domain_1.out
M src/test/regress/expected/functional_deps_1.out
M src/test/regress/expected/guc_1.out
M src/test/regress/expected/plancache_1.out
D src/test/regress/expected/prepare_1.out
M src/test/regress/output/tablespace_1.source

-----------------------------------------------------------------------
Summary of changes:
 src/backend/commands/prepare.c                  |   37 +++---
 src/backend/nodes/copyfuncs.c                   |    2 +
 src/backend/pgxc/pool/execRemote.c              |    2 +
 src/backend/pgxc/pool/pgxcnode.c                |   31 ++++-
 src/backend/tcop/utility.c                      |   12 --
 src/include/pgxc/pgxcnode.h                     |    1 +
 src/include/pgxc/planner.h                      |    5 +
 src/test/regress/expected/domain_1.out          |   15 ++-
 src/test/regress/expected/functional_deps_1.out |   11 +-
 src/test/regress/expected/guc_1.out             |    5 +-
 src/test/regress/expected/plancache_1.out       |   85 ++++++--------
 src/test/regress/expected/prepare_1.out         |  126 ----------------------
 src/test/regress/output/tablespace_1.source     |    5 +-
 13 files changed, 104 insertions(+), 233 deletions(-)
 delete mode 100644 src/test/regress/expected/prepare_1.out

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-28 04:21:45

Project "Postgres-XC".

The branch, master has been updated
      via  8ed66ed82f879918a16688e0f0138066afb7ca26 (commit)
     from  85d499e14b8e69823e4409627c893bb19aed91d9 (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commitdiff;h=

commit 8ed66ed82f879918a16688e0f0138066afb7ca26
Author: Michael P <mic...@us...>
Date:   Thu Jul 28 13:23:50 2011 +0900

    Correction for regression test select_implicit

    SELECT INTO is not supported yet, so this test passes.

A src/test/regress/expected/select_implicit_3.out

-----------------------------------------------------------------------
Summary of changes:
 .../{select_implicit.out => select_implicit_3.out} |   25 ++++++++------------
 1 files changed, 10 insertions(+), 15 deletions(-)
 copy src/test/regress/expected/{select_implicit.out => select_implicit_3.out} (94%)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-28 04:03:28

Project "Postgres-XC".

The branch, master has been updated
      via  85d499e14b8e69823e4409627c893bb19aed91d9 (commit)
     from  1e04470efe097fdaa61ebdff858d7394537d6f9e (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commitdiff;h=

commit 85d499e14b8e69823e4409627c893bb19aed91d9
Author: Michael P <mic...@us...>
Date:   Thu Jul 28 13:05:27 2011 +0900

    Correction of regression test update

    A part of the test was failing due to a forgotten ORDER BY.

M src/test/regress/expected/update_1.out
M src/test/regress/sql/update.sql

-----------------------------------------------------------------------
Summary of changes:
 src/test/regress/expected/update_1.out |    4 ++--
 src/test/regress/sql/update.sql        |    2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-27 06:39:14

Project "Postgres-XC".

The branch, master has been updated
      via  1e04470efe097fdaa61ebdff858d7394537d6f9e (commit)
     from  93d93ce4984b3868d80a6eb6b58e67abf99ce50f (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=1e04470efe097fdaa61ebdff858d7394537d6f9e

commit 1e04470efe097fdaa61ebdff858d7394537d6f9e
Author: Michael P <mic...@us...>
Date:   Wed Jul 27 15:37:09 2011 +0900

    Support for TEMPORARY sequences

    A temporary sequence is created only on the local Coordinator. It doesn't
    need any interaction with GTM as it only lives while the session is up.
    The sequence algorithms for nextval, currval and lastval have been changed
    in accordance with temporary sequences.

    Moreover, this commit cleans up a part of the code used by XC in utility.c
    to have a more effective lookup on objects when determining the node types
    utility queries should be sent to.

    COMMENTs are changed to be executed only on Coordinators. Comments on
    temporary objects need some special handling, but this is not targeted yet.

M src/backend/catalog/dependency.c
M src/backend/commands/sequence.c
M src/backend/commands/tablecmds.c
M src/backend/pgxc/plan/planner.c
M src/backend/tcop/utility.c
M src/include/commands/sequence.h
M src/include/pgxc/planner.h
A src/test/regress/expected/drop_if_exists_1.out
M src/test/regress/expected/plancache_1.out
M src/test/regress/expected/sequence_2.out

-----------------------------------------------------------------------
Summary of changes:
 src/backend/catalog/dependency.c                   |   15 +-
 src/backend/commands/sequence.c                    |  109 +++++++--
 src/backend/commands/tablecmds.c                   |   14 +-
 src/backend/pgxc/plan/planner.c                    |    4 +
 src/backend/tcop/utility.c                         |  233 ++++++++++++++------
 src/include/commands/sequence.h                    |    1 +
 src/include/pgxc/planner.h                         |    3 +-
 .../{drop_if_exists.out => drop_if_exists_1.out}   |    6 +-
 src/test/regress/expected/plancache_1.out          |    5 -
 src/test/regress/expected/sequence_2.out           |   12 +-
 10 files changed, 281 insertions(+), 121 deletions(-)
 copy src/test/regress/expected/{drop_if_exists.out => drop_if_exists_1.out} (94%)

hooks/post-receive
--
Postgres-XC
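A sketch of the behaviour this commit describes, using standard sequence syntax (the sequence name is illustrative): a temporary sequence exists only on the local Coordinator and is dropped at session end, so its value functions need no GTM round trip:

```sql
-- Created on the local Coordinator only; per the commit, no GTM interaction.
CREATE TEMPORARY SEQUENCE tmp_seq;

SELECT nextval('tmp_seq');   -- served locally
SELECT currval('tmp_seq');
SELECT lastval();

-- The sequence disappears automatically when the session ends.
```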
From: Ashutosh B. <ash...@us...> - 2011-07-27 05:58:05

Project "Postgres-XC".

The branch, master has been updated
      via  93d93ce4984b3868d80a6eb6b58e67abf99ce50f (commit)
     from  f0f8a4b0680532a89c364c2294d3ca3a07ef7fea (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=93d93ce4984b3868d80a6eb6b58e67abf99ce50f

commit 93d93ce4984b3868d80a6eb6b58e67abf99ce50f
Author: Ashutosh Bapat <ash...@en...>
Date:   Wed Jul 27 11:12:22 2011 +0530

    The functions do_query() and ExecRemoteQuery() contained duplicated code.
    Gathered this code into separate functions and called those functions
    instead of duplicating the code. The members expr and relid of ExecNodes
    are renamed to en_expr and en_relid.

M src/backend/nodes/copyfuncs.c
M src/backend/optimizer/plan/createplan.c
M src/backend/parser/analyze.c
M src/backend/pgxc/plan/planner.c
M src/backend/pgxc/pool/execRemote.c
M src/include/pgxc/locator.h

-----------------------------------------------------------------------
Summary of changes:
 src/backend/nodes/copyfuncs.c           |    4 +-
 src/backend/optimizer/plan/createplan.c |    8 +-
 src/backend/parser/analyze.c            |    2 +-
 src/backend/pgxc/plan/planner.c         |   12 +-
 src/backend/pgxc/pool/execRemote.c      |  310 +++++++++++--------------------
 src/include/pgxc/locator.h              |    4 +-
 6 files changed, 119 insertions(+), 221 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-25 00:34:15

Project "Postgres-XC".

The branch, master has been updated
      via  f0f8a4b0680532a89c364c2294d3ca3a07ef7fea (commit)
     from  19d1d645290457d8ec8c3ec0e4ff1fcc78b43377 (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=f0f8a4b0680532a89c364c2294d3ca3a07ef7fea

commit f0f8a4b0680532a89c364c2294d3ca3a07ef7fea
Author: Michael P <mic...@us...>
Date:   Mon Jul 25 09:35:24 2011 +0900

    Addition of forgotten PGXC flags in xact.c

    This made the code inconsistent with Postgres 9.1.

    Report and patch by sch19831106

M src/backend/access/transam/xact.c

-----------------------------------------------------------------------
Summary of changes:
 src/backend/access/transam/xact.c |   13 +++++++++++--
 1 files changed, 11 insertions(+), 2 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-21 06:11:03

Project "Postgres-XC".

The branch, master has been updated
      via  19d1d645290457d8ec8c3ec0e4ff1fcc78b43377 (commit)
     from  0b8135db11723d586033bf45de94705003ef2bf6 (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=19d1d645290457d8ec8c3ec0e4ff1fcc78b43377

commit 19d1d645290457d8ec8c3ec0e4ff1fcc78b43377
Author: Michael P <mic...@us...>
Date:   Thu Jul 21 14:45:03 2011 +0900

    Reactivate preferred_data_nodes

    preferred_data_nodes is a GUC parameter that allows the user to set a list
    of node numbers in a string to determine on which node a read operation
    should be done for replicated tables.

    It happened that even if the parameter was set, the list of preferred nodes
    was not created from the GUC string, so it had no effect.

    It has been found that this fix improves performance of the whole cluster
    system by up to 30%.

M src/backend/pgxc/locator/locator.c

-----------------------------------------------------------------------
Summary of changes:
 src/backend/pgxc/locator/locator.c |   54 +++++++++++++++++++++++++++++++++--
 1 files changed, 50 insertions(+), 4 deletions(-)

hooks/post-receive
--
Postgres-XC
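As the commit message says, the parameter takes a string listing node numbers. A hypothetical configuration sketch (the exact list syntax and the node numbering are installation-specific assumptions, not taken from the commit):

```sql
-- In postgresql.conf on a Coordinator, or per session with SET:
-- reads on replicated tables will favour the listed data nodes.
SET preferred_data_nodes = '1, 3';
```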
From: Michael P. <mic...@us...> - 2011-07-21 06:10:24

Project "Postgres-XC".

The branch, REL0_9_5_STABLE has been updated
      via  c2d6ca92c3ee26eddbddb79d0c7fbab9f960c884 (commit)
     from  9c9346e33d899a9aaebaa1e1a6d5f79bb1d8d49d (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=c2d6ca92c3ee26eddbddb79d0c7fbab9f960c884

commit 0b8135db11723d586033bf45de94705003ef2bf6
Author: Ashutosh Bapat <ash...@en...>
Date:   Mon Jul 18 17:42:25 2011 +0530

    If the havingQuals in query contain aggregates, the aggregates and the VARs
    not included in the expression trees rooted in those aggregates are included
    in the targetlist to be pushed to the data node. The aggregates are finalised
    at the coordinator and havingQual is evaluated. The same technique is used to
    push aggregates and VARs involved in the expressions in the targetlist to the
    data nodes. With this patch, we apply the grouping optimizations to the
    queries containing having clause.

M src/backend/optimizer/plan/createplan.c
M src/backend/pgxc/plan/planner.c
M src/backend/pgxc/pool/postgresql_fdw.c
M src/include/pgxc/postgresql_fdw.h
M src/test/regress/expected/xc_groupby.out
M src/test/regress/expected/xc_having.out

-----------------------------------------------------------------------
Summary of changes:
 src/backend/pgxc/locator/locator.c |   54 +++++++++++++++++++++++++++++++++--
 1 files changed, 50 insertions(+), 4 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Ashutosh B. <ash...@us...> - 2011-07-18 12:23:24

Project "Postgres-XC".

The branch, master has been updated
      via  0b8135db11723d586033bf45de94705003ef2bf6 (commit)
     from  5e13a79645239f25615dfc9ee1d177f44e0bbc09 (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=0b8135db11723d586033bf45de94705003ef2bf6

commit 0b8135db11723d586033bf45de94705003ef2bf6
Author: Ashutosh Bapat <ash...@en...>
Date:   Mon Jul 18 17:42:25 2011 +0530

    If the havingQuals in a query contain aggregates, the aggregates and the VARs
    not included in the expression trees rooted in those aggregates are included
    in the targetlist to be pushed to the data node. The aggregates are finalised
    at the coordinator and the havingQual is evaluated. The same technique is used
    to push aggregates and VARs involved in the expressions in the targetlist to
    the data nodes. With this patch, we apply the grouping optimizations to
    queries containing a HAVING clause.

M src/backend/optimizer/plan/createplan.c
M src/backend/pgxc/plan/planner.c
M src/backend/pgxc/pool/postgresql_fdw.c
M src/include/pgxc/postgresql_fdw.h
M src/test/regress/expected/xc_groupby.out
M src/test/regress/expected/xc_having.out

-----------------------------------------------------------------------
Summary of changes:
 src/backend/optimizer/plan/createplan.c  |  393 +++++++++++++++++------
 src/backend/pgxc/plan/planner.c          |    2 +-
 src/backend/pgxc/pool/postgresql_fdw.c   |   83 +++++-
 src/include/pgxc/postgresql_fdw.h        |   10 +-
 src/test/regress/expected/xc_groupby.out |  140 ++++-----
 src/test/regress/expected/xc_having.out  |  514 +++++++++++++-----------------
 6 files changed, 657 insertions(+), 485 deletions(-)

hooks/post-receive
--
Postgres-XC
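An illustration of the class of query the patch now optimizes (the table and columns are hypothetical): the aggregate appearing in the HAVING clause is added to the targetlist pushed to the data nodes, and the coordinator finalises it before evaluating the qual:

```sql
SELECT dept_id, avg(salary)
FROM   emp
GROUP  BY dept_id
HAVING avg(salary) > 50000;  -- aggregate in havingQual pushed to data nodes,
                             -- finalised and evaluated on the coordinator
```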
From: Michael P. <mic...@us...> - 2011-07-15 07:12:33

Project "Postgres-XC".

The branch, REL0_9_5_STABLE has been updated
      via  9c9346e33d899a9aaebaa1e1a6d5f79bb1d8d49d (commit)
     from  694253102c045e2fc3d145d6718b81b5977004b3 (commit)

- Log -----------------------------------------------------------------
http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=commit;h=9c9346e33d899a9aaebaa1e1a6d5f79bb1d8d49d

commit 5e13a79645239f25615dfc9ee1d177f44e0bbc09
Author: Michael P <mic...@us...>
Date:   Fri Jul 15 15:30:12 2011 +0900

    Fix for bug 3366724: Autovacuum warning on datanode

    This was caused by the autovacuum launcher on a Datanode. As autovacuum is
    not authorized to get a GXID and snapshot from GTM, bypass this warning.
    A global snapshot is not necessary for the launcher, as autovacuum analyze
    and worker are in charge of table cleanup.

M src/backend/storage/ipc/procarray.c

-----------------------------------------------------------------------
Summary of changes:
 src/backend/storage/ipc/procarray.c |   19 +++++++++++++------
 1 files changed, 13 insertions(+), 6 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-15 06:43:55

Project "Postgres-XC".

The branch, master has been updated
      via  5e13a79645239f25615dfc9ee1d177f44e0bbc09 (commit)
     from  fb2a3a864dee367f2cfe76a3a4fff5e02cb94fc1 (commit)

- Log -----------------------------------------------------------------

commit 5e13a79645239f25615dfc9ee1d177f44e0bbc09
Author: Michael P <mic...@us...>
Date:   Fri Jul 15 15:30:12 2011 +0900

    Fix for bug 3366724: Autovacuum warning on datanode

    This was caused by the autovacuum launcher on a Datanode. As autovacuum is
    not authorized to get a GXID and snapshot from GTM, bypass this warning.
    A global snapshot is not necessary for the launcher, as autovacuum analyze
    and worker are in charge of table cleanup.

diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index 5477df6..783b352 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -1243,18 +1243,25 @@ GetSnapshotData(Snapshot snapshot)
 		/* else fallthrough */
 	}
 
-	/* If we have no snapshot, we will use a local one.
+	/*
+	 * If we have no snapshot, we will use a local one.
 	 * If we are in normal mode, we output a warning though.
 	 * We currently fallback and use a local one at initdb time,
 	 * as well as when a new connection occurs.
+	 * This is also the case for autovacuum launcher.
+	 *
 	 * IsPostmasterEnvironment - checks for initdb
 	 * IsNormalProcessingMode() - checks for new connections
+	 * IsAutoVacuumLauncherProcess - checks for autovacuum launcher process
 	 */
-	if (IS_PGXC_DATANODE && snapshot_source == SNAPSHOT_UNDEFINED
-		&& IsPostmasterEnvironment && IsNormalProcessingMode())
-	{
-		elog(WARNING, "Do not have a GTM snapshot available");
-	}
+	if (IS_PGXC_DATANODE &&
+		snapshot_source == SNAPSHOT_UNDEFINED &&
+		IsPostmasterEnvironment &&
+		IsNormalProcessingMode() &&
+		!IsAutoVacuumLauncherProcess())
+	{
+		elog(WARNING, "Do not have a GTM snapshot available");
+	}
 #endif
 
-----------------------------------------------------------------------
Summary of changes:
 src/backend/storage/ipc/procarray.c |   19 +++++++++++++------
 1 files changed, 13 insertions(+), 6 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-14 05:05:38

Project "Postgres-XC".

The branch, master has been updated
      via  fb2a3a864dee367f2cfe76a3a4fff5e02cb94fc1 (commit)
     from  f0f4ae5fddf646b1a41dd4d512fccdf9c9587254 (commit)

- Log -----------------------------------------------------------------

commit fb2a3a864dee367f2cfe76a3a4fff5e02cb94fc1
Author: Michael P <mic...@us...>
Date:   Thu Jul 14 13:56:00 2011 +0900

    Performance issue with snapshot processing

    This commit solves an issue with snapshot processing in the case where the
    cluster was used under multiple Coordinators. When a Coordinator/Coordinator
    connection was initialized, the backend coordinator initialized a transaction
    to GTM that was never committed. A consequence of that was a Snapshot xmin
    that remained at a constant value. Another consequence was the recent global
    Xmin value being set to a lower value, making autovacuum have absolutely no
    effect. This fix improves performance of the whole cluster by making
    autovacuum correctly remove old tuples on long-period runs.

diff --git a/src/backend/access/transam/varsup.c b/src/backend/access/transam/varsup.c
index a5ff753..1c684f5 100644
--- a/src/backend/access/transam/varsup.c
+++ b/src/backend/access/transam/varsup.c
@@ -203,7 +203,8 @@ GetNewTransactionId(bool isSubXact)
 			increment_xid = false;
 			elog(DEBUG1, "xid (%d) does not follow ShmemVariableCache->nextXid (%d)",
 				 xid, ShmemVariableCache->nextXid);
-		} else
+		}
+		else
 			ShmemVariableCache->nextXid = xid;
 	}
 	else
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index 2de6db4..5477df6 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -1234,7 +1234,8 @@ GetSnapshotData(Snapshot snapshot)
 		if (GetSnapshotDataDataNode(snapshot))
 			return snapshot;
 		/* else fallthrough */
-	} else if (IS_PGXC_COORDINATOR && !IsConnFromCoord())
+	}
+	else if (IS_PGXC_COORDINATOR && !IsConnFromCoord() && IsNormalProcessingMode())
 	{
 		/* Snapshot has ever been received from remote Coordinator */
 		if (GetSnapshotDataCoordinator(snapshot))
diff --git a/src/gtm/main/gtm_txn.c b/src/gtm/main/gtm_txn.c
index 7d13076..37b40a6 100644
--- a/src/gtm/main/gtm_txn.c
+++ b/src/gtm/main/gtm_txn.c
@@ -1363,7 +1363,7 @@ ProcessCommitTransactionCommand(Port *myport, StringInfo message)
 	pq_getmsgend(message);
 
 	oldContext = MemoryContextSwitchTo(TopMemoryContext);
-	
+
 	elog(LOG, "Committing transaction id %u", gxid);
 
 	/*
@@ -1455,6 +1455,8 @@ ProcessCommitPreparedTransactionCommand(Port *myport, StringInfo message)
 
 	oldContext = MemoryContextSwitchTo(TopMemoryContext);
 
+	elog(LOG, "Committing: prepared id %u and commit prepared id %u ", gxid[0], gxid[1]);
+
 	/*
 	 * Commit the prepared transaction.
 	 */
-----------------------------------------------------------------------
Summary of changes:
 src/backend/access/transam/varsup.c |    3 ++-
 src/backend/storage/ipc/procarray.c |    3 ++-
 src/gtm/main/gtm_txn.c              |    4 +++-
 3 files changed, 7 insertions(+), 3 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-14 05:04:47

Project "Postgres-XC".

The branch, REL0_9_5_STABLE has been updated
      via  694253102c045e2fc3d145d6718b81b5977004b3 (commit)
     from  e992f2cb8debeca4123d3137e48949071342c5c0 (commit)

- Log -----------------------------------------------------------------

commit 694253102c045e2fc3d145d6718b81b5977004b3
Author: Michael P <mic...@us...>
Date:   Thu Jul 14 13:56:00 2011 +0900

    Performance issue with snapshot processing

    This commit solves an issue with snapshot processing in the case where the
    cluster was used under multiple Coordinators. When a Coordinator/Coordinator
    connection was initialized, the backend coordinator initialized a transaction
    to GTM that was never committed. A consequence of that was a Snapshot xmin
    that remained at a constant value. Another consequence was the recent global
    Xmin value being set to a lower value, making autovacuum have absolutely no
    effect. This fix improves performance of the whole cluster by making
    autovacuum correctly remove old tuples on long-period runs.

diff --git a/src/backend/access/transam/varsup.c b/src/backend/access/transam/varsup.c
index 4bd8003..cdef834 100644
--- a/src/backend/access/transam/varsup.c
+++ b/src/backend/access/transam/varsup.c
@@ -203,7 +203,8 @@ GetNewTransactionId(bool isSubXact)
 			increment_xid = false;
 			elog(DEBUG1, "xid (%d) does not follow ShmemVariableCache->nextXid (%d)",
 				 xid, ShmemVariableCache->nextXid);
-		} else
+		}
+		else
 			ShmemVariableCache->nextXid = xid;
 	}
 	else
diff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c
index 741f658..d307b42 100644
--- a/src/backend/storage/ipc/procarray.c
+++ b/src/backend/storage/ipc/procarray.c
@@ -1227,7 +1227,8 @@ GetSnapshotData(Snapshot snapshot)
 		if (GetSnapshotDataDataNode(snapshot))
 			return snapshot;
 		/* else fallthrough */
-	} else if (IS_PGXC_COORDINATOR && !IsConnFromCoord())
+	}
+	else if (IS_PGXC_COORDINATOR && !IsConnFromCoord() && IsNormalProcessingMode())
 	{
 		/* Snapshot has ever been received from remote Coordinator */
 		if (GetSnapshotDataCoordinator(snapshot))
diff --git a/src/gtm/main/gtm_txn.c b/src/gtm/main/gtm_txn.c
index 7d13076..37b40a6 100644
--- a/src/gtm/main/gtm_txn.c
+++ b/src/gtm/main/gtm_txn.c
@@ -1363,7 +1363,7 @@ ProcessCommitTransactionCommand(Port *myport, StringInfo message)
 	pq_getmsgend(message);
 
 	oldContext = MemoryContextSwitchTo(TopMemoryContext);
-	
+
 	elog(LOG, "Committing transaction id %u", gxid);
 
 	/*
@@ -1455,6 +1455,8 @@ ProcessCommitPreparedTransactionCommand(Port *myport, StringInfo message)
 
 	oldContext = MemoryContextSwitchTo(TopMemoryContext);
 
+	elog(LOG, "Committing: prepared id %u and commit prepared id %u ", gxid[0], gxid[1]);
+
 	/*
 	 * Commit the prepared transaction.
 	 */
-----------------------------------------------------------------------
Summary of changes:
 src/backend/access/transam/varsup.c |    3 ++-
 src/backend/storage/ipc/procarray.c |    3 ++-
 src/gtm/main/gtm_txn.c              |    4 +++-
 3 files changed, 7 insertions(+), 3 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-12 00:07:32

Project "Postgres-XC".

The branch, master has been updated
      via  f0f4ae5fddf646b1a41dd4d512fccdf9c9587254 (commit)
     from  7fea05bf15f8f214c2ffffd72d80ae4912af4ad6 (commit)

- Log -----------------------------------------------------------------

commit f0f4ae5fddf646b1a41dd4d512fccdf9c9587254
Author: Michael P <mic...@us...>
Date:   Tue Jul 12 09:07:41 2011 +0900

    Fix for WHERE planning when analyzing foreign quals

    The analysis of foreign quals is now limited to WHERE clauses.
    This was causing errors with DBT-1 for queries like:
    UPDATE table SET column1 = now() WHERE column2 > 1;
    by causing the UPDATE to occur on the local Coordinator.

diff --git a/src/backend/pgxc/plan/planner.c b/src/backend/pgxc/plan/planner.c
index 098f8c9..f2ee26f 100644
--- a/src/backend/pgxc/plan/planner.c
+++ b/src/backend/pgxc/plan/planner.c
@@ -1239,12 +1239,6 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context)
 				 */
 				return false;
 			}
-			else
-			{
-				/* Check if this node can be pushed down. */
-				if (!is_foreign_qual((Node *) arg2))
-					return true;
-			}
 
 			/*
 			 * Check if it is an expression like pcol = expr, where pcol is
@@ -1274,22 +1268,9 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context)
 					return false;
 				}
 			}
-			else
-			{
-				/* Check if this node can be pushed down. */
-				if (!is_foreign_qual((Node *) arg2))
-					return true;
-			}
 		}
 	}
 
-	/* See if the function is immutable, otherwise give up */
-	if (IsA(expr_node, FuncExpr))
-	{
-		if (!is_foreign_qual((Node *) expr_node))
-			return true;
-	}
-
 	/* Handle subquery */
 	if (IsA(expr_node, SubLink))
 	{
@@ -1660,7 +1641,8 @@ get_plan_nodes_walker(Node *query_node, XCWalkerContext *context)
 	}
 
 	/* Examine the WHERE clause, too */
-	if (examine_conditions_walker(query->jointree->quals, context))
+	if (examine_conditions_walker(query->jointree->quals, context) ||
+		!is_foreign_qual(query->jointree->quals))
 		return true;
 
 	if (context->query_step->exec_nodes)
-----------------------------------------------------------------------
Summary of changes:
 src/backend/pgxc/plan/planner.c |   22 ++--------------------
 1 files changed, 2 insertions(+), 20 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-11 02:17:00

Project "Postgres-XC".

The branch, master has been updated
      via  7fea05bf15f8f214c2ffffd72d80ae4912af4ad6 (commit)
     from  4eb6e0b9b6d0a1d75ade7c80b2d22f64e8a84544 (commit)

- Log -----------------------------------------------------------------

commit 7fea05bf15f8f214c2ffffd72d80ae4912af4ad6
Author: Michael P <mic...@us...>
Date:   Mon Jul 11 11:18:35 2011 +0900

    Fix for "make -j" broken after 9.1 merge

    Patch by Wang Diancheng

diff --git a/src/Makefile b/src/Makefile
index 65ea50e..a5b6cd3 100644
--- a/src/Makefile
+++ b/src/Makefile
@@ -16,11 +16,11 @@ SUBDIRS = \
 	port \
 	timezone \
 	gtm \
+	interfaces \
 	backend \
 	backend/utils/mb/conversion_procs \
 	backend/snowball \
 	include \
-	interfaces \
 	backend/replication/libpqwalreceiver \
 	bin \
 	pl \
-----------------------------------------------------------------------
Summary of changes:
 src/Makefile |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

hooks/post-receive
--
Postgres-XC
From: Michael P. <mic...@us...> - 2011-07-08 07:01:19

Project "Postgres-XC".

The branch, master has been updated
      via  4eb6e0b9b6d0a1d75ade7c80b2d22f64e8a84544 (commit)
     from  28f420edd65be1966f96bce37893a93f5cf10470 (commit)

- Log -----------------------------------------------------------------

commit 4eb6e0b9b6d0a1d75ade7c80b2d22f64e8a84544
Author: Michael P <mic...@us...>
Date:   Fri Jul 8 15:55:21 2011 +0900

    Improvement of expression push-down analysis to foreign servers

    This patch controls expressions by using the functionalities of XC's foreign
    data wrapper analysis functions. Basically, immutable functions/operators can
    be safely pushed down, as well as arrays, constant values, booleans, external
    parameters, etc. For functions and operators, the expression can be pushed
    down if it has the same meaning for the local and foreign server.

    This commit also cleans up a part of the PGXC planner.

diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 5022e84..7b28b89 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -39,6 +39,7 @@
 #ifdef PGXC
 #include "pgxc/pgxc.h"
 #include "pgxc/planner.h"
+#include "pgxc/postgresql_fdw.h"
 #include "access/sysattr.h"
 #include "utils/builtins.h"
 #include "utils/syscache.h"
@@ -191,7 +192,6 @@ static Material *make_material(Plan *lefttree);
 #ifdef PGXC
 static void findReferencedVars(List *parent_vars, Plan *plan, List **out_tlist, Relids *out_relids);
-extern bool is_foreign_qual(Node *clause);
 static void create_remote_clause_expr(PlannerInfo *root, Plan *parent,
 		StringInfo clauses, List *qual, RemoteQuery *scan);
 static void create_remote_expr(PlannerInfo *root, Plan *parent, StringInfo expr,
diff --git a/src/backend/pgxc/plan/planner.c b/src/backend/pgxc/plan/planner.c
index 1cd33d1..098f8c9 100644
--- a/src/backend/pgxc/plan/planner.c
+++ b/src/backend/pgxc/plan/planner.c
@@ -36,6 +36,7 @@
 #include "pgxc/pgxc.h"
 #include "pgxc/locator.h"
 #include "pgxc/planner.h"
+#include "pgxc/postgresql_fdw.h"
 #include "tcop/pquery.h"
 #include "utils/acl.h"
 #include "utils/builtins.h"
@@ -170,7 +171,6 @@ static int handle_limit_offset(RemoteQuery *query_step, Query *query, PlannedStm
 static void InitXCWalkerContext(XCWalkerContext *context);
 static RemoteQuery *makeRemoteQuery(void);
 static void validate_part_col_updatable(const Query *query);
-static bool is_pgxc_safe_func(Oid funcid);
 
 /*
@@ -1239,6 +1239,13 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context)
 				 */
 				return false;
 			}
+			else
+			{
+				/* Check if this node can be pushed down. */
+				if (!is_foreign_qual((Node *) arg2))
+					return true;
+			}
+
 			/*
 			 * Check if it is an expression like pcol = expr, where pcol is
 			 * a partitioning column of the rel1 and planner could not
@@ -1267,13 +1274,19 @@ examine_conditions_walker(Node *expr_node, XCWalkerContext *context)
 					return false;
 				}
 			}
+			else
+			{
+				/* Check if this node can be pushed down. */
+				if (!is_foreign_qual((Node *) arg2))
+					return true;
+			}
 		}
 	}
 
 	/* See if the function is immutable, otherwise give up */
 	if (IsA(expr_node, FuncExpr))
 	{
-		if (!is_pgxc_safe_func(((FuncExpr*) expr_node)->funcid))
+		if (!is_foreign_qual((Node *) expr_node))
 			return true;
 	}
@@ -3143,64 +3156,6 @@ validate_part_col_updatable(const Query *query)
 	}
 }
 
-/*
- * See if it is safe to use this function in single step.
- *
- * Based on is_immutable_func from postgresql_fdw.c
- * We add an exeption for base postgresql functions, to
- * allow now() and others to still execute as part of single step
- * queries.
- *
- * PGXCTODO - we currently make the false assumption that immutable
- * functions will not write to the database. This could be addressed
- * by either a more thorough analysis of functions at
- * creation time or additional tags at creation time (preferably
- * in standard PostgreSQL). Ideally such functionality could be
- * committed back to standard PostgreSQL.
- */
-bool
-is_pgxc_safe_func(Oid funcid)
-{
-	HeapTuple	tp;
-	bool		isnull;
-	Datum		datum;
-	bool		ret_val = false;
-
-	tp = SearchSysCache(PROCOID, ObjectIdGetDatum(funcid), 0, 0, 0);
-	if (!HeapTupleIsValid(tp))
-		elog(ERROR, "cache lookup failed for function %u", funcid);
-
-#ifdef DEBUG_FDW
-	/* print function name and its immutability */
-	{
-		char	   *proname;
-		datum = SysCacheGetAttr(PROCOID, tp, Anum_pg_proc_proname, &isnull);
-		proname = pstrdup(DatumGetName(datum)->data);
-		elog(DEBUG1, "func %s(%u) is%s immutable", proname, funcid,
-			 (DatumGetChar(datum) == PROVOLATILE_IMMUTABLE) ? "" : " not");
-		pfree(proname);
-	}
-#endif
-
-	datum = SysCacheGetAttr(PROCOID, tp, Anum_pg_proc_provolatile, &isnull);
-
-	if (DatumGetChar(datum) == PROVOLATILE_IMMUTABLE)
-		ret_val = true;
-	/*
-	 * Also allow stable and volatile ones that are in the PG_CATALOG_NAMESPACE
-	 * this allows now() and others that do not update the database
-	 * PGXCTODO - examine default functions carefully for those that may
-	 * write to the database.
-	 */
-	else
-	{
-		datum = SysCacheGetAttr(PROCOID, tp, Anum_pg_proc_pronamespace, &isnull);
-		if (DatumGetObjectId(datum) == PG_CATALOG_NAMESPACE)
-			ret_val = true;
-	}
-	ReleaseSysCache(tp);
-	return ret_val;
-}
 
 /*
  * GetHashExecNodes -
diff --git a/src/backend/pgxc/pool/postgresql_fdw.c b/src/backend/pgxc/pool/postgresql_fdw.c
index 93c30b2..1a158a6 100644
--- a/src/backend/pgxc/pool/postgresql_fdw.c
+++ b/src/backend/pgxc/pool/postgresql_fdw.c
@@ -10,30 +10,25 @@
  *
  *-------------------------------------------------------------------------
  */
-#include "postgres.h"
-
+#include "pgxc/postgresql_fdw.h"
 #include "catalog/pg_operator.h"
 #include "catalog/pg_proc.h"
 #include "funcapi.h"
-//#include "libpq-fe.h"
 #include "mb/pg_wchar.h"
 #include "miscadmin.h"
 #include "nodes/nodeFuncs.h"
 #include "nodes/makefuncs.h"
 #include "optimizer/clauses.h"
 #include "parser/scansup.h"
-#include "pgxc/execRemote.h"
 #include "utils/builtins.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/syscache.h"
 
-//#include "dblink.h"
-
 #define DEBUG_FDW
 
 /*
- * WHERE caluse optimization level
+ * WHERE clause optimization level
  */
 #define EVAL_QUAL_LOCAL 0		/* evaluate none in foreign, all in local */
 #define EVAL_QUAL_BOTH	1		/* evaluate some in foreign, all in local */
@@ -41,14 +36,8 @@
 #define OPTIMIZE_WHERE_CLAUSE EVAL_QUAL_FOREIGN
 
-
 /* deparse SQL from the request */
-bool is_immutable_func(Oid funcid);
-bool is_foreign_qual(Node *node);
 static bool foreign_qual_walker(Node *node, void *context);
-char *deparseSql(RemoteQueryState *scanstate);
 
 /*
  * Check whether the function is IMMUTABLE.
diff --git a/src/include/pgxc/planner.h b/src/include/pgxc/planner.h index 1a18c9f..4228fbf 100644 --- a/src/include/pgxc/planner.h +++ b/src/include/pgxc/planner.h @@ -138,8 +138,6 @@ extern PlannedStmt *pgxc_planner(Query *query, int cursorOptions, extern Plan *pgxc_grouping_planner(PlannerInfo *root, Plan *agg_plan); extern bool IsHashDistributable(Oid col_type); -extern bool is_immutable_func(Oid funcid); - extern bool IsJoinReducible(RemoteQuery *innernode, RemoteQuery *outernode, List *rtable_list, JoinPath *join_path, JoinReduceInfo *join_info); diff --git a/src/include/pgxc/postgresql_fdw.h b/src/include/pgxc/postgresql_fdw.h new file mode 100644 index 0000000..563236c --- /dev/null +++ b/src/include/pgxc/postgresql_fdw.h @@ -0,0 +1,24 @@ +/*------------------------------------------------------------------------- + * + * postgresql_fdw.h + * + * foreign-data wrapper for PostgreSQL + * + * Portions Copyright (c) 1996-2010, PostgreSQL Global Development Group + * + * IDENTIFICATION + * $PostgreSQL$ + * + *------------------------------------------------------------------------- + */ + +#ifndef POSTGRES_FDW_H +#define POSTGRES_FDW_H + +#include "postgres.h" +#include "pgxc/execRemote.h" + +bool is_immutable_func(Oid funcid); +bool is_foreign_qual(Node *node); +char *deparseSql(RemoteQueryState *scanstate); +#endif ----------------------------------------------------------------------- Summary of changes: src/backend/optimizer/plan/createplan.c | 2 +- src/backend/pgxc/plan/planner.c | 75 ++++++------------------------ src/backend/pgxc/pool/postgresql_fdw.c | 15 +----- src/include/pgxc/planner.h | 2 - src/include/pgxc/postgresql_fdw.h | 24 ++++++++++ 5 files changed, 42 insertions(+), 76 deletions(-) create mode 100644 src/include/pgxc/postgresql_fdw.h hooks/post-receive -- Postgres-XC |
From: Ashutosh B. <ash...@us...> - 2011-07-08 06:17:53
Project "Postgres-XC". The branch, master has been updated via 28f420edd65be1966f96bce37893a93f5cf10470 (commit) from e1083f299ae559b26ad1b224f8452ef4d697d23e (commit) - Log ----------------------------------------------------------------- commit 28f420edd65be1966f96bce37893a93f5cf10470 Author: Ashutosh Bapat <ash...@en...> Date: Fri Jul 8 11:39:04 2011 +0530 Due to merge from PG9.1, we have tests xc_groupby and xc_having fail because of following reasons 1. The test alter_table.sql creates two tables tab1 and tab2, but does not drop them. When tests xc_groupby and xc_having create tables with same names, the table creation and subsequently all the queries against these tables fail. Changed the names of these tables to have test names as prefixes. 2. There were planner and costing changes in PG9.1, hence many of the EXPLAIN VERBOSE outputs changed. Fixed the expected outputs for the same. diff --git a/src/test/regress/expected/xc_groupby.out b/src/test/regress/expected/xc_groupby.out index 31e9a8c..08f8da5 100644 --- a/src/test/regress/expected/xc_groupby.out +++ b/src/test/regress/expected/xc_groupby.out @@ -6,11 +6,11 @@ -- Combination 1: enable_hashagg on and distributed tables set enable_hashagg to on; -- create required tables and fill them with data -create table tab1 (val int, val2 int); -create table tab2 (val int, val2 int); -insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); -insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); -select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2; +create table xc_groupby_tab1 (val int, val2 int); +create table xc_groupby_tab2 (val int, val2 int); +insert into xc_groupby_tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); +insert into xc_groupby_tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); +select count(*), sum(val), avg(val), 
sum(val)::float8/count(*), val2 from xc_groupby_tab1 group by val2; count | sum | avg | ?column? | val2 -------+-----+--------------------+------------------+------ 3 | 6 | 2.0000000000000000 | 2 | 1 @@ -18,10 +18,10 @@ select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 g 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 (3 rows) -explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from xc_groupby_tab1 group by val2; QUERY PLAN ------------------------------------------------------------------------------------------------------------- - HashAggregate (cost=1.03..1.06 rows=1 width=8) + HashAggregate (cost=1.03..1.05 rows=1 width=8) Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 -> Materialize (cost=0.00..1.01 rows=1 width=8) Output: val, val2 @@ -30,7 +30,7 @@ explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), (6 rows) -- joins and group by -select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2; +select count(*), sum(xc_groupby_tab1.val * xc_groupby_tab2.val), avg(xc_groupby_tab1.val*xc_groupby_tab2.val), sum(xc_groupby_tab1.val*xc_groupby_tab2.val)::float8/count(*), xc_groupby_tab1.val2, xc_groupby_tab2.val2 from xc_groupby_tab1 full outer join xc_groupby_tab2 on xc_groupby_tab1.val2 = xc_groupby_tab2.val2 group by xc_groupby_tab1.val2, xc_groupby_tab2.val2; count | sum | avg | ?column? 
| val2 | val2 -------+-----+---------------------+------------------+------+------ 6 | 96 | 16.0000000000000000 | 16 | 2 | 2 @@ -39,53 +39,49 @@ select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val* 3 | | | | | 4 (4 rows) -explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2; - QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ - HashAggregate (cost=2.09..2.13 rows=1 width=16) - Output: count(*), sum((tab1.val * tab2.val)), avg((tab1.val * tab2.val)), ((sum((tab1.val * tab2.val)))::double precision / (count(*))::double precision), tab1.val2, tab2.val2 - -> Merge Full Join (cost=2.05..2.07 rows=1 width=16) - Output: tab1.val, tab1.val2, tab2.val, tab2.val2 - Merge Cond: (tab1.val2 = tab2.val2) - -> Sort (cost=1.02..1.03 rows=1 width=8) - Output: tab1.val, tab1.val2 - Sort Key: tab1.val2 +explain verbose select count(*), sum(xc_groupby_tab1.val * xc_groupby_tab2.val), avg(xc_groupby_tab1.val*xc_groupby_tab2.val), sum(xc_groupby_tab1.val*xc_groupby_tab2.val)::float8/count(*), xc_groupby_tab1.val2, xc_groupby_tab2.val2 from xc_groupby_tab1 full outer join xc_groupby_tab2 on xc_groupby_tab1.val2 = xc_groupby_tab2.val2 group by xc_groupby_tab1.val2, xc_groupby_tab2.val2; + QUERY PLAN +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=2.08..2.10 rows=1 width=16) + Output: count(*), sum((xc_groupby_tab1.val * xc_groupby_tab2.val)), avg((xc_groupby_tab1.val * xc_groupby_tab2.val)), 
((sum((xc_groupby_tab1.val * xc_groupby_tab2.val)))::double precision / (count(*))::double precision), xc_groupby_tab1.val2, xc_groupby_tab2.val2 + -> Hash Full Join (cost=1.03..2.06 rows=1 width=16) + Output: xc_groupby_tab1.val, xc_groupby_tab1.val2, xc_groupby_tab2.val, xc_groupby_tab2.val2 + Hash Cond: (xc_groupby_tab1.val2 = xc_groupby_tab2.val2) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: xc_groupby_tab1.val, xc_groupby_tab1.val2 + -> Data Node Scan (Node Count [2]) on xc_groupby_tab1 (cost=0.00..1.01 rows=1000 width=8) + Output: xc_groupby_tab1.val, xc_groupby_tab1.val2 + -> Hash (cost=1.01..1.01 rows=1 width=8) + Output: xc_groupby_tab2.val, xc_groupby_tab2.val2 -> Materialize (cost=0.00..1.01 rows=1 width=8) - Output: tab1.val, tab1.val2 - -> Data Node Scan (Node Count [2]) on tab1 (cost=0.00..1.01 rows=1000 width=8) - Output: tab1.val, tab1.val2 - -> Sort (cost=1.02..1.03 rows=1 width=8) - Output: tab2.val, tab2.val2 - Sort Key: tab2.val2 - -> Materialize (cost=0.00..1.01 rows=1 width=8) - Output: tab2.val, tab2.val2 - -> Data Node Scan (Node Count [2]) on tab2 (cost=0.00..1.01 rows=1000 width=8) - Output: tab2.val, tab2.val2 -(19 rows) + Output: xc_groupby_tab2.val, xc_groupby_tab2.val2 + -> Data Node Scan (Node Count [2]) on xc_groupby_tab2 (cost=0.00..1.01 rows=1000 width=8) + Output: xc_groupby_tab2.val, xc_groupby_tab2.val2 +(15 rows) -- aggregates over aggregates -select sum(y) from (select sum(val) y, val2%2 x from tab1 group by val2) q1 group by x; +select sum(y) from (select sum(val) y, val2%2 x from xc_groupby_tab1 group by val2) q1 group by x; sum ----- 8 17 (2 rows) -explain verbose select sum(y) from (select sum(val) y, val2%2 x from tab1 group by val2) q1 group by x; - QUERY PLAN ----------------------------------------------------------------------------------------- +explain verbose select sum(y) from (select sum(val) y, val2%2 x from xc_groupby_tab1 group by val2) q1 group by x; + QUERY PLAN 
+---------------------------------------------------------------------------------------------------------------- HashAggregate (cost=1.05..1.06 rows=1 width=12) - Output: sum((pg_catalog.sum((sum(tab1.val))))), ((tab1.val2 % 2)) + Output: sum((pg_catalog.sum((sum(xc_groupby_tab1.val))))), ((xc_groupby_tab1.val2 % 2)) -> HashAggregate (cost=1.02..1.03 rows=1 width=8) - Output: pg_catalog.sum((sum(tab1.val))), ((tab1.val2 % 2)), tab1.val2 + Output: pg_catalog.sum((sum(xc_groupby_tab1.val))), ((xc_groupby_tab1.val2 % 2)), xc_groupby_tab1.val2 -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (sum(tab1.val)), ((tab1.val2 % 2)), tab1.val2 + Output: (sum(xc_groupby_tab1.val)), ((xc_groupby_tab1.val2 % 2)), xc_groupby_tab1.val2 -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) - Output: sum(tab1.val), (tab1.val2 % 2), tab1.val2 + Output: sum(xc_groupby_tab1.val), (xc_groupby_tab1.val2 % 2), xc_groupby_tab1.val2 (8 rows) -- group by without aggregate -select val2 from tab1 group by val2; +select val2 from xc_groupby_tab1 group by val2; val2 ------ 1 @@ -93,18 +89,18 @@ select val2 from tab1 group by val2; 3 (3 rows) -explain verbose select val2 from tab1 group by val2; +explain verbose select val2 from xc_groupby_tab1 group by val2; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=4) - Output: tab1.val2 + Output: xc_groupby_tab1.val2 -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: tab1.val2 + Output: xc_groupby_tab1.val2 -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=4) - Output: tab1.val2 + Output: xc_groupby_tab1.val2 (6 rows) -select val + val2 from tab1 group by val + val2; +select val + val2 from xc_groupby_tab1 group by val + val2; ?column? 
---------- 4 @@ -115,18 +111,18 @@ select val + val2 from tab1 group by val + val2; 2 (6 rows) -explain verbose select val + val2 from tab1 group by val + val2; +explain verbose select val + val2 from xc_groupby_tab1 group by val + val2; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=8) - Output: ((tab1.val + tab1.val2)) + Output: ((xc_groupby_tab1.val + xc_groupby_tab1.val2)) -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: ((tab1.val + tab1.val2)) + Output: ((xc_groupby_tab1.val + xc_groupby_tab1.val2)) -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) - Output: (tab1.val + tab1.val2) + Output: (xc_groupby_tab1.val + xc_groupby_tab1.val2) (6 rows) -select val + val2, val, val2 from tab1 group by val, val2; +select val + val2, val, val2 from xc_groupby_tab1 group by val, val2; ?column? | val | val2 ----------+-----+------ 7 | 4 | 3 @@ -139,18 +135,18 @@ select val + val2, val, val2 from tab1 group by val, val2; 9 | 6 | 3 (8 rows) -explain verbose select val + val2, val, val2 from tab1 group by val, val2; - QUERY PLAN ----------------------------------------------------------------------------------- +explain verbose select val + val2, val, val2 from xc_groupby_tab1 group by val, val2; + QUERY PLAN +--------------------------------------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=8) - Output: ((tab1.val + tab1.val2)), tab1.val, tab1.val2 + Output: ((xc_groupby_tab1.val + xc_groupby_tab1.val2)), xc_groupby_tab1.val, xc_groupby_tab1.val2 -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: ((tab1.val + tab1.val2)), tab1.val, tab1.val2 + Output: ((xc_groupby_tab1.val + xc_groupby_tab1.val2)), xc_groupby_tab1.val, xc_groupby_tab1.val2 -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) - Output: (tab1.val + tab1.val2), tab1.val, tab1.val2 + Output: 
(xc_groupby_tab1.val + xc_groupby_tab1.val2), xc_groupby_tab1.val, xc_groupby_tab1.val2 (6 rows) -select tab1.val + tab2.val2, tab1.val, tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by tab1.val, tab2.val2; +select xc_groupby_tab1.val + xc_groupby_tab2.val2, xc_groupby_tab1.val, xc_groupby_tab2.val2 from xc_groupby_tab1, xc_groupby_tab2 where xc_groupby_tab1.val = xc_groupby_tab2.val group by xc_groupby_tab1.val, xc_groupby_tab2.val2; ?column? | val | val2 ----------+-----+------ 5 | 3 | 2 @@ -161,18 +157,18 @@ select tab1.val + tab2.val2, tab1.val, tab2.val2 from tab1, tab2 where tab1.val 7 | 3 | 4 (6 rows) -explain verbose select tab1.val + tab2.val2, tab1.val, tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by tab1.val, tab2.val2; - QUERY PLAN -------------------------------------------------------------------------------- +explain verbose select xc_groupby_tab1.val + xc_groupby_tab2.val2, xc_groupby_tab1.val, xc_groupby_tab2.val2 from xc_groupby_tab1, xc_groupby_tab2 where xc_groupby_tab1.val = xc_groupby_tab2.val group by xc_groupby_tab1.val, xc_groupby_tab2.val2; + QUERY PLAN +--------------------------------------------------------------------------------------------------------------- HashAggregate (cost=0.00..0.01 rows=1 width=0) - Output: ((tab1.val + tab2.val2)), tab1.val, tab2.val2 + Output: ((xc_groupby_tab1.val + xc_groupby_tab2.val2)), xc_groupby_tab1.val, xc_groupby_tab2.val2 -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: ((tab1.val + tab2.val2)), tab1.val, tab2.val2 + Output: ((xc_groupby_tab1.val + xc_groupby_tab2.val2)), xc_groupby_tab1.val, xc_groupby_tab2.val2 -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1 width=4) - Output: (tab1.val + tab2.val2), tab1.val, tab2.val2 + Output: (xc_groupby_tab1.val + xc_groupby_tab2.val2), xc_groupby_tab1.val, xc_groupby_tab2.val2 (6 rows) -select tab1.val + tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by tab1.val + tab2.val2; +select xc_groupby_tab1.val 
+ xc_groupby_tab2.val2 from xc_groupby_tab1, xc_groupby_tab2 where xc_groupby_tab1.val = xc_groupby_tab2.val group by xc_groupby_tab1.val + xc_groupby_tab2.val2; ?column? ---------- 6 @@ -181,19 +177,19 @@ select tab1.val + tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by t 5 (4 rows) -explain verbose select tab1.val + tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by tab1.val + tab2.val2; +explain verbose select xc_groupby_tab1.val + xc_groupby_tab2.val2 from xc_groupby_tab1, xc_groupby_tab2 where xc_groupby_tab1.val = xc_groupby_tab2.val group by xc_groupby_tab1.val + xc_groupby_tab2.val2; QUERY PLAN ------------------------------------------------------------------------------- HashAggregate (cost=0.00..0.01 rows=1 width=0) - Output: ((tab1.val + tab2.val2)) + Output: ((xc_groupby_tab1.val + xc_groupby_tab2.val2)) -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: ((tab1.val + tab2.val2)) + Output: ((xc_groupby_tab1.val + xc_groupby_tab2.val2)) -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1 width=4) - Output: (tab1.val + tab2.val2) + Output: (xc_groupby_tab1.val + xc_groupby_tab2.val2) (6 rows) -- group by with aggregates in expression -select count(*) + sum(val) + avg(val), val2 from tab1 group by val2; +select count(*) + sum(val) + avg(val), val2 from xc_groupby_tab1 group by val2; ?column? 
| val2 ---------------------+------ 11.0000000000000000 | 1 @@ -201,10 +197,10 @@ select count(*) + sum(val) + avg(val), val2 from tab1 group by val2; 17.6666666666666667 | 3 (3 rows) -explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2; +explain verbose select count(*) + sum(val) + avg(val), val2 from xc_groupby_tab1 group by val2; QUERY PLAN ---------------------------------------------------------------------------------- - HashAggregate (cost=1.02..1.05 rows=1 width=8) + HashAggregate (cost=1.02..1.04 rows=1 width=8) Output: (((count(*) + sum(val)))::numeric + avg(val)), val2 -> Materialize (cost=0.00..1.01 rows=1 width=8) Output: val, val2 @@ -213,7 +209,7 @@ explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by v (6 rows) -- group by with expressions in group by clause -select sum(val), avg(val), 2 * val2 from tab1 group by 2 * val2; +select sum(val), avg(val), 2 * val2 from xc_groupby_tab1 group by 2 * val2; sum | avg | ?column? 
-----+--------------------+---------- 11 | 3.6666666666666667 | 6 @@ -221,35 +217,35 @@ select sum(val), avg(val), 2 * val2 from tab1 group by 2 * val2; 8 | 4.0000000000000000 | 4 (3 rows) -explain verbose select sum(val), avg(val), 2 * val2 from tab1 group by 2 * val2; - QUERY PLAN ------------------------------------------------------------------------------------------------ +explain verbose select sum(val), avg(val), 2 * val2 from xc_groupby_tab1 group by 2 * val2; + QUERY PLAN +-------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.04 rows=1 width=8) - Output: pg_catalog.sum((sum(tab1.val))), pg_catalog.avg((avg(tab1.val))), ((2 * tab1.val2)) + Output: pg_catalog.sum((sum(xc_groupby_tab1.val))), pg_catalog.avg((avg(xc_groupby_tab1.val))), ((2 * xc_groupby_tab1.val2)) -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (sum(tab1.val)), (avg(tab1.val)), ((2 * tab1.val2)) + Output: (sum(xc_groupby_tab1.val)), (avg(xc_groupby_tab1.val)), ((2 * xc_groupby_tab1.val2)) -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) - Output: sum(tab1.val), avg(tab1.val), (2 * tab1.val2) + Output: sum(xc_groupby_tab1.val), avg(xc_groupby_tab1.val), (2 * xc_groupby_tab1.val2) (6 rows) -drop table tab1; -drop table tab2; +drop table xc_groupby_tab1; +drop table xc_groupby_tab2; -- some tests involving nulls, characters, float type etc. 
-create table def(a int, b varchar(25)); -insert into def VALUES (NULL, NULL); -insert into def VALUES (1, NULL); -insert into def VALUES (NULL, 'One'); -insert into def VALUES (2, 'Two'); -insert into def VALUES (2, 'Two'); -insert into def VALUES (3, 'Three'); -insert into def VALUES (4, 'Three'); -insert into def VALUES (5, 'Three'); -insert into def VALUES (6, 'Two'); -insert into def VALUES (7, NULL); -insert into def VALUES (8, 'Two'); -insert into def VALUES (9, 'Three'); -insert into def VALUES (10, 'Three'); -select a,count(a) from def group by a order by a; +create table xc_groupby_def(a int, b varchar(25)); +insert into xc_groupby_def VALUES (NULL, NULL); +insert into xc_groupby_def VALUES (1, NULL); +insert into xc_groupby_def VALUES (NULL, 'One'); +insert into xc_groupby_def VALUES (2, 'Two'); +insert into xc_groupby_def VALUES (2, 'Two'); +insert into xc_groupby_def VALUES (3, 'Three'); +insert into xc_groupby_def VALUES (4, 'Three'); +insert into xc_groupby_def VALUES (5, 'Three'); +insert into xc_groupby_def VALUES (6, 'Two'); +insert into xc_groupby_def VALUES (7, NULL); +insert into xc_groupby_def VALUES (8, 'Two'); +insert into xc_groupby_def VALUES (9, 'Three'); +insert into xc_groupby_def VALUES (10, 'Three'); +select a,count(a) from xc_groupby_def group by a order by a; a | count ----+------- 1 | 1 @@ -265,14 +261,14 @@ select a,count(a) from def group by a order by a; | 0 (11 rows) -explain verbose select a,count(a) from def group by a order by a; +explain verbose select a,count(a) from xc_groupby_def group by a order by a; QUERY PLAN ---------------------------------------------------------------------------------------------- - GroupAggregate (cost=1.02..1.05 rows=1 width=4) + GroupAggregate (cost=1.02..1.04 rows=1 width=4) Output: a, count(a) -> Sort (cost=1.02..1.03 rows=1 width=4) Output: a - Sort Key: def.a + Sort Key: xc_groupby_def.a -> Result (cost=0.00..1.01 rows=1 width=4) Output: a -> Materialize (cost=0.00..1.01 rows=1 width=4) 
@@ -281,7 +277,7 @@ explain verbose select a,count(a) from def group by a order by a; Output: a, b (11 rows) -select avg(a) from def group by a; +select avg(a) from xc_groupby_def group by a; avg ------------------------ @@ -297,7 +293,7 @@ select avg(a) from def group by a; 4.0000000000000000 (11 rows) -select avg(a) from def group by a; +select avg(a) from xc_groupby_def group by a; avg ------------------------ @@ -313,18 +309,18 @@ select avg(a) from def group by a; 4.0000000000000000 (11 rows) -explain verbose select avg(a) from def group by a; +explain verbose select avg(a) from xc_groupby_def group by a; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=4) - Output: pg_catalog.avg((avg(def.a))), def.a + Output: pg_catalog.avg((avg(xc_groupby_def.a))), xc_groupby_def.a -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (avg(def.a)), def.a + Output: (avg(xc_groupby_def.a)), xc_groupby_def.a -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=4) - Output: avg(def.a), def.a + Output: avg(xc_groupby_def.a), xc_groupby_def.a (6 rows) -select avg(a) from def group by b; +select avg(a) from xc_groupby_def group by b; avg -------------------- 4.0000000000000000 @@ -333,18 +329,18 @@ select avg(a) from def group by b; 6.2000000000000000 (4 rows) -explain verbose select avg(a) from def group by b; +explain verbose select avg(a) from xc_groupby_def group by b; QUERY PLAN ----------------------------------------------------------------------------------- - HashAggregate (cost=1.02..1.03 rows=1 width=33) - Output: pg_catalog.avg((avg(def.a))), def.b + HashAggregate (cost=1.02..1.03 rows=1 width=72) + Output: pg_catalog.avg((avg(xc_groupby_def.a))), xc_groupby_def.b -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (avg(def.a)), def.b - -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=33) - Output: avg(def.a), def.b + Output: 
(avg(xc_groupby_def.a)), xc_groupby_def.b + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=72) + Output: avg(xc_groupby_def.a), xc_groupby_def.b (6 rows) -select sum(a) from def group by b; +select sum(a) from xc_groupby_def group by b; sum ----- 8 @@ -353,18 +349,18 @@ select sum(a) from def group by b; 31 (4 rows) -explain verbose select sum(a) from def group by b; +explain verbose select sum(a) from xc_groupby_def group by b; QUERY PLAN ----------------------------------------------------------------------------------- - HashAggregate (cost=1.02..1.03 rows=1 width=33) - Output: pg_catalog.sum((sum(def.a))), def.b + HashAggregate (cost=1.02..1.03 rows=1 width=72) + Output: pg_catalog.sum((sum(xc_groupby_def.a))), xc_groupby_def.b -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (sum(def.a)), def.b - -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=33) - Output: sum(def.a), def.b + Output: (sum(xc_groupby_def.a)), xc_groupby_def.b + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=72) + Output: sum(xc_groupby_def.a), xc_groupby_def.b (6 rows) -select count(*) from def group by b; +select count(*) from xc_groupby_def group by b; count ------- 3 @@ -373,18 +369,18 @@ select count(*) from def group by b; 5 (4 rows) -explain verbose select count(*) from def group by b; +explain verbose select count(*) from xc_groupby_def group by b; QUERY PLAN ----------------------------------------------------------------------------------- - HashAggregate (cost=1.02..1.03 rows=1 width=29) - Output: pg_catalog.count(*), def.b + HashAggregate (cost=1.02..1.03 rows=1 width=68) + Output: pg_catalog.count(*), xc_groupby_def.b -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (count(*)), def.b - -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=29) - Output: count(*), def.b + Output: (count(*)), xc_groupby_def.b + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=68) + Output: 
count(*), xc_groupby_def.b (6 rows) -select count(*) from def where a is not null group by a; +select count(*) from xc_groupby_def where a is not null group by a; count ------- 1 @@ -399,18 +395,18 @@ select count(*) from def where a is not null group by a; 1 (10 rows) -explain verbose select count(*) from def where a is not null group by a; +explain verbose select count(*) from xc_groupby_def where a is not null group by a; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=4) - Output: pg_catalog.count(*), def.a + Output: pg_catalog.count(*), xc_groupby_def.a -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (count(*)), def.a + Output: (count(*)), xc_groupby_def.a -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=4) - Output: count(*), def.a + Output: count(*), xc_groupby_def.a (6 rows) -select b from def group by b; +select b from xc_groupby_def group by b; b ------- @@ -419,18 +415,18 @@ select b from def group by b; Three (4 rows) -explain verbose select b from def group by b; +explain verbose select b from xc_groupby_def group by b; QUERY PLAN ----------------------------------------------------------------------------------- - HashAggregate (cost=1.02..1.03 rows=1 width=29) - Output: def.b + HashAggregate (cost=1.02..1.03 rows=1 width=68) + Output: xc_groupby_def.b -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: def.b - -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=29) - Output: def.b + Output: xc_groupby_def.b + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=68) + Output: xc_groupby_def.b (6 rows) -select b,count(b) from def group by b; +select b,count(b) from xc_groupby_def group by b; b | count -------+------- | 0 @@ -439,156 +435,156 @@ select b,count(b) from def group by b; Three | 5 (4 rows) -explain verbose select b,count(b) from def group by b; +explain verbose select b,count(b) from 
xc_groupby_def group by b; QUERY PLAN ----------------------------------------------------------------------------------- - HashAggregate (cost=1.02..1.03 rows=1 width=29) - Output: def.b, count((count(def.b))) + HashAggregate (cost=1.02..1.03 rows=1 width=68) + Output: xc_groupby_def.b, count((count(xc_groupby_def.b))) -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: def.b, (count(def.b)) - -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=29) - Output: def.b, count(def.b) + Output: xc_groupby_def.b, (count(xc_groupby_def.b)) + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=68) + Output: xc_groupby_def.b, count(xc_groupby_def.b) (6 rows) -select count(*) from def where b is null group by b; +select count(*) from xc_groupby_def where b is null group by b; count ------- 3 (1 row) -explain verbose select count(*) from def where b is null group by b; +explain verbose select count(*) from xc_groupby_def where b is null group by b; QUERY PLAN ----------------------------------------------------------------------------------- - HashAggregate (cost=1.02..1.03 rows=1 width=29) - Output: pg_catalog.count(*), def.b + HashAggregate (cost=1.02..1.03 rows=1 width=68) + Output: pg_catalog.count(*), xc_groupby_def.b -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (count(*)), def.b - -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=29) - Output: count(*), def.b + Output: (count(*)), xc_groupby_def.b + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=68) + Output: count(*), xc_groupby_def.b (6 rows) -create table g(a int, b float, c numeric); -insert into g values(1,2.1,3.2); -insert into g values(1,2.1,3.2); -insert into g values(2,2.3,5.2); -select sum(a) from g group by a; +create table xc_groupby_g(a int, b float, c numeric); +insert into xc_groupby_g values(1,2.1,3.2); +insert into xc_groupby_g values(1,2.1,3.2); +insert into xc_groupby_g values(2,2.3,5.2); +select sum(a) from 
xc_groupby_g group by a; sum ----- 2 2 (2 rows) -explain verbose select sum(a) from g group by a; +explain verbose select sum(a) from xc_groupby_g group by a; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=4) - Output: pg_catalog.sum((sum(g.a))), g.a + Output: pg_catalog.sum((sum(xc_groupby_g.a))), xc_groupby_g.a -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (sum(g.a)), g.a + Output: (sum(xc_groupby_g.a)), xc_groupby_g.a -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=4) - Output: sum(g.a), g.a + Output: sum(xc_groupby_g.a), xc_groupby_g.a (6 rows) -select sum(b) from g group by b; +select sum(b) from xc_groupby_g group by b; sum ----- 2.3 4.2 (2 rows) -explain verbose select sum(b) from g group by b; +explain verbose select sum(b) from xc_groupby_g group by b; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=8) - Output: sum((sum(g.b))), g.b + Output: sum((sum(xc_groupby_g.b))), xc_groupby_g.b -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (sum(g.b)), g.b + Output: (sum(xc_groupby_g.b)), xc_groupby_g.b -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) - Output: sum(g.b), g.b + Output: sum(xc_groupby_g.b), xc_groupby_g.b (6 rows) -select sum(c) from g group by b; +select sum(c) from xc_groupby_g group by b; sum ----- 5.2 6.4 (2 rows) -explain verbose select sum(c) from g group by b; +explain verbose select sum(c) from xc_groupby_g group by b; QUERY PLAN ----------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=40) - Output: sum((sum(g.c))), g.b + Output: sum((sum(xc_groupby_g.c))), xc_groupby_g.b -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (sum(g.c)), g.b + Output: (sum(xc_groupby_g.c)), xc_groupby_g.b -> Data Node Scan (Node Count [2]) 
(cost=0.00..1.01 rows=1000 width=40) - Output: sum(g.c), g.b + Output: sum(xc_groupby_g.c), xc_groupby_g.b (6 rows) -select avg(a) from g group by b; +select avg(a) from xc_groupby_g group by b; avg ------------------------ 2.0000000000000000 1.00000000000000000000 (2 rows) -explain verbose select avg(a) from g group by b; +explain verbose select avg(a) from xc_groupby_g group by b; QUERY PLAN ----------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=12) - Output: pg_catalog.avg((avg(g.a))), g.b + Output: pg_catalog.avg((avg(xc_groupby_g.a))), xc_groupby_g.b -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (avg(g.a)), g.b + Output: (avg(xc_groupby_g.a)), xc_groupby_g.b -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=12) - Output: avg(g.a), g.b + Output: avg(xc_groupby_g.a), xc_groupby_g.b (6 rows) -select avg(b) from g group by c; +select avg(b) from xc_groupby_g group by c; avg ----- 2.3 2.1 (2 rows) -explain verbose select avg(b) from g group by c; +explain verbose select avg(b) from xc_groupby_g group by c; QUERY PLAN ----------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=40) - Output: pg_catalog.avg((avg(g.b))), g.c + Output: pg_catalog.avg((avg(xc_groupby_g.b))), xc_groupby_g.c -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (avg(g.b)), g.c + Output: (avg(xc_groupby_g.b)), xc_groupby_g.c -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=40) - Output: avg(g.b), g.c + Output: avg(xc_groupby_g.b), xc_groupby_g.c (6 rows) -select avg(c) from g group by c; +select avg(c) from xc_groupby_g group by c; avg -------------------- 5.2000000000000000 3.2000000000000000 (2 rows) -explain verbose select avg(c) from g group by c; +explain verbose select avg(c) from xc_groupby_g group by c; QUERY PLAN 
----------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=32) - Output: pg_catalog.avg((avg(g.c))), g.c + Output: pg_catalog.avg((avg(xc_groupby_g.c))), xc_groupby_g.c -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (avg(g.c)), g.c + Output: (avg(xc_groupby_g.c)), xc_groupby_g.c -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=32) - Output: avg(g.c), g.c + Output: avg(xc_groupby_g.c), xc_groupby_g.c (6 rows) -drop table def; -drop table g; +drop table xc_groupby_def; +drop table xc_groupby_g; -- Combination 2, enable_hashagg on and replicated tables. -- repeat the same tests for replicated tables -- create required tables and fill them with data -create table tab1 (val int, val2 int) distribute by replication; -create table tab2 (val int, val2 int) distribute by replication; -insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); -insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); -select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2; +create table xc_groupby_tab1 (val int, val2 int) distribute by replication; +create table xc_groupby_tab2 (val int, val2 int) distribute by replication; +insert into xc_groupby_tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); +insert into xc_groupby_tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from xc_groupby_tab1 group by val2; count | sum | avg | ?column? 
| val2 -------+-----+--------------------+------------------+------ 3 | 6 | 2.0000000000000000 | 2 | 1 @@ -596,10 +592,10 @@ select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 g 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 (3 rows) -explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from xc_groupby_tab1 group by val2; QUERY PLAN ------------------------------------------------------------------------------------------------------------- - HashAggregate (cost=1.03..1.06 rows=1 width=8) + HashAggregate (cost=1.03..1.05 rows=1 width=8) Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 -> Materialize (cost=0.00..1.01 rows=1 width=8) Output: val, val2 @@ -608,7 +604,7 @@ explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), (6 rows) -- joins and group by -select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2; +select count(*), sum(xc_groupby_tab1.val * xc_groupby_tab2.val), avg(xc_groupby_tab1.val*xc_groupby_tab2.val), sum(xc_groupby_tab1.val*xc_groupby_tab2.val)::float8/count(*), xc_groupby_tab1.val2, xc_groupby_tab2.val2 from xc_groupby_tab1 full outer join xc_groupby_tab2 on xc_groupby_tab1.val2 = xc_groupby_tab2.val2 group by xc_groupby_tab1.val2, xc_groupby_tab2.val2; count | sum | avg | ?column? 
| val2 | val2 -------+-----+---------------------+------------------+------+------ 6 | 96 | 16.0000000000000000 | 16 | 2 | 2 @@ -617,53 +613,49 @@ select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val* 3 | | | | | 4 (4 rows) -explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2; - QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ - HashAggregate (cost=2.09..2.13 rows=1 width=16) - Output: count(*), sum((tab1.val * tab2.val)), avg((tab1.val * tab2.val)), ((sum((tab1.val * tab2.val)))::double precision / (count(*))::double precision), tab1.val2, tab2.val2 - -> Merge Full Join (cost=2.05..2.07 rows=1 width=16) - Output: tab1.val, tab1.val2, tab2.val, tab2.val2 - Merge Cond: (tab1.val2 = tab2.val2) - -> Sort (cost=1.02..1.03 rows=1 width=8) - Output: tab1.val, tab1.val2 - Sort Key: tab1.val2 +explain verbose select count(*), sum(xc_groupby_tab1.val * xc_groupby_tab2.val), avg(xc_groupby_tab1.val*xc_groupby_tab2.val), sum(xc_groupby_tab1.val*xc_groupby_tab2.val)::float8/count(*), xc_groupby_tab1.val2, xc_groupby_tab2.val2 from xc_groupby_tab1 full outer join xc_groupby_tab2 on xc_groupby_tab1.val2 = xc_groupby_tab2.val2 group by xc_groupby_tab1.val2, xc_groupby_tab2.val2; + QUERY PLAN +--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=2.08..2.10 rows=1 width=16) + Output: count(*), sum((xc_groupby_tab1.val * xc_groupby_tab2.val)), avg((xc_groupby_tab1.val * xc_groupby_tab2.val)), 
((sum((xc_groupby_tab1.val * xc_groupby_tab2.val)))::double precision / (count(*))::double precision), xc_groupby_tab1.val2, xc_groupby_tab2.val2 + -> Hash Full Join (cost=1.03..2.06 rows=1 width=16) + Output: xc_groupby_tab1.val, xc_groupby_tab1.val2, xc_groupby_tab2.val, xc_groupby_tab2.val2 + Hash Cond: (xc_groupby_tab1.val2 = xc_groupby_tab2.val2) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: xc_groupby_tab1.val, xc_groupby_tab1.val2 + -> Data Node Scan (Node Count [1]) on xc_groupby_tab1 (cost=0.00..1.01 rows=1000 width=8) + Output: xc_groupby_tab1.val, xc_groupby_tab1.val2 + -> Hash (cost=1.01..1.01 rows=1 width=8) + Output: xc_groupby_tab2.val, xc_groupby_tab2.val2 -> Materialize (cost=0.00..1.01 rows=1 width=8) - Output: tab1.val, tab1.val2 - -> Data Node Scan (Node Count [1]) on tab1 (cost=0.00..1.01 rows=1000 width=8) - Output: tab1.val, tab1.val2 - -> Sort (cost=1.02..1.03 rows=1 width=8) - Output: tab2.val, tab2.val2 - Sort Key: tab2.val2 - -> Materialize (cost=0.00..1.01 rows=1 width=8) - Output: tab2.val, tab2.val2 - -> Data Node Scan (Node Count [1]) on tab2 (cost=0.00..1.01 rows=1000 width=8) - Output: tab2.val, tab2.val2 -(19 rows) + Output: xc_groupby_tab2.val, xc_groupby_tab2.val2 + -> Data Node Scan (Node Count [1]) on xc_groupby_tab2 (cost=0.00..1.01 rows=1000 width=8) + Output: xc_groupby_tab2.val, xc_groupby_tab2.val2 +(15 rows) -- aggregates over aggregates -select sum(y) from (select sum(val) y, val2%2 x from tab1 group by val2) q1 group by x; +select sum(y) from (select sum(val) y, val2%2 x from xc_groupby_tab1 group by val2) q1 group by x; sum ----- 8 17 (2 rows) -explain verbose select sum(y) from (select sum(val) y, val2%2 x from tab1 group by val2) q1 group by x; - QUERY PLAN ----------------------------------------------------------------------------------------- +explain verbose select sum(y) from (select sum(val) y, val2%2 x from xc_groupby_tab1 group by val2) q1 group by x; + QUERY PLAN 
+---------------------------------------------------------------------------------------------------------------- HashAggregate (cost=1.05..1.06 rows=1 width=12) - Output: sum((pg_catalog.sum((sum(tab1.val))))), ((tab1.val2 % 2)) + Output: sum((pg_catalog.sum((sum(xc_groupby_tab1.val))))), ((xc_groupby_tab1.val2 % 2)) -> HashAggregate (cost=1.02..1.03 rows=1 width=8) - Output: pg_catalog.sum((sum(tab1.val))), ((tab1.val2 % 2)), tab1.val2 + Output: pg_catalog.sum((sum(xc_groupby_tab1.val))), ((xc_groupby_tab1.val2 % 2)), xc_groupby_tab1.val2 -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (sum(tab1.val)), ((tab1.val2 % 2)), tab1.val2 + Output: (sum(xc_groupby_tab1.val)), ((xc_groupby_tab1.val2 % 2)), xc_groupby_tab1.val2 -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) - Output: sum(tab1.val), (tab1.val2 % 2), tab1.val2 + Output: sum(xc_groupby_tab1.val), (xc_groupby_tab1.val2 % 2), xc_groupby_tab1.val2 (8 rows) -- group by without aggregate -select val2 from tab1 group by val2; +select val2 from xc_groupby_tab1 group by val2; val2 ------ 1 @@ -671,18 +663,18 @@ select val2 from tab1 group by val2; 3 (3 rows) -explain verbose select val2 from tab1 group by val2; +explain verbose select val2 from xc_groupby_tab1 group by val2; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=4) - Output: tab1.val2 + Output: xc_groupby_tab1.val2 -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: tab1.val2 + Output: xc_groupby_tab1.val2 -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=4) - Output: tab1.val2 + Output: xc_groupby_tab1.val2 (6 rows) -select val + val2 from tab1 group by val + val2; +select val + val2 from xc_groupby_tab1 group by val + val2; ?column? 
---------- 4 @@ -693,18 +685,18 @@ select val + val2 from tab1 group by val + val2; 2 (6 rows) -explain verbose select val + val2 from tab1 group by val + val2; +explain verbose select val + val2 from xc_groupby_tab1 group by val + val2; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=8) - Output: ((tab1.val + tab1.val2)) + Output: ((xc_groupby_tab1.val + xc_groupby_tab1.val2)) -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: ((tab1.val + tab1.val2)) + Output: ((xc_groupby_tab1.val + xc_groupby_tab1.val2)) -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) - Output: (tab1.val + tab1.val2) + Output: (xc_groupby_tab1.val + xc_groupby_tab1.val2) (6 rows) -select val + val2, val, val2 from tab1 group by val, val2; +select val + val2, val, val2 from xc_groupby_tab1 group by val, val2; ?column? | val | val2 ----------+-----+------ 7 | 4 | 3 @@ -717,18 +709,18 @@ select val + val2, val, val2 from tab1 group by val, val2; 9 | 6 | 3 (8 rows) -explain verbose select val + val2, val, val2 from tab1 group by val, val2; - QUERY PLAN ----------------------------------------------------------------------------------- +explain verbose select val + val2, val, val2 from xc_groupby_tab1 group by val, val2; + QUERY PLAN +--------------------------------------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=8) - Output: ((tab1.val + tab1.val2)), tab1.val, tab1.val2 + Output: ((xc_groupby_tab1.val + xc_groupby_tab1.val2)), xc_groupby_tab1.val, xc_groupby_tab1.val2 -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: ((tab1.val + tab1.val2)), tab1.val, tab1.val2 + Output: ((xc_groupby_tab1.val + xc_groupby_tab1.val2)), xc_groupby_tab1.val, xc_groupby_tab1.val2 -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) - Output: (tab1.val + tab1.val2), tab1.val, tab1.val2 + Output: 
(xc_groupby_tab1.val + xc_groupby_tab1.val2), xc_groupby_tab1.val, xc_groupby_tab1.val2 (6 rows) -select tab1.val + tab2.val2, tab1.val, tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by tab1.val, tab2.val2; +select xc_groupby_tab1.val + xc_groupby_tab2.val2, xc_groupby_tab1.val, xc_groupby_tab2.val2 from xc_groupby_tab1, xc_groupby_tab2 where xc_groupby_tab1.val = xc_groupby_tab2.val group by xc_groupby_tab1.val, xc_groupby_tab2.val2; ?column? | val | val2 ----------+-----+------ 5 | 3 | 2 @@ -739,18 +731,18 @@ select tab1.val + tab2.val2, tab1.val, tab2.val2 from tab1, tab2 where tab1.val 7 | 3 | 4 (6 rows) -explain verbose select tab1.val + tab2.val2, tab1.val, tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by tab1.val, tab2.val2; - QUERY PLAN -------------------------------------------------------------------------------- +explain verbose select xc_groupby_tab1.val + xc_groupby_tab2.val2, xc_groupby_tab1.val, xc_groupby_tab2.val2 from xc_groupby_tab1, xc_groupby_tab2 where xc_groupby_tab1.val = xc_groupby_tab2.val group by xc_groupby_tab1.val, xc_groupby_tab2.val2; + QUERY PLAN +--------------------------------------------------------------------------------------------------------------- HashAggregate (cost=0.00..0.01 rows=1 width=0) - Output: ((tab1.val + tab2.val2)), tab1.val, tab2.val2 + Output: ((xc_groupby_tab1.val + xc_groupby_tab2.val2)), xc_groupby_tab1.val, xc_groupby_tab2.val2 -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: ((tab1.val + tab2.val2)), tab1.val, tab2.val2 + Output: ((xc_groupby_tab1.val + xc_groupby_tab2.val2)), xc_groupby_tab1.val, xc_groupby_tab2.val2 -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1 width=4) - Output: (tab1.val + tab2.val2), tab1.val, tab2.val2 + Output: (xc_groupby_tab1.val + xc_groupby_tab2.val2), xc_groupby_tab1.val, xc_groupby_tab2.val2 (6 rows) -select tab1.val + tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by tab1.val + tab2.val2; +select xc_groupby_tab1.val 
+ xc_groupby_tab2.val2 from xc_groupby_tab1, xc_groupby_tab2 where xc_groupby_tab1.val = xc_groupby_tab2.val group by xc_groupby_tab1.val + xc_groupby_tab2.val2; ?column? ---------- 6 @@ -759,19 +751,19 @@ select tab1.val + tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by t 5 (4 rows) -explain verbose select tab1.val + tab2.val2 from tab1, tab2 where tab1.val = tab2.val group by tab1.val + tab2.val2; +explain verbose select xc_groupby_tab1.val + xc_groupby_tab2.val2 from xc_groupby_tab1, xc_groupby_tab2 where xc_groupby_tab1.val = xc_groupby_tab2.val group by xc_groupby_tab1.val + xc_groupby_tab2.val2; QUERY PLAN ------------------------------------------------------------------------------- HashAggregate (cost=0.00..0.01 rows=1 width=0) - Output: ((tab1.val + tab2.val2)) + Output: ((xc_groupby_tab1.val + xc_groupby_tab2.val2)) -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: ((tab1.val + tab2.val2)) + Output: ((xc_groupby_tab1.val + xc_groupby_tab2.val2)) -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1 width=4) - Output: (tab1.val + tab2.val2) + Output: (xc_groupby_tab1.val + xc_groupby_tab2.val2) (6 rows) -- group by with aggregates in expression -select count(*) + sum(val) + avg(val), val2 from tab1 group by val2; +select count(*) + sum(val) + avg(val), val2 from xc_groupby_tab1 group by val2; ?column? 
| val2 ---------------------+------ 11.0000000000000000 | 1 @@ -779,10 +771,10 @@ select count(*) + sum(val) + avg(val), val2 from tab1 group by val2; 17.6666666666666667 | 3 (3 rows) -explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2; +explain verbose select count(*) + sum(val) + avg(val), val2 from xc_groupby_tab1 group by val2; QUERY PLAN ---------------------------------------------------------------------------------- - HashAggregate (cost=1.02..1.05 rows=1 width=8) + HashAggregate (cost=1.02..1.04 rows=1 width=8) Output: (((count(*) + sum(val)))::numeric + avg(val)), val2 -> Materialize (cost=0.00..1.01 rows=1 width=8) Output: val, val2 @@ -791,7 +783,7 @@ explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by v (6 rows) -- group by with expressions in group by clause -select sum(val), avg(val), 2 * val2 from tab1 group by 2 * val2; +select sum(val), avg(val), 2 * val2 from xc_groupby_tab1 group by 2 * val2; sum | avg | ?column? 
-----+--------------------+---------- 11 | 3.6666666666666667 | 6 @@ -799,35 +791,35 @@ select sum(val), avg(val), 2 * val2 from tab1 group by 2 * val2; 8 | 4.0000000000000000 | 4 (3 rows) -explain verbose select sum(val), avg(val), 2 * val2 from tab1 group by 2 * val2; - QUERY PLAN ------------------------------------------------------------------------------------------------ +explain verbose select sum(val), avg(val), 2 * val2 from xc_groupby_tab1 group by 2 * val2; + QUERY PLAN +-------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.04 rows=1 width=8) - Output: pg_catalog.sum((sum(tab1.val))), pg_catalog.avg((avg(tab1.val))), ((2 * tab1.val2)) + Output: pg_catalog.sum((sum(xc_groupby_tab1.val))), pg_catalog.avg((avg(xc_groupby_tab1.val))), ((2 * xc_groupby_tab1.val2)) -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (sum(tab1.val)), (avg(tab1.val)), ((2 * tab1.val2)) + Output: (sum(xc_groupby_tab1.val)), (avg(xc_groupby_tab1.val)), ((2 * xc_groupby_tab1.val2)) -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) - Output: sum(tab1.val), avg(tab1.val), (2 * tab1.val2) + Output: sum(xc_groupby_tab1.val), avg(xc_groupby_tab1.val), (2 * xc_groupby_tab1.val2) (6 rows) -drop table tab1; -drop table tab2; +drop table xc_groupby_tab1; +drop table xc_groupby_tab2; -- some tests involving nulls, characters, float type etc. 
-create table def(a int, b varchar(25)) distribute by replication; -insert into def VALUES (NULL, NULL); -insert into def VALUES (1, NULL); -insert into def VALUES (NULL, 'One'); -insert into def VALUES (2, 'Two'); -insert into def VALUES (2, 'Two'); -insert into def VALUES (3, 'Three'); -insert into def VALUES (4, 'Three'); -insert into def VALUES (5, 'Three'); -insert into def VALUES (6, 'Two'); -insert into def VALUES (7, NULL); -insert into def VALUES (8, 'Two'); -insert into def VALUES (9, 'Three'); -insert into def VALUES (10, 'Three'); -select a,count(a) from def group by a order by a; +create table xc_groupby_def(a int, b varchar(25)) distribute by replication; +insert into xc_groupby_def VALUES (NULL, NULL); +insert into xc_groupby_def VALUES (1, NULL); +insert into xc_groupby_def VALUES (NULL, 'One'); +insert into xc_groupby_def VALUES (2, 'Two'); +insert into xc_groupby_def VALUES (2, 'Two'); +insert into xc_groupby_def VALUES (3, 'Three'); +insert into xc_groupby_def VALUES (4, 'Three'); +insert into xc_groupby_def VALUES (5, 'Three'); +insert into xc_groupby_def VALUES (6, 'Two'); +insert into xc_groupby_def VALUES (7, NULL); +insert into xc_groupby_def VALUES (8, 'Two'); +insert into xc_groupby_def VALUES (9, 'Three'); +insert into xc_groupby_def VALUES (10, 'Three'); +select a,count(a) from xc_groupby_def group by a order by a; a | count ----+------- 1 | 1 @@ -843,14 +835,14 @@ select a,count(a) from def group by a order by a; | 0 (11 rows) -explain verbose select a,count(a) from def group by a order by a; +explain verbose select a,count(a) from xc_groupby_def group by a order by a; QUERY PLAN ---------------------------------------------------------------------------------------------- - GroupAggregate (cost=1.02..1.05 rows=1 width=4) + GroupAggregate (cost=1.02..1.04 rows=1 width=4) Output: a, count(a) -> Sort (cost=1.02..1.03 rows=1 width=4) Output: a - Sort Key: def.a + Sort Key: xc_groupby_def.a -> Result (cost=0.00..1.01 rows=1 width=4) Output: 
a -> Materialize (cost=0.00..1.01 rows=1 width=4) @@ -859,7 +851,7 @@ explain verbose select a,count(a) from def group by a order by a; Output: a, b (11 rows) -select avg(a) from def group by a; +select avg(a) from xc_groupby_def group by a; avg ------------------------ @@ -875,18 +867,18 @@ select avg(a) from def group by a; 4.0000000000000000 (11 rows) -explain verbose select avg(a) from def group by a; +explain verbose select avg(a) from xc_groupby_def group by a; QUERY PLAN ---------------------------------------------------------------------------------- HashAggregate (cost=1.02..1.03 rows=1 width=4) - Output: pg_catalog.avg((avg(def.a))), def.a + Output: pg_catalog.avg((avg(xc_groupby_def.a))), xc_groupby_def.a -> Materialize (cost=0.00..0.00 rows=0 width=0) - Output: (avg(def.a)), def.a + Output: (avg(xc_groupby_def.a)), xc_groupby_def.a -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=4) - Output: avg(def.a), def.a + Output: avg(xc_groupby_def.a), xc_groupby_def.a (6 rows) -select avg(a) from def group by a; +select avg(a) from xc_groupby_def group by a; avg ------------------------ @@ -902,18 +894,18 @@ select avg(a) from def group by a; 4.0000000000000000 (11 rows) -explain verbose select avg(a) from def group by a; +explain verbose select avg(a) from xc_groupby_def group by a; ... [truncated message content] |
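A note for readers of the EXPLAIN output above: nested aggregate calls such as `pg_catalog.avg((avg(xc_groupby_def.a)))` reflect two-phase aggregation. Each data node ships a per-group transition state (for `avg`, effectively a sum and a count, not a finished average), and the coordinator merges those states before finalizing. The sketch below illustrates that combine step in Python; the function names `partial_avg` and `combine_avg` are illustrative only and do not correspond to Postgres-XC internals.

```python
from collections import defaultdict

# Phase 1 (per data node): build a partial transition state per group.
# For avg, the state is [sum, count] rather than the final value.
def partial_avg(rows, key, col):
    state = defaultdict(lambda: [0, 0])  # group value -> [sum, count]
    for row in rows:
        s = state[row[key]]
        s[0] += row[col]
        s[1] += 1
    return dict(state)

# Phase 2 (coordinator): merge per-node states, then finalize each group.
def combine_avg(states):
    merged = defaultdict(lambda: [0, 0])
    for state in states:
        for group, (s, c) in state.items():
            merged[group][0] += s
            merged[group][1] += c
    return {g: s / c for g, (s, c) in merged.items()}

# The xc_groupby_tab1 rows from the tests, split across two notional nodes:
node1 = [{"val": 1, "val2": 1}, {"val": 2, "val2": 1},
         {"val": 3, "val2": 1}, {"val": 2, "val2": 2}]
node2 = [{"val": 6, "val2": 2}, {"val": 4, "val2": 3},
         {"val": 1, "val2": 3}, {"val": 6, "val2": 3}]

result = combine_avg([partial_avg(n, "val2", "val") for n in (node1, node2)])
# result matches the archived output: group 1 -> 2.0, group 2 -> 4.0,
# group 3 -> 3.666..., i.e. 11/3
```

This is also why the coordinator cannot simply average the per-node averages: a node holding one row of a group would weigh as much as a node holding ten, so the transition state (sum and count) must travel instead.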
From: Ashutosh B. <ash...@us...> - 2011-07-06 07:17:23
Project "Postgres-XC". The branch, master has been updated via e1083f299ae559b26ad1b224f8452ef4d697d23e (commit) from 0bbfc1e6338b5d98d6cb83fa75f2c38f527d4d4b (commit) - Log ----------------------------------------------------------------- commit e1083f299ae559b26ad1b224f8452ef4d697d23e Author: Ashutosh Bapat <ash...@en...> Date: Wed Jul 6 12:43:58 2011 +0530 A query with a HAVING clause, is routed through standard planner for planning. XC specific tests are added for testing HAVING clause support in XC. diff --git a/src/backend/pgxc/plan/planner.c b/src/backend/pgxc/plan/planner.c index 56eabba..1cd33d1 100644 --- a/src/backend/pgxc/plan/planner.c +++ b/src/backend/pgxc/plan/planner.c @@ -2832,7 +2832,11 @@ pgxc_planner(Query *query, int cursorOptions, ParamListInfo boundParams) * distribution the table has. */ if (query->commandType == CMD_SELECT - && (query->hasAggs || query->groupClause || query->hasWindowFuncs || query->hasRecursive)) + && (query->hasAggs || + query->groupClause || + query->havingQual || + query->hasWindowFuncs || + query->hasRecursive)) { result = standard_planner(query, cursorOptions, boundParams); return result; diff --git a/src/test/regress/expected/xc_having.out b/src/test/regress/expected/xc_having.out new file mode 100644 index 0000000..b2817b5 --- /dev/null +++ b/src/test/regress/expected/xc_having.out @@ -0,0 +1,746 @@ +-- this file contains tests for HAVING clause with combinations of following +-- 1. enable_hashagg = on/off (to force the grouping by sorting) +-- 2. distributed or replicated tables across the datanodes +-- If a testcase is added to any of the combinations, please check if it's +-- applicable in other combinations as well. 
+-- Combination 1: enable_hashagg on and distributed tables +set enable_hashagg to on; +-- create required tables and fill them with data +create table tab1 (val int, val2 int); +create table tab2 (val int, val2 int); +insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); +insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); +-- having clause not containing any aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; + count | sum | avg | ?column? | val2 +-------+-----+--------------------+------------------+------ + 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 +(1 row) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.06 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + -> Materialize (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(6 rows) + +-- having clause containing aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; + count | sum | avg | ?column? 
| val2 +-------+-----+--------------------+----------+------ + 2 | 8 | 4.0000000000000000 | 4 | 2 +(1 row) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.07 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: (avg(tab1.val) > 3.75) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; + count | sum | avg | ?column? | val2 +-------+-----+--------------------+------------------+------ + 2 | 8 | 4.0000000000000000 | 4 | 2 + 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 +(2 rows) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.07 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: ((avg(tab1.val) > 3.75) OR (tab1.val2 > 2)) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; + count | sum | avg | ?column? 
| val2 +-------+-----+-----+----------+------ +(0 rows) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.07 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: (avg(tab1.val) > 3.75) + -> Materialize (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +-- joins and group by and having +select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; + count | sum | avg | ?column? 
| val2 | val2 +-------+-----+---------------------+----------+------+------ + 6 | 96 | 16.0000000000000000 | 16 | 2 | 2 +(1 row) + +explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=2.06..2.10 rows=1 width=16) + Output: count(*), sum((tab1.val * tab2.val)), avg((tab1.val * tab2.val)), ((sum((tab1.val * tab2.val)))::double precision / (count(*))::double precision), tab1.val2, tab2.val2 + -> Nested Loop (cost=0.00..2.05 rows=1 width=16) + Output: tab1.val, tab1.val2, tab2.val, tab2.val2 + Join Filter: ((tab1.val2 = tab2.val2) AND ((tab1.val2 + tab2.val2) > 2)) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: tab1.val, tab1.val2 + -> Data Node Scan (Node Count [2]) on tab1 (cost=0.00..1.01 rows=1000 width=8) + Output: tab1.val, tab1.val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: tab2.val, tab2.val2 + -> Data Node Scan (Node Count [2]) on tab2 (cost=0.00..1.01 rows=1000 width=8) + Output: tab2.val, tab2.val2 +(13 rows) + +-- group by and having, without aggregate in the target list +select val2 from tab1 group by val2 having sum(val) > 8; + val2 +------ + 3 +(1 row) + +explain verbose select val2 from tab1 group by val2 having sum(val) > 8; + QUERY PLAN +---------------------------------------------------------------------------------- + HashAggregate (cost=1.02..1.03 rows=1 width=8) + Output: val2 + Filter: (sum(tab1.val) > 8) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +select val + val2 
from tab1 group by val + val2 having sum(val) > 5; + ?column? +---------- + 4 + 8 + 9 +(3 rows) + +explain verbose select val + val2 from tab1 group by val + val2 having sum(val) > 5; + QUERY PLAN +---------------------------------------------------------------------------------------- + HashAggregate (cost=1.02..1.04 rows=1 width=8) + Output: ((val + val2)) + Filter: (sum(tab1.val) > 5) + -> Result (cost=0.00..1.02 rows=1 width=8) + Output: val, val2, (val + val2) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(9 rows) + +-- group by with aggregates in expression +select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; + ?column? | val2 +---------------------+------ + 17.6666666666666667 | 3 +(1 row) + +explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; + QUERY PLAN +---------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.06 rows=1 width=8) + Output: (((count(*) + sum(val)))::numeric + avg(val)), val2 + Filter: (min(tab1.val) < tab1.val2) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +drop table tab1; +drop table tab2; +-- Combination 2, enable_hashagg on and replicated tables. 
+-- repeat the same tests for replicated tables +-- create required tables and fill them with data +create table tab1 (val int, val2 int) distribute by replication; +create table tab2 (val int, val2 int) distribute by replication; +insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); +insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); +-- having clause not containing any aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; + count | sum | avg | ?column? | val2 +-------+-----+--------------------+------------------+------ + 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 +(1 row) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.06 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + -> Materialize (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(6 rows) + +-- having clause containing aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; + count | sum | avg | ?column? 
| val2 +-------+-----+--------------------+----------+------ + 2 | 8 | 4.0000000000000000 | 4 | 2 +(1 row) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.07 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: (avg(tab1.val) > 3.75) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; + count | sum | avg | ?column? | val2 +-------+-----+--------------------+------------------+------ + 2 | 8 | 4.0000000000000000 | 4 | 2 + 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 +(2 rows) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.07 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: ((avg(tab1.val) > 3.75) OR (tab1.val2 > 2)) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; + count | sum | avg | ?column? 
| val2 +-------+-----+-----+----------+------ +(0 rows) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.07 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: (avg(tab1.val) > 3.75) + -> Materialize (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +-- joins and group by and having +select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; + count | sum | avg | ?column? 
| val2 | val2 +-------+-----+---------------------+----------+------+------ + 6 | 96 | 16.0000000000000000 | 16 | 2 | 2 +(1 row) + +explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + HashAggregate (cost=2.06..2.10 rows=1 width=16) + Output: count(*), sum((tab1.val * tab2.val)), avg((tab1.val * tab2.val)), ((sum((tab1.val * tab2.val)))::double precision / (count(*))::double precision), tab1.val2, tab2.val2 + -> Nested Loop (cost=0.00..2.05 rows=1 width=16) + Output: tab1.val, tab1.val2, tab2.val, tab2.val2 + Join Filter: ((tab1.val2 = tab2.val2) AND ((tab1.val2 + tab2.val2) > 2)) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: tab1.val, tab1.val2 + -> Data Node Scan (Node Count [1]) on tab1 (cost=0.00..1.01 rows=1000 width=8) + Output: tab1.val, tab1.val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: tab2.val, tab2.val2 + -> Data Node Scan (Node Count [1]) on tab2 (cost=0.00..1.01 rows=1000 width=8) + Output: tab2.val, tab2.val2 +(13 rows) + +-- group by and having, without aggregate in the target list +select val2 from tab1 group by val2 having sum(val) > 8; + val2 +------ + 3 +(1 row) + +explain verbose select val2 from tab1 group by val2 having sum(val) > 8; + QUERY PLAN +---------------------------------------------------------------------------------- + HashAggregate (cost=1.02..1.03 rows=1 width=8) + Output: val2 + Filter: (sum(tab1.val) > 8) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +select val + val2 
from tab1 group by val + val2 having sum(val) > 5; + ?column? +---------- + 4 + 8 + 9 +(3 rows) + +explain verbose select val + val2 from tab1 group by val + val2 having sum(val) > 5; + QUERY PLAN +---------------------------------------------------------------------------------------- + HashAggregate (cost=1.02..1.04 rows=1 width=8) + Output: ((val + val2)) + Filter: (sum(tab1.val) > 5) + -> Result (cost=0.00..1.02 rows=1 width=8) + Output: val, val2, (val + val2) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(9 rows) + +-- group by with aggregates in expression +select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; + ?column? | val2 +---------------------+------ + 17.6666666666666667 | 3 +(1 row) + +explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; + QUERY PLAN +---------------------------------------------------------------------------------- + HashAggregate (cost=1.03..1.06 rows=1 width=8) + Output: (((count(*) + sum(val)))::numeric + avg(val)), val2 + Filter: (min(tab1.val) < tab1.val2) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(7 rows) + +drop table tab1; +drop table tab2; +-- Combination 3 enable_hashagg off and distributed tables +set enable_hashagg to off; +-- create required tables and fill them with data +create table tab1 (val int, val2 int); +create table tab2 (val int, val2 int); +insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); +insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); +-- having clause not containing any aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; 
+ count | sum | avg | ?column? | val2 +-------+-----+--------------------+------------------+------ + 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 +(1 row) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.03..1.08 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + -> Sort (cost=1.03..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(11 rows) + +-- having clause containing aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; + count | sum | avg | ?column? 
| val2 +-------+-----+--------------------+----------+------ + 2 | 8 | 4.0000000000000000 | 4 | 2 +(1 row) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.02..1.08 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: (avg(tab1.val) > 3.75) + -> Sort (cost=1.02..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; + count | sum | avg | ?column? 
| val2 +-------+-----+--------------------+------------------+------ + 2 | 8 | 4.0000000000000000 | 4 | 2 + 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 +(2 rows) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.02..1.08 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: ((avg(tab1.val) > 3.75) OR (tab1.val2 > 2)) + -> Sort (cost=1.02..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; + count | sum | avg | ?column? 
| val2 +-------+-----+-----+----------+------ +(0 rows) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.03..1.08 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: (avg(tab1.val) > 3.75) + -> Sort (cost=1.03..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +-- joins and group by and having +select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; + count | sum | avg | ?column? 
| val2 | val2 +-------+-----+---------------------+----------+------+------ + 6 | 96 | 16.0000000000000000 | 16 | 2 | 2 +(1 row) + +explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=2.06..2.12 rows=1 width=16) + Output: count(*), sum((tab1.val * tab2.val)), avg((tab1.val * tab2.val)), ((sum((tab1.val * tab2.val)))::double precision / (count(*))::double precision), tab1.val2, tab2.val2 + -> Sort (cost=2.06..2.06 rows=1 width=16) + Output: tab1.val, tab2.val, tab1.val2, tab2.val2 + Sort Key: tab1.val2, tab2.val2 + -> Nested Loop (cost=0.00..2.05 rows=1 width=16) + Output: tab1.val, tab2.val, tab1.val2, tab2.val2 + Join Filter: ((tab1.val2 = tab2.val2) AND ((tab1.val2 + tab2.val2) > 2)) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: tab1.val, tab1.val2 + -> Data Node Scan (Node Count [2]) on tab1 (cost=0.00..1.01 rows=1000 width=8) + Output: tab1.val, tab1.val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: tab2.val, tab2.val2 + -> Data Node Scan (Node Count [2]) on tab2 (cost=0.00..1.01 rows=1000 width=8) + Output: tab2.val, tab2.val2 +(16 rows) + +-- group by and having, without aggregate in the target list +select val2 from tab1 group by val2 having sum(val) > 8; + val2 +------ + 3 +(1 row) + +explain verbose select val2 from tab1 group by val2 having sum(val) > 8; + QUERY PLAN +---------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.02..1.05 rows=1 width=8) + Output: val2 + Filter: (sum(tab1.val) > 8) + -> Sort (cost=1.02..1.03 rows=1 width=8) + 
Output: val2, val + Sort Key: tab1.val2 + -> Result (cost=0.00..1.01 rows=1 width=8) + Output: val2, val + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +select val + val2 from tab1 group by val + val2 having sum(val) > 5; + ?column? +---------- + 4 + 8 + 9 +(3 rows) + +explain verbose select val + val2 from tab1 group by val + val2 having sum(val) > 5; + QUERY PLAN +---------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.03..1.05 rows=1 width=8) + Output: ((val + val2)) + Filter: (sum(tab1.val) > 5) + -> Sort (cost=1.03..1.03 rows=1 width=8) + Output: val, val2, ((val + val2)) + Sort Key: ((tab1.val + tab1.val2)) + -> Result (cost=0.00..1.02 rows=1 width=8) + Output: val, val2, (val + val2) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +-- group by with aggregates in expression +select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; + ?column? 
| val2 +---------------------+------ + 17.6666666666666667 | 3 +(1 row) + +explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; + QUERY PLAN +---------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.02..1.07 rows=1 width=8) + Output: (((count(*) + sum(val)))::numeric + avg(val)), val2 + Filter: (min(tab1.val) < tab1.val2) + -> Sort (cost=1.02..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [2]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +drop table tab1; +drop table tab2; +-- Combination 4 enable_hashagg off and replicated tables. +-- repeat the same tests for replicated tables +-- create required tables and fill them with data +create table tab1 (val int, val2 int) distribute by replication; +create table tab2 (val int, val2 int) distribute by replication; +insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); +insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); +-- having clause not containing any aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; + count | sum | avg | ?column? 
| val2 +-------+-----+--------------------+------------------+------ + 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 +(1 row) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.03..1.08 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + -> Sort (cost=1.03..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(11 rows) + +-- having clause containing aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; + count | sum | avg | ?column? 
| val2 +-------+-----+--------------------+----------+------ + 2 | 8 | 4.0000000000000000 | 4 | 2 +(1 row) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.02..1.08 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: (avg(tab1.val) > 3.75) + -> Sort (cost=1.02..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; + count | sum | avg | ?column? 
| val2 +-------+-----+--------------------+------------------+------ + 2 | 8 | 4.0000000000000000 | 4 | 2 + 3 | 11 | 3.6666666666666667 | 3.66666666666667 | 3 +(2 rows) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.02..1.08 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: ((avg(tab1.val) > 3.75) OR (tab1.val2 > 2)) + -> Sort (cost=1.02..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; + count | sum | avg | ?column? 
| val2 +-------+-----+-----+----------+------ +(0 rows) + +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; + QUERY PLAN +------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.03..1.08 rows=1 width=8) + Output: count(*), sum(val), avg(val), ((sum(val))::double precision / (count(*))::double precision), val2 + Filter: (avg(tab1.val) > 3.75) + -> Sort (cost=1.03..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.02 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +-- joins and group by and having +select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; + count | sum | avg | ?column? 
| val2 | val2 +-------+-----+---------------------+----------+------+------ + 6 | 96 | 16.0000000000000000 | 16 | 2 | 2 +(1 row) + +explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; + QUERY PLAN +----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + GroupAggregate (cost=2.06..2.12 rows=1 width=16) + Output: count(*), sum((tab1.val * tab2.val)), avg((tab1.val * tab2.val)), ((sum((tab1.val * tab2.val)))::double precision / (count(*))::double precision), tab1.val2, tab2.val2 + -> Sort (cost=2.06..2.06 rows=1 width=16) + Output: tab1.val, tab2.val, tab1.val2, tab2.val2 + Sort Key: tab1.val2, tab2.val2 + -> Nested Loop (cost=0.00..2.05 rows=1 width=16) + Output: tab1.val, tab2.val, tab1.val2, tab2.val2 + Join Filter: ((tab1.val2 = tab2.val2) AND ((tab1.val2 + tab2.val2) > 2)) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: tab1.val, tab1.val2 + -> Data Node Scan (Node Count [1]) on tab1 (cost=0.00..1.01 rows=1000 width=8) + Output: tab1.val, tab1.val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: tab2.val, tab2.val2 + -> Data Node Scan (Node Count [1]) on tab2 (cost=0.00..1.01 rows=1000 width=8) + Output: tab2.val, tab2.val2 +(16 rows) + +-- group by and having, without aggregate in the target list +select val2 from tab1 group by val2 having sum(val) > 8; + val2 +------ + 3 +(1 row) + +explain verbose select val2 from tab1 group by val2 having sum(val) > 8; + QUERY PLAN +---------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.02..1.05 rows=1 width=8) + Output: val2 + Filter: (sum(tab1.val) > 8) + -> Sort (cost=1.02..1.03 rows=1 width=8) + 
Output: val2, val + Sort Key: tab1.val2 + -> Result (cost=0.00..1.01 rows=1 width=8) + Output: val2, val + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +select val + val2 from tab1 group by val + val2 having sum(val) > 5; + ?column? +---------- + 4 + 8 + 9 +(3 rows) + +explain verbose select val + val2 from tab1 group by val + val2 having sum(val) > 5; + QUERY PLAN +---------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.03..1.05 rows=1 width=8) + Output: ((val + val2)) + Filter: (sum(tab1.val) > 5) + -> Sort (cost=1.03..1.03 rows=1 width=8) + Output: val, val2, ((val + val2)) + Sort Key: ((tab1.val + tab1.val2)) + -> Result (cost=0.00..1.02 rows=1 width=8) + Output: val, val2, (val + val2) + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +-- group by with aggregates in expression +select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; + ?column? 
| val2 +---------------------+------ + 17.6666666666666667 | 3 +(1 row) + +explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; + QUERY PLAN +---------------------------------------------------------------------------------------------- + GroupAggregate (cost=1.02..1.07 rows=1 width=8) + Output: (((count(*) + sum(val)))::numeric + avg(val)), val2 + Filter: (min(tab1.val) < tab1.val2) + -> Sort (cost=1.02..1.03 rows=1 width=8) + Output: val, val2 + Sort Key: tab1.val2 + -> Result (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Materialize (cost=0.00..1.01 rows=1 width=8) + Output: val, val2 + -> Data Node Scan (Node Count [1]) (cost=0.00..1.01 rows=1000 width=8) + Output: val, val2 +(12 rows) + +drop table tab1; +drop table tab2; +reset enable_hashagg; diff --git a/src/test/regress/serial_schedule b/src/test/regress/serial_schedule index e7f7b3f..66d965d 100644 --- a/src/test/regress/serial_schedule +++ b/src/test/regress/serial_schedule @@ -131,4 +131,5 @@ test: xml test: stats test: xc_groupby test: xc_distkey +test: xc_having diff --git a/src/test/regress/sql/xc_having.sql b/src/test/regress/sql/xc_having.sql new file mode 100644 index 0000000..c078c9c --- /dev/null +++ b/src/test/regress/sql/xc_having.sql @@ -0,0 +1,131 @@ +-- this file contains tests for HAVING clause with combinations of following +-- 1. enable_hashagg = on/off (to force the grouping by sorting) +-- 2. distributed or replicated tables across the datanodes +-- If a testcase is added to any of the combinations, please check if it's +-- applicable in other combinations as well. 
+ +-- Combination 1: enable_hashagg on and distributed tables +set enable_hashagg to on; +-- create required tables and fill them with data +create table tab1 (val int, val2 int); +create table tab2 (val int, val2 int); +insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); +insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); +-- having clause not containing any aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; +-- having clause containing aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2; +-- joins and group by and having +select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; +explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join 
tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2; +-- group by and having, without aggregate in the target list +select val2 from tab1 group by val2 having sum(val) > 8; +explain verbose select val2 from tab1 group by val2 having sum(val) > 8; +select val + val2 from tab1 group by val + val2 having sum(val) > 5; +explain verbose select val + val2 from tab1 group by val + val2 having sum(val) > 5; +-- group by with aggregates in expression +select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; +explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2; +drop table tab1; +drop table tab2; + +-- Combination 2, enable_hashagg on and replicated tables. +-- repeat the same tests for replicated tables +-- create required tables and fill them with data +create table tab1 (val int, val2 int) distribute by replication; +create table tab2 (val int, val2 int) distribute by replication; +insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3); +insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2); +-- having clause not containing any aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3; +-- having clause containing aggregate +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75; +select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2; +explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), 
val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2;
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2;
+-- joins and group by and having
+select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2;
+explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2;
+-- group by and having, without aggregate in the target list
+select val2 from tab1 group by val2 having sum(val) > 8;
+explain verbose select val2 from tab1 group by val2 having sum(val) > 8;
+select val + val2 from tab1 group by val + val2 having sum(val) > 5;
+explain verbose select val + val2 from tab1 group by val + val2 having sum(val) > 5;
+-- group by with aggregates in expression
+select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2;
+explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2;
+drop table tab1;
+drop table tab2;
+
+-- Combination 3 enable_hashagg off and distributed tables
+set enable_hashagg to off;
+-- create required tables and fill them with data
+create table tab1 (val int, val2 int);
+create table tab2 (val int, val2 int);
+insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3);
+insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2);
+-- having clause not containing any aggregate
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3;
+-- having clause containing aggregate
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75;
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2;
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2;
+-- joins and group by and having
+select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2;
+explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2;
+-- group by and having, without aggregate in the target list
+select val2 from tab1 group by val2 having sum(val) > 8;
+explain verbose select val2 from tab1 group by val2 having sum(val) > 8;
+select val + val2 from tab1 group by val + val2 having sum(val) > 5;
+explain verbose select val + val2 from tab1 group by val + val2 having sum(val) > 5;
+-- group by with aggregates in expression
+select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2;
+explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2;
+drop table tab1;
+drop table tab2;
+
+-- Combination 4 enable_hashagg off and replicated tables.
+-- repeat the same tests for replicated tables
+-- create required tables and fill them with data
+create table tab1 (val int, val2 int) distribute by replication;
+create table tab2 (val int, val2 int) distribute by replication;
+insert into tab1 values (1, 1), (2, 1), (3, 1), (2, 2), (6, 2), (4, 3), (1, 3), (6, 3);
+insert into tab2 values (1, 1), (4, 1), (8, 1), (2, 4), (9, 4), (3, 4), (4, 2), (5, 2), (3, 2);
+-- having clause not containing any aggregate
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having val2 + 1 > 3;
+-- having clause containing aggregate
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75;
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 or val2 > 2;
+select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2;
+explain verbose select count(*), sum(val), avg(val), sum(val)::float8/count(*), val2 from tab1 group by val2 having avg(val) > 3.75 and val2 > 2;
+-- joins and group by and having
+select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2;
+explain verbose select count(*), sum(tab1.val * tab2.val), avg(tab1.val*tab2.val), sum(tab1.val*tab2.val)::float8/count(*), tab1.val2, tab2.val2 from tab1 full outer join tab2 on tab1.val2 = tab2.val2 group by tab1.val2, tab2.val2 having tab1.val2 + tab2.val2 > 2;
+-- group by and having, without aggregate in the target list
+select val2 from tab1 group by val2 having sum(val) > 8;
+explain verbose select val2 from tab1 group by val2 having sum(val) > 8;
+select val + val2 from tab1 group by val + val2 having sum(val) > 5;
+explain verbose select val + val2 from tab1 group by val + val2 having sum(val) > 5;
+-- group by with aggregates in expression
+select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2;
+explain verbose select count(*) + sum(val) + avg(val), val2 from tab1 group by val2 having min(val) < val2;
+drop table tab1;
+drop table tab2;
+
+reset enable_hashagg;

-----------------------------------------------------------------------

Summary of changes:
 src/backend/pgxc/plan/planner.c         |    6 +-
 src/test/regress/expected/xc_having.out |  746 +++++++++++++++++++++++++++++++
 src/test/regress/serial_schedule        |    1 +
 src/test/regress/sql/xc_having.sql      |  131 ++++++
 4 files changed, 883 insertions(+), 1 deletions(-)
 create mode 100644 src/test/regress/expected/xc_having.out
 create mode 100644 src/test/regress/sql/xc_having.sql

hooks/post-receive
--
Postgres-XC
|
From: Michael P. <mic...@us...> - 2011-07-06 04:26:32
|
Project "Postgres-XC".

The branch, PGXC-merge9_1 has been deleted
       was  115f6971a95da236c91a9cebf1fa06a091634536

-----------------------------------------------------------------------
115f6971a95da236c91a9cebf1fa06a091634536 Merge commit 'a4bebdd92624e018108c2610fc3f2c1584b6c687' into PGXC-merge9_1
-----------------------------------------------------------------------

hooks/post-receive
--
Postgres-XC
|
From: Michael P. <mic...@us...> - 2011-07-05 07:31:07
|
Project "website".

The branch, master has been updated
       via  40e36fdcec40f733921471b43a5777c8425d20cd (commit)
      from  840b5e1950e29ef18adf871df493b2c17885bc84 (commit)

- Log -----------------------------------------------------------------
commit 40e36fdcec40f733921471b43a5777c8425d20cd
Author: Michael P <mic...@us...>
Date:   Tue Jul 5 16:30:34 2011 +0900

    Add links to GIT repository and project Wiki in menu

diff --git a/menu.html b/menu.html
index 2010eb1..209f36c 100755
--- a/menu.html
+++ b/menu.html
@@ -22,6 +22,12 @@
 <!-- Download -->
 <a href="download.html" target="main">Download</a>
 &nbsp;
+<!-- Wiki -->
+<a href="http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=Main_Page" target="_blank">Project Wiki</a>
+&nbsp;
+<!-- GIT repository -->
+<a href="http://postgres-xc.git.sourceforge.net/git/gitweb.cgi?p=postgres-xc/postgres-xc;a=summary" target="_blank">GIT repository</a>
+&nbsp;
 <!-- Development Home -->
 <a href="https://sourceforge.net/projects/postgres-xc/" target="_blank">
 Development
-----------------------------------------------------------------------

Summary of changes:
 menu.html |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

hooks/post-receive
--
website
|
From: Michael P. <mic...@us...> - 2011-07-05 07:22:35
|
Project "Postgres-XC".

The branch, pgxc-barrier has been deleted
       was  3b00d08bae2d2b48e957c61d9714a749d493a7ec

-----------------------------------------------------------------------
3b00d08bae2d2b48e957c61d9714a749d493a7ec Correction of spelling mistakes
-----------------------------------------------------------------------

hooks/post-receive
--
Postgres-XC
|