|
We create a coordinator-only LOCAL temporary table for REFRESH MATERIALIZED
VIEW CONCURRENTLY. Since this table does not exist on the remote nodes, we must
not use an explicit "ANALYZE <temptable>". Instead, just analyze it locally, as
we already do elsewhere.
Restore the matview test case to use REFRESH MATERIALIZED VIEW CONCURRENTLY now
that the underlying bug is fixed.
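A minimal reproduction of the affected path, assuming a hypothetical
hash-distributed table t and matview mv (CONCURRENTLY requires a unique index
on the matview):

    CREATE TABLE t (a int PRIMARY KEY, b int) DISTRIBUTE BY HASH (a);
    CREATE MATERIALIZED VIEW mv AS SELECT a, b FROM t;
    CREATE UNIQUE INDEX mv_a_idx ON mv (a);
    -- internally builds a coordinator-only LOCAL temp table, which is
    -- now analyzed locally instead of via a remote ANALYZE
    REFRESH MATERIALIZED VIEW CONCURRENTLY mv;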
|
|
|
|
The system may, and very likely will, choose different namespaces for temporary
tables on different nodes. So it was erroneous to explicitly add the
coordinator-side namespace to the queries constructed for fetching stats from
the remote nodes. A regression test had been failing non-deterministically for
this reason for a long time, but only now could we fully understand the problem
and fix it. We now use pg_my_temp_schema() to derive the current temporary
schema used by the remote node, instead of hardcoding it in the query using
coordinator-side information.
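To illustrate, the stats-fetching query sent to each datanode can now be
shaped roughly like this (a sketch; the temp table name and column list are
hypothetical), letting each node resolve its own temp schema:

    SELECT s.staattnum, s.stanullfrac, s.stawidth, s.stadistinct
      FROM pg_statistic s
     WHERE s.starelid = (SELECT c.oid
                           FROM pg_class c
                          WHERE c.relname = 'analyze_tmp'
                            AND c.relnamespace = pg_my_temp_schema());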
|
|
|
|
Chi Gao and Hengbing Wang reported certain issues around transaction handling
and demonstrated via xlogdump how certain transactions were getting marked
committed/aborted repeatedly on a datanode. Attempting to abort an already
committed transaction results in a PANIC. Investigation uncovered a serious,
long-standing bug in transaction handling.
If the client is running in autocommit mode, we try to avoid starting a
transaction block on the datanode side if only one datanode is going to be
involved in the transaction. This is an optimisation to speed up short queries
touching only a single node. But when the query rewriter transforms a single
statement into multiple statements, we would still (and incorrectly) run each
statement in autocommit mode on the datanode. This can cause inconsistencies
when one statement commits but the next statement aborts. And it may also lead
to the PANIC situations if we continue to use the same global transaction
identifier for the statements.
This can also happen when the user invokes a user-defined function. If the
function has multiple statements, each statement will run in autocommit mode if
it is FQSed, again creating inconsistencies if a later statement in the
function fails.
We now have a more elaborate mechanism to handle autocommit and transaction
block requirements. The special casing for force_autocommit is removed, making
the behaviour more predictable. We also check specific conditions to ensure
that we don't mix up autocommit and transaction block for the same global xid.
Finally, if a query rewriter transforms a single statement into multiple
statements, we run those statements in a transaction block. Together these
changes should help us fix the problems.
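As an illustration of the rewriter case, a DO ALSO rule turns one client
statement into two datanode statements, which must now share a transaction
block (a sketch with hypothetical tables):

    CREATE TABLE accounts (id int, balance int);
    CREATE TABLE audit_log (id int, note text);
    CREATE RULE accounts_audit AS ON INSERT TO accounts
        DO ALSO INSERT INTO audit_log VALUES (NEW.id, 'inserted');
    -- one statement, rewritten into two; previously each could run in
    -- autocommit mode with the same global xid, now they run in a
    -- transaction block
    INSERT INTO accounts VALUES (1, 100);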
|
|
We do some special processing for RemoteSubplan with returning lists. But the
EXPLAIN mechanism is not adequately equipped to handle that special crafting,
so for now do not try to print the target list in the EXPLAIN output.
|
|
ANALYZE was not collecting index statistics, which may have a negative impact,
for example, on selectivity estimates for expressions. This also fixes some
incorrect plan changes in the updatable_views regression test.
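For example, the statistics built for an expression index are what drive
selectivity estimates for the indexed expression (a sketch; table and index
names are hypothetical):

    CREATE TABLE people (name text);
    CREATE INDEX people_lower_idx ON people (lower(name));
    ANALYZE people;   -- now also collects stats for lower(name)
    -- the planner can use those stats to estimate this qual:
    SELECT * FROM people WHERE lower(name) = 'alice';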
Discussion: <[email protected]>
|
|
While rewriting UPDATE/DELETE commands in rewriteTargetListUD, we've
been pulling all Vars from quals, and adding them to target lists. As
multiple Vars may reference the same column, this sometimes produced
plans with duplicate targetlist entries like this one:
Update on public.t111
-> Index Scan using t1_a_idx on public.t1
Output: 100, t1.b, t1.c, t1.a, t1.a, t1.a, t1.a, t1.a, t1.a,
t1.a, t1.a, t1.ctid
-> ...
Getting rid of the duplicate entries would be simple - before adding an entry
for each Var, check that a matching entry does not already exist. The question,
however, is whether we actually need any of this.
The comment in rewriteTargetListUD() claims we need to add the Vars
because of "coordinator quals" - which is not really defined anywhere,
but it probably means quals evaluated at the Remote Subquery node.
But we push all quals to the remote node, so there should not be any
cases where a qual would have to be evaluated locally (or where that
would be preferable).
So just remove all the relevant code from rewriteHandler.c, which
means we produce this plan instead:
Update on public.t111
-> Index Scan using t1_a_idx on public.t1
Output: 100, t1.b, t1.c, t1.ctid
-> ...
This affects a number of plans in regression tests, but the changes
seem fine - we simply remove unnecessary target list entries.
I've also added an assert to EXPLAIN enforcing the "no quals" rule
for Remote Subquery nodes.
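For reference, a query of roughly this shape (a hypothetical reconstruction,
not the actual regression query) would have produced the duplicates, since
every qual reference to t1.a used to add another targetlist entry:

    UPDATE t111 SET a = 100
     WHERE a > 0 AND a < 1000 AND a <> 5
       AND a IS NOT NULL AND a % 2 = 0;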
Discussion: <[email protected]>
|
|
When checking if a query is eligible for FQS (fast-query shipping),
disable the optimization for queries in SCROLL cursors, as FQS does
not support backward scans.
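For example (a sketch), this cursor must not be shipped, because the backward
fetch cannot be executed remotely:

    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT * FROM t ORDER BY a;
    FETCH FORWARD 10 FROM c;
    FETCH BACKWARD 5 FROM c;   -- requires backward scan support
    COMMIT;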
Discussion: <[email protected]>
|
|
Our efforts to improve shared queue synchronization continue. We now have a
per-queue producer lwlock that must be held for synchronization between
consumers and the producer. Consumers must hold this lock before setting the
producer latch, to ensure the producer does not miss any signals and does not
go into unnecessary waits.
We still can't get rid of all the timeouts; in particular, we see that
sometimes a producer finishes and tries to unbind from the queue even before a
consumer gets a chance to connect to it. We have left the 10s wait in place to
allow consumers to connect. There is still a net improvement, because when a
consumer is not going to connect, it tells the producer and we avoid the 10s
timeout that we used to see earlier.
|
|
gcc 6.4.1 complains when ExecRemoteQuery(PlanState *) gets called with a
(RemoteSubqueryState *) parameter. This commit adds explicit casts in a few
places to silence the warning noise.
An alternative fix might be to change the parameter type to
(RemoteSubqueryState *), but that does not quite work, as ResponseCombiner
needs to keep a pointer to either ExecRemoteQuery or ExecRemoteSubplan. So the
explicit cast seems better.
|
|
This is the merge-base of PostgreSQL's master branch and REL_10_STABLE branch.
This should be the last merge from PG's master branch into XL 10 branch.
Subsequent merges must happen from the REL_10_STABLE branch.
|
|
|
|
Author: Masahiko Sawada <[email protected]>
|
|
This affects mostly code comments, some documentation, and tests.
Official APIs already used "standby".
|
|
find_composite_type_dependencies correctly found columns that are of
the specified type, and columns that are of arrays of that type, but
not columns that are domains or ranges over the given type, its array
type, etc. The most general way to handle this seems to be to assume
that any type that is directly dependent on the specified type can be
treated as a container type, and processed recursively (allowing us
to handle nested cases such as ranges over domains over arrays ...).
Since a type's array type already has such a dependency, we can drop
the existing special case for the array type.
The very similar logic in get_rels_with_domain was likewise a few
bricks shy of a load, as it supposed that a directly dependent type
could *only* be a sub-domain. This is already wrong for ranges over
domains, and it'll someday be wrong for arrays over domains.
Add test cases illustrating the problems, and back-patch to all
supported branches.
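In the spirit of the added test cases, dependencies like this one must now be
detected (a sketch; names are illustrative):

    CREATE TYPE comptype AS (r float8, i float8);
    CREATE DOMAIN dcomptype AS comptype;
    CREATE TABLE dcomptable (d dcomptype);
    -- the domain-over-composite column must now be found, so this is
    -- rejected while dcomptable exists:
    ALTER TYPE comptype ALTER ATTRIBUTE r TYPE varchar;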
Discussion: https://postgr.es/m/[email protected]
|
|
Similar to what was fixed in commit 9915de6c1cb2 for replication slots,
but this time it's related to replication origins: DROP SUBSCRIPTION
attempts to drop the replication origin, but that fails if the
replication worker process hasn't yet marked it unused. This causes
failures in the buildfarm:
ERROR: could not drop replication origin with OID 1, in use by PID 34069
Like the aforementioned commit, fix by having the process running DROP
SUBSCRIPTION sleep until the worker marks the replication origin
struct as free. This uses a condition variable on each replication
origin shmem state struct, so that the session trying to drop can sleep
and expect to be awakened by the process keeping the origin open.
Also fix an SGML markup error in the previous commit.
Discussion: https://postgr.es/m/[email protected]
|
|
We now also perform these checks during CREATE TABLE. The alter_table test
case is amended so that a few tables are distributed using the round-robin
method, keeping the new checks/limitations out of their way. New test cases
are also added to ensure that the other inheritance checks are exercised too.
|
|
Some authentication methods allowed it, others did not. On the client side,
libpq does not even try to authenticate with an empty password, which makes
using empty passwords hazardous: an administrator might think that an
account with an empty password cannot be used to log in, because psql
doesn't allow it, and not realize that a different client would in fact
allow it. To clear that confusion and to be consistent, disallow empty
passwords in all authentication methods.
All the authentication methods that used plaintext authentication over the
wire, except for BSD authentication, already checked that the password
received from the user was not empty. To avoid forgetting it in the future
again, move the check to the recv_password_packet function. That only
forbids using an empty password with plaintext authentication, however.
MD5 and SCRAM need a different fix:
* In stable branches, check that the MD5 hash stored for the user does not
correspond to an empty string. This adds some overhead to MD5
authentication, because the server needs to compute an extra MD5 hash, but
it is not noticeable in practice.
* In HEAD, modify CREATE and ALTER ROLE to clear the password if an empty
string, or a password hash that corresponds to an empty string, is
specified. The user-visible behavior is the same as in the stable branches,
the user cannot log in, but it seems better to stop the empty password from
entering the system in the first place. Secondly, it is fairly expensive to
check that a SCRAM hash doesn't correspond to an empty string, because
computing a SCRAM hash is much more expensive than an MD5 hash by design,
so better avoid doing that on every authentication.
We could clear the password on CREATE/ALTER ROLE also in stable branches,
but we would still need to check at authentication time, because even if we
prevent empty passwords from being stored in pg_authid, there might be
existing ones there already.
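On HEAD, the observable behavior is roughly this (a sketch):

    CREATE ROLE alice LOGIN PASSWORD '';
    -- the empty password is discarded, as if PASSWORD NULL had been
    -- given, so password authentication for alice simply fails
    ALTER ROLE alice PASSWORD '';   -- likewise clears any stored password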
Reported by Jeroen van der Ham, Ben de Graaff and Jelte Fennema.
Security: CVE-2017-7546
|
|
Reported-by: Peter Geoghegan <[email protected]>
|
|
Reported-by: Yugo Nagata <[email protected]>
|
|
The NOTICE messages about tables being added or removed during
subscription refresh would be incorrect and possibly confusing if the
transaction rolls back, so silence them but keep them available for
debugging.
Discussion: https://www.postgresql.org/message-id/CAD21AoAvaXizc2h7aiNyK_i0FQSa-tmhpdOGwbhh7Jy544Ad4Q%40mail.gmail.com
|
|
Tidy-up for commit eccead9ed.
|
|
Supporting ICU 4.2 seems useful because it ships with CentOS 6.
Versions before ICU 4.6 don't support pkg-config, so document an
installation method without using pkg-config.
In ICU 4.2, ucol_getKeywordsForLocale() sometimes returns values that
will not be accepted by uloc_toLanguageTag(). Skip loading keyword
variants in that version.
Reported-by: Victor Wagner <[email protected]>
|
|
If the table being attached had different attribute numbers than the
parent, the old code could incorrectly decide it needed to be scanned.
Amit Langote, reviewed by Ashutosh Bapat
Discussion: http://postgr.es/m/CA+TgmobexgbBr2+Utw-pOMw9uxaBRKRjMW_-mmzKKx9PejPLMg@mail.gmail.com
|
|
This allows a transaction abort to avoid killing those workers.
Author: Petr Jelinek <[email protected]>
|
|
We were incorrectly overwriting the 'cached' value in the SeqTable element,
thus causing another request to the GTM when nextval is fetched. This resulted
in unintentional gaps in the sequence values. This patch fixes that, though
we might still get gaps unless sequence_range is set to 1. But this is by
design to reduce repeated round trips to the GTM.
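A sketch of the trade-off (assuming sequence_range is the session setting
referred to above):

    CREATE SEQUENCE s;
    SELECT nextval('s');   -- 1
    SELECT nextval('s');   -- 2; previously the overwritten 'cached' value
                           -- could force another GTM request and skip values
    SET sequence_range = 1;   -- trades GTM round trips for gapless values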
|
|
Amit Langote
Discussion: http://postgr.es/m/CAFjFpReT_kq_uwU_B8aWDxR7jNGE=P0iELycdq5oupi=xSQTOw@mail.gmail.com
|
|
Amit Langote
Discussion: http://postgr.es/m/CAFjFpReT_kq_uwU_B8aWDxR7jNGE=P0iELycdq5oupi=xSQTOw@mail.gmail.com
|
|
Otherwise, partitioned tables with RETURNING expressions or subject
to a WITH CHECK OPTION do not work properly.
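For instance (a sketch using PG 10 syntax):

    CREATE TABLE p (a int, b text) PARTITION BY LIST (a);
    CREATE TABLE p1 PARTITION OF p FOR VALUES IN (1);
    INSERT INTO p VALUES (1, 'x') RETURNING a, b;   -- RETURNING path

    CREATE VIEW pv AS SELECT * FROM p WHERE a = 1 WITH CHECK OPTION;
    INSERT INTO pv VALUES (1, 'y');   -- WITH CHECK OPTION path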
Amit Langote, reviewed by Amit Khandekar and Etsuro Fujita. A few
comment changes by me.
Discussion: http://postgr.es/m/[email protected]
|
|
Most of our collations code has special handling for the locale names
"C" and "POSIX", allowing those collations to be used whether or not
the system libraries think those locale names are valid, or indeed
whether said libraries even have any locale support. But we missed
handling things that way in CREATE COLLATION. This meant you couldn't
clone the C/POSIX collations, nor explicitly define a new collation
using those locale names, unless the libraries allow it. That's pretty
pointless, as well as being a violation of pg_newlocale_from_collation's
API specification.
The practical effect of this change is quite limited: it allows creating
such collations even on platforms that don't HAVE_LOCALE_T, and it allows
making "POSIX" collation objects on Windows, which before this would only
let you make "C" collation objects. Hence, even though this is a bug fix
IMO, it doesn't seem worth the trouble to back-patch.
In passing, suppress the DROP CASCADE detail messages at the end of the
collation regression test. I'm surprised we've never been bit by
message ordering issues there.
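Concretely, both of these now work regardless of what the system libraries
support (a sketch):

    CREATE COLLATION c_clone FROM "C";
    CREATE COLLATION posix_coll (locale = 'POSIX');
    SELECT 'B' < 'a' COLLATE c_clone;   -- true: plain byte-order comparison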
Per report from Murtuza Zabuawala.
Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
|
|
Produce the right error message for COPY (SELECT INTO) queries, that is
ERROR: COPY (SELECT INTO) is not supported
instead of the incorrect
ERROR: COPY query must have a RETURNING clause
The root cause is that the check in BeginCopy() was testing raw_query,
but XL wraps the original command in RawStmt, so we should be checking
raw_query->stmt instead.
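For example (a sketch; tmp_t is a hypothetical target):

    COPY (SELECT 1 AS x INTO tmp_t) TO stdout;
    ERROR:  COPY (SELECT INTO) is not supported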
|
|
Commit fd31cd265138 renamed the variable to skipping_blocks, but forgot
to update this comment.
Noticed while inspecting code.
|
|
Before, we always used a dummy value of 1, but that's not right when
the partitioned table being modified is inside of a WITH clause
rather than part of the main query.
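The affected shape is a data-modifying CTE targeting a partitioned table, for
example (a sketch; p is a hypothetical partitioned table):

    WITH upd AS (
        UPDATE p SET b = b || '!' WHERE a = 1 RETURNING a
    )
    SELECT count(*) FROM upd;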
Amit Langote, reported and reviewed by Etsuro Fujita, with a comment
change by me.
Discussion: http://postgr.es/m/[email protected]
|
|
When running ANALYZE on a coordinator, we simply fetch the statistics
built on datanodes, and keep stats from a random datanode (assuming all
datanodes are similar in terms of data volume and data distribution).
This was only done for regular per-attribute stats, though, not for the
extended statistics added in PostgreSQL 10, causing various failures in
stats_ext tests due to missing statistics. This commit fixes this gap
by using the same approach as for simple statistics - we collect stats
from datanodes and keep the first result we receive for each statistic.
While working on this I realized this approach has some inherent issues,
particularly on columns that are distribution keys. As we keep stats
from a random node, we completely ignore MCV and histograms from the
remaining nodes. That may cause planning issues, but addressing it is
out of scope for this commit.
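A sketch of the now-working flow (PG 10 syntax; the table is hypothetical):

    CREATE TABLE t2 (a int, b int) DISTRIBUTE BY HASH (a);
    CREATE STATISTICS t2_deps (dependencies) ON a, b FROM t2;
    -- the coordinator now also fetches the extended statistics built on
    -- the datanodes, keeping the first result received
    ANALYZE t2;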
|
|
This merge includes all commits up to bc2d716ad09fceeb391c755f78c256ddac9d3b9f
of PG 10.
|
|
To optimise and simplify XL's distributed query planning, we enforce that all
partitions of a partitioned table use the same distribution strategy. We also
impose the further restriction that all columns in the partitions and the
partitioned table have matching positions. This can cause some problems when
tables have dropped columns etc., but we think it's far better to optimise
XL's plans than to support every corner case. We can look at removing some of
these restrictions later, once the more common queries run faster.
These restrictions allow us to unconditionally push down Append and MergeAppend
nodes to datanodes when these nodes are processing partitioned tables.
Some regression tests currently fail because of these added restrictions. We
will look at them in due course.
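A sketch of the enforced layout (XL syntax; names are hypothetical):

    CREATE TABLE m (a int, b int)
        PARTITION BY RANGE (a) DISTRIBUTE BY HASH (a);
    -- the partition inherits HASH (a); declaring an incompatible
    -- distribution for it would now be rejected
    CREATE TABLE m1 PARTITION OF m FOR VALUES FROM (1) TO (100);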
|
|
Commit c46c0e5202e8cfe750c6629db7852fdb15d528f3 failed to pass the
TransitionCaptureState object to ExecARInsertTriggers() in the case
where it's using heap_multi_insert and there are indexes. Repair.
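The broken path needed all three ingredients together: heap_multi_insert
(e.g. via COPY), an index, and a transition-table trigger. A sketch:

    CREATE TABLE t3 (a int);
    CREATE INDEX t3_a_idx ON t3 (a);   -- indexes trigger the buggy branch
    CREATE FUNCTION t3_count() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
        RAISE NOTICE 'inserted % rows', (SELECT count(*) FROM new_rows);
        RETURN NULL;
    END $$;
    CREATE TRIGGER t3_ins AFTER INSERT ON t3
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE PROCEDURE t3_count();
    COPY t3 FROM stdin;   -- COPY uses heap_multi_insert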
Thomas Munro, from a report by David Fetter
Discussion: https://postgr.es/m/20170708084213.GA14720%40fetter.org
|
|
On platforms lacking both locale_t and ICU, collationcmds.c failed
to make any use of its static function is_all_ascii(), thus probably
drawing a compiler warning. Oversight in my commit ddb5fdc06.
Per buildfarm member gaur.
|
|
Noticed while reviewing code.
|
|
Multiple switch statements on oclass values intentionally omit the default
clause. As the PGXC oclass options were missing from those switches, compilers
were complaining about it.
|
|
This avoids "tuple concurrently updated" errors when an ALTER or DROP
SUBSCRIPTION writes to pg_subscription_rel at the same time as a worker.
Author: Petr Jelinek <[email protected]>
|
|
This command used to compute the collencoding entry as if a completely new
collation were being created. But for example when copying the
"C" collation, this would then result in a collation that has a
collencoding entry for the current database encoding rather than -1,
thus not making an exact copy. This has probably no practical impact,
but making this change keeps the catalog contents neat.
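The effect is visible in the catalog (a sketch):

    CREATE COLLATION c_copy FROM "C";
    SELECT collname, collencoding
      FROM pg_collation WHERE collname = 'c_copy';
    -- collencoding is now -1 (encoding-independent), matching "C" itself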
Reported-by: Tom Lane <[email protected]>
|
|
We now disallow having triggers with both transition tables and ON
INSERT OR UPDATE (which was a PG extension to the spec anyway),
because in this case it's not at all clear how the transition tables
should work for an INSERT ... ON CONFLICT query. Separate ON INSERT
and ON UPDATE triggers with transition tables are allowed, and the
transition tables for these reflect only the inserted and only the
updated tuples respectively.
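In other words (a sketch; t and tfn are a hypothetical table and trigger
function):

    -- now rejected: one trigger for both events with a transition table
    CREATE TRIGGER t_ins_upd AFTER INSERT OR UPDATE ON t
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE PROCEDURE tfn();

    -- still allowed: separate triggers per event
    CREATE TRIGGER t_ins AFTER INSERT ON t
        REFERENCING NEW TABLE AS new_rows
        FOR EACH STATEMENT EXECUTE PROCEDURE tfn();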
Patch by Thomas Munro
Discussion: https://postgr.es/m/CAEepm%3D11KHQ0JmETJQihSvhZB5mUZL2xrqHeXbCeLhDiqQ39%3Dw%40mail.gmail.com
|
|
The original coding didn't handle this case properly; each separate
DML substatement needs its own set of transitions.
Patch by Thomas Munro
Discussion: https://postgr.es/m/CAL9smLCDQ%3D2o024rBgtD4WihzX8B3C6u_oSQ2K3%2BR5grJrV0bg%40mail.gmail.com
|
|
We disallow row-level triggers with transition tables on child tables.
Transition tables for triggers on the parent table contain only those
columns present in the parent. (We can't mix tuple formats in a
single transition table.)
Patch by Thomas Munro
Discussion: https://postgr.es/m/CA%2BTgmoZzTBBAsEUh4MazAN7ga%3D8SsMC-Knp-6cetts9yNZUCcg%40mail.gmail.com
|
|
This merges the current master branch of XL with the XL 10 development branch.
Commits up to f72330316ea5796a2b11a05710b98eba4e706788 are included in this
merge.
|
|
This commit merges the PG10 branch up to commit
2710ccd782d0308a3fa1ab193531183148e9b626. Regression tests show no noteworthy
additional failures. This merge includes major pgindent work done with the
newer version of pgindent
|
|
pg_import_system_collations() refused to create any ICU collations if
the current database's encoding didn't support ICU. This is wrongheaded:
initdb must initialize pg_collation in an encoding-independent way
since it might be used in other databases with different encodings.
The reason for the restriction seems to be that get_icu_locale_comment()
used icu_from_uchar() to convert the UChar-format display name, and that
unsurprisingly doesn't know what to do in unsupported encodings.
But by the same token that the initial catalog contents must be
encoding-independent, we can't allow non-ASCII characters in the comment
strings. So we don't really need icu_from_uchar() here: just check for
Unicode codes outside the ASCII range, and if there are none, the format
conversion is trivial. If there are some, we can simply not install the
comment. (In my testing, this affects only Norwegian Bokmål, which has
given us trouble before.)
For paranoia's sake, also check for non-ASCII characters in ICU locale
names, and skip such locales, as we do for libc locales. I don't
currently have a reason to believe that this will ever reject anything,
but then again the libc maintainers should have known better too.
With just the import changes, ICU collations can be found in pg_collation
in databases with unsupported encodings. This resulted in more or less
clean failures at runtime, but that's not how things act for unsupported
encodings with libc collations. Make it work the same as our traditional
behavior for libc collations by having collation lookup take into account
whether is_encoding_supported_by_icu().
Adjust documentation to match. Also, expand Table 23.1 to show which
encodings are supported by ICU.
catversion bump because of likely change in pg_collation/pg_description
initial contents in ICU-enabled builds.
Discussion: https://postgr.es/m/[email protected]
|
|
The maxResultSize argument of uloc_getDisplayName is the number of
UChars in the output buffer, not the number of bytes. In principle
this could result in a stack smash, although at least in my Fedora 25
install there are no ICU locales with display names long enough to
overrun the buffer. But it's easily proven to be wrong by reducing
the length of displayname to around 20, whereupon a stack smash
does happen.
(This is a rather scary bug, because the same mistake could easily
have been made in other places; but in a quick code search looking
at uses of UChar I could not find any other instances.)
|