Age | Commit message | Author |
|
The new view lists all IO handles that are currently in use and is mainly
useful for PG developers, but may also be useful when tuning PG.
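For instance (a sketch; the view and column names are taken from the
PostgreSQL 18 AIO work and are not spelled out above):
  SELECT pid, state, operation, target_desc FROM pg_aios;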
Bumps catversion.
Reviewed-by: Noah Misch <[email protected]>
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt
|
|
Add a test to pg_upgrade's test suite that verifies that a
dump-restore-dump of the regression database produces output equivalent
to dumping it directly. This was already being tested by running
pg_upgrade itself, but non-binary-upgrade mode was not being covered.
The regression database has accrued, over time, a sufficient collection
of interesting objects to ensure good coverage, but there hasn't been a
concerted effort to be completely exhaustive, so there is likely still
room for more.
This'd belong more naturally in the pg_dump test suite, but we chose to
put it in src/bin/pg_upgrade/t/002_pg_upgrade.pl because it needs a run
of the regression tests, which is already done here, so this has less
total test runtime impact. Also, experiments have shown that using
parallel dump/restore is slightly faster, so we use --format=directory -j2.
This test has already reported pg_dump bugs, as fixed in fd41ba93e463,
74563f6b9021, d611f8b1587b, 4694aedf63bf.
Author: Ashutosh Bapat <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Discussion: https://www.postgresql.org/message-id/CAExHW5uF5V=Cjecx3_Z=7xfh4rg2Wf61PT+hfquzjBqouRzQJQ@mail.gmail.com
|
|
95d6e9af0 added code to display the tuplestore storage type for
WindowAgg nodes and added a test to ensure the "Disk" storage method was
working correctly by setting work_mem to 64 and running a test which
caused the WindowAgg to go to disk. Seemingly, the number of rows
chosen there wasn't quite enough for that to happen on 32-bit x86.
Fix this by increasing the number of rows slightly.
I suspect the buildfarm didn't catch this because MEMORY_CONTEXT_CHECKING
builds use a bit more memory per MemoryChunk to store the
requested_size, and also because of the additional space needed for the
chunk's sentinel byte.
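A sketch of the kind of query involved (row count and query shape are
hypothetical; with ANALYZE, the WindowAgg node reports its storage
method, e.g. "Storage: Disk"):
  SET work_mem = 64;
  EXPLAIN (ANALYZE, COSTS OFF)
  SELECT COUNT(*) OVER (ORDER BY g) FROM generate_series(1, 2000) g;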
Reported-by: Christoph Berg <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
Author: Corey Huinker <[email protected]>
Discussion: https://postgr.es/m/CADkLM=desCuf3dVHasADvdUVRmb-5gO0mhMO5u9nzgv6i7U86Q@mail.gmail.com
|
|
ExecInitPartitionInfo() duplicates much of the logic in
ExecInitMerge(), except that it failed to handle DO NOTHING
actions. This would cause an "unknown action in MERGE WHEN clause"
error if a MERGE with any DO NOTHING actions attempted to insert into
a partition not already initialised by ExecInitModifyTable().
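A minimal sketch of the failing shape (hypothetical names; the INSERT
must route to a partition that ExecInitModifyTable() did not initialise
up front):
  CREATE TABLE tgt (a int, b text) PARTITION BY LIST (a);
  CREATE TABLE tgt_p1 PARTITION OF tgt FOR VALUES IN (1);
  CREATE TABLE src (a int, b text);
  INSERT INTO src VALUES (1, 'x');
  MERGE INTO tgt t USING src s ON t.a = s.a
    WHEN MATCHED THEN DO NOTHING
    WHEN NOT MATCHED THEN INSERT VALUES (s.a, s.b);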
Bug: #18871
Reported-by: Alexander Lakhin <[email protected]>
Author: Tender Wang <[email protected]>
Reviewed-by: Gurjeet Singh <[email protected]>
Discussion: https://postgr.es/m/18871-b44e3c96de3bd2e8%40postgresql.org
Backpatch-through: 15
|
|
Continuation of work started in commit 15a79c73, after initial trial.
Author: Thomas Munro <[email protected]>
Discussion: https://postgr.es/m/b936d2fb-590d-49c3-a615-92c3a88c6c19%40eisentraut.org
|
|
Currently, group_similar_or_args() permutes the original positions of
clauses regardless of whether it manages to find any groups of similar
clauses. While we do not provide any strict guarantees about preserving
the original order of OR-clauses, it is preferable to modify the
original order as little as possible.
This commit changes the reordering algorithm of group_similar_or_args() in
the following way. We reorder each group of similar clauses so that the
first item of the group stays in place, but all the other items are moved
after it. So, if there are no similar clauses, the order of clauses stays
the same. When there are some groups, only the required reordering
happens, while the rest of the clauses remain in place.
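For example (illustrative, with a hypothetical table t):
  -- The arms on "a" are grouped where the first of them stood, and the
  -- clause on "b" keeps its position:
  SELECT * FROM t WHERE a = 1 OR b = 2 OR a = 3;
  -- may now be planned as if written: WHERE a = ANY ('{1,3}') OR b = 2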
Reported-by: Andrei Lepikhov <[email protected]>
Discussion: https://postgr.es/m/3ac7c436-81e1-4191-9caf-b0dd70b51511%40gmail.com
Reviewed-by: Pavel Borisov <[email protected]>
Reviewed-by: Andrei Lepikhov <[email protected]>
Reviewed-by: Alena Rybakina <[email protected]>
|
|
The uuidv7_interval() function previously converted a shifted
microsecond-precision timestamp (64-bit integer) to another 64-bit
integer representing a timestamp with nanosecond precision. This
conversion caused overflow for dates beyond the year 2262. The
millisecond and sub-millisecond parts were then extracted from this
nanosecond-precision timestamp and stored in UUIDv7 values.
With this commit, the millisecond and sub-millisecond parts are stored
directly into the UUIDv7 value without being converted back to a
nanosecond precision timestamp. Following RFC 9562, the timestamp is
stored as an unsigned integer, enabling support for dates up to the
year 10889.
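For example (a sketch, using the uuidv7() function with its optional
interval shift):
  SELECT uuidv7(INTERVAL '300 years');  -- previously overflowed past 2262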
Reported and fixed by Andrey Borodin, with cosmetic changes and
regression tests by me.
Reported-by: Andrey Borodin <[email protected]>
Author: Andrey Borodin <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
This was left out of the original patch for virtual generated columns
(commit 83ea6c54025).
This just involves a bit of extra work in the executor to expand the
generation expressions and run an "IS NOT NULL" test against them.
There is also a bit of work to make sure that not-null constraints are
checked during a table rewrite.
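A sketch (hypothetical table):
  CREATE TABLE t (
    a int,
    b int GENERATED ALWAYS AS (a * 2) VIRTUAL NOT NULL
  );
  INSERT INTO t VALUES (1);     -- ok, b expands to 2
  INSERT INTO t VALUES (NULL);  -- fails: the generation expression is NULL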
Author: jian he <[email protected]>
Reviewed-by: Xuneng Zhou <[email protected]>
Reviewed-by: Navneet Kumar <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Discussion: https://postgr.es/m/CACJufxHArQysbDkWFmvK+D1TPHQWWTxWN15cMuUaTYX3xhQXgg@mail.gmail.com
|
|
The problem is that after the ALTER SUBSCRIPTION tap_sub SET PUBLICATION
command, we didn't wait for the new walsender to start on the publisher.
Immediately after the ALTER, we performed an INSERT and expected it to
replicate. However, replication could start from a point after the
INSERT's location, and as the subscription isn't copying initial data,
we could miss such an INSERT.
The fix is to wait for the connection to be established between publisher
and subscriber before starting DML operations that are expected to
replicate.
As per CI.
Reported-by: Andres Freund <[email protected]>
Author: Hayato Kuroda <[email protected]>
Discussion: https://postgr.es/m/CALDaNm2ms1deM5EYNLFEfESv_Kw=Y4AiTB0LP=qGS-UpFwGbPg@mail.gmail.com
|
|
Make pg_constraint output in inherit test show the convalidated column
as well. This shows the interaction between convalidated and
conenforced.
This is extracted from a larger patch so that this reformatting isn't
distracting there.
Author: Amul Sul <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/CAAJ_b962c5AcYW9KUt_R_ER5qs3fUGbe4az-SP-vuwPS-w-AGA@mail.gmail.com
|
|
Commit d45597f72fe5 introduced the ability to change a not-null
constraint from NO INHERIT to INHERIT and vice versa, but we included
the SET noise word in the syntax for it. The SET turns out not to be
necessary and goes against what the SQL standard says for other ALTER
TABLE subcommands, so remove it.
This changes the way this command is processed for constraint types
other than not-null, so there are some error message changes.
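The syntax is now (sketch, hypothetical names):
  ALTER TABLE t ALTER CONSTRAINT t_a_not_null INHERIT;
  ALTER TABLE t ALTER CONSTRAINT t_a_not_null NO INHERIT;
  -- previously: ALTER TABLE t ALTER CONSTRAINT t_a_not_null SET INHERIT;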
Reviewed-by: Peter Eisentraut <[email protected]>
Reviewed-by: Suraj Kharage <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
It seems potentially useful to label our shared libraries with version
information, now that a facility exists for retrieving that. This
patch labels them with the PG_VERSION string. There was some
discussion about using semantic versioning conventions, but that
doesn't seem terribly helpful for modules with no SQL-level presence;
and for those that do have SQL objects, we typically expect them
to support multiple revisions of the SQL definitions, so it'd still
not be very helpful.
I did not label any of src/test/modules/. It seems unnecessary since
we don't install those, and besides there ought to be someplace that
still provides test coverage for the original PG_MODULE_MAGIC macro.
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
These are useful general-purpose math functions which are included in
POSIX and C99, and are commonly included in other math libraries, so
expose them as SQL-callable functions.
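For instance (assuming these are the gamma-family functions added
around this time; the names are not spelled out above):
  SELECT gamma(5);    -- 24, i.e. 4!
  SELECT lgamma(101); -- log-gamma, usable where gamma() itself would overflow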
Author: Dean Rasheed <[email protected]>
Reviewed-by: Stepan Neretin <[email protected]>
Reviewed-by: Peter Eisentraut <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Dmitry Koval <[email protected]>
Reviewed-by: Alexandra Wang <[email protected]>
Discussion: https://postgr.es/m/CAEZATCXpGyfjXCirFk9au+FvM0y2Ah+2-0WSJx7MO368ysNUPA@mail.gmail.com
|
|
We need the -I switch for libpq_srcdir to come before any -I switches
injected by configure. Otherwise there is a risk of pulling in a
mismatched version of libpq_fe.h from someplace like
/usr/local/include, if the platform has another Postgres version
installed there. This evidently accounts for today's buildfarm
failures on "anaconda".
In principle the -I switch for src/port/ is at similar hazard, and has
been for a very long time. But the only .h files we keep there are
pg_config_paths.h and pthread-win32.h, neither of which get installed
on Unix-ish systems, so the odds of picking up a conflicting header
seem pretty small. That doubtless accounts for the lack of prior
reports.
Back-patch to v17 where pg_regress acquired a build dependency on
libpq_fe.h. We could go back further to fix the hazard for src/port/
in older branches, but it seems unlikely to be worth troubling over.
Reported-by: Nathan Bossart <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/Z-MhRzoc7t-nPUQG@nathan
Backpatch-through: 17
|
|
This new option instructs pg_dump to dump sequence data when the
--no-data, --schema-only, or --statistics-only option is specified.
This was originally considered for commit a7e5457db8, but it was
left out at that time because there was no known use-case. A
follow-up commit will use this to optimize pg_upgrade's file
transfer step.
Reviewed-by: Robert Haas <[email protected]>
Discussion: https://postgr.es/m/Zyvop-LxLXBLrZil%40nathan
|
|
For import and export, use schemaname/relname rather than
regclass.
This is more natural during export, fits with the other arguments
better, and it gives better control over error handling in case we
need to downgrade more errors to warnings.
Also, use text for the argument types for schemaname, relname, and
attname so that casts to "name" are not required.
Author: Corey Huinker <[email protected]>
Discussion: https://postgr.es/m/CADkLM=ceOSsx_=oe73QQ-BxUFR2Cwqum7-UP_fPe22DBY0NerA@mail.gmail.com
|
|
Previously we used pg_int64 in three function prototypes in libpq. It
was added by commit 461ef73f to expose the platform-dependent type used
for int64 in the C89 era. As of commit 962da900 it is defined as
standard int64_t, and the dust seems to have settled.
Let's just use int64_t directly in these three client-facing functions
instead of (yet) another name. We've required C99 and thus <stdint.h>
since PostgreSQL 12, C89 and C++98 compilers are long gone, and client
applications very likely use standard types for their own 64-bit needs.
This also cleans up the obscure placement of a new #include <stdint.h>
directive in postgres_ext.h, required for the new definition. The
typedef was hiding in there for historical reasons, but it doesn't fit
postgres_ext.h's own description of its purpose and there is no evidence
of client applications including postgres_ext.h directly to see it.
Keep a typedef marked deprecated for backward compatibility, but move it
into libpq-fe.h where it was used.
Reviewed-by: Peter Eisentraut <[email protected]>
Discussion: https://postgr.es/m/CA%2BhUKGKn_EkNNGMY5RzMcKP%2Ba6urT4JF%3DCPhw_zHtQwjvX6P2g%40mail.gmail.com
|
|
Author: vignesh C <[email protected]>
Discussion: https://postgr.es/m/CALDaNm2ms1deM5EYNLFEfESv_Kw=Y4AiTB0LP=qGS-UpFwGbPg@mail.gmail.com
Discussion: https://postgr.es/m/CABdArM7FW-_dnthGkg2s0fy1HhUB8C3ELA0gZX1kkbs1ZZoV3Q@mail.gmail.com
|
|
The bitmap heap scan skip fetch optimization skips fetching the heap
block when a page is set all-visible in the visibility map and no
columns from the table are needed to satisfy the query.
2b73a8cd33b and c3953226a07 changed the control flow of bitmap heap scan
to use the read stream API. The read stream API returns buffers
containing blocks to the user. To make this work with the skip fetch
optimization, we keep a count of the empty tuples we need to emit for
all the blocks skipped and only emit the empty tuples after processing
the next block fetched from the heap or at the end of the scan.
It's incorrect to recheck NULL tuples, so we must set `recheck` to false
before yielding control back to BitmapHeapNext(). This was done before
emitting any remaining empty tuples at the end of the scan but not for
empty tuples emitted during the scan. This meant that if a page fetched
from the heap did require recheck and set `recheck` to true and then we
emitted empty tuples for subsequent blocks, we would get wrong results.
Fix this by always setting `recheck` to false before emitting empty
tuples.
Reported-by: Alexander Lakhin <[email protected]>
Tested-by: Andres Freund <[email protected]>
Discussion: https://postgr.es/m/496f7acd-881c-4df3-9bd3-8f8534dfec26%40gmail.com
|
|
Reviewed-By: Julien Rouhaud <[email protected]>
Reviewed-By: Michael Banck <[email protected]>
Reviewed-By: Yugo Nagata <[email protected]>
Reviewed-By: Nathan Bossart <[email protected]>
Reviewed-By: Jelte Fennema-Nio <[email protected]>
Discussion: https://postgr.es/m/CABUevEyTMyXC6OvCWkj+rPnHrfi8_Rw_+DD_jzgFFNPqgf+Oig@mail.gmail.com
|
|
We haven't had bugs in this area, but there's some not-entirely
trivial code to detect that case, so it seems good to have test
coverage for it.
Author: Peter Smith <[email protected]>
Reviewed-by: vignesh C <[email protected]>
Reviewed-by: Tomas Vondra <[email protected]>
Discussion: https://www.postgresql.org/message-id/CAHut%2BPtX8P0EGhsk9p%[email protected]
|
|
Introduce a new conflict type, multiple_unique_conflicts, to handle cases
where an incoming row during logical replication violates multiple UNIQUE
constraints.
Previously, the apply worker detected and reported only the first
encountered key conflict (insert_exists/update_exists), causing repeated
failures, as each constraint violation had to be handled one at a time,
making the process slow and error-prone.
With this patch, the apply worker checks all unique constraints upfront
once the first key conflict is detected and reports
multiple_unique_conflicts if multiple violations exist. This allows users
to resolve all conflicts at once by deleting all conflicting tuples rather
than dealing with them individually or skipping the transaction.
In the future, this will also allow us to specify different resolution
handlers for such a conflict type.
Add the stats for this conflict type in pg_stat_subscription_stats.
Author: Nisha Moond <[email protected]>
Author: Zhijie Hou <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Reviewed-by: Peter Smith <[email protected]>
Reviewed-by: Dilip Kumar <[email protected]>
Discussion: https://postgr.es/m/CABdArM7FW-_dnthGkg2s0fy1HhUB8C3ELA0gZX1kkbs1ZZoV3Q@mail.gmail.com
|
|
Previously there was no coverage for this function.
Author: Aleksander Alekseev <[email protected]>
Reviewed-by: Peter Smith <[email protected]>
Reviewed-by: Rustam ALLAKOV <[email protected]>
Discussion: https://postgr.es/m/CAJ7c6TMT6XCooMVKnCd_tR2oBdGcnjefSeCDCv8jzKy9VkWA5w@mail.gmail.com
|
|
This new test checks all of pg_upgrade's file transfer modes. For
each mode, we verify that pg_upgrade either succeeds (and some test
objects successfully reach the new version) or fails with an error
that indicates the mode is not supported on the current platform.
For cross-version tests, we also check that pg_upgrade transfers
non-default tablespaces. (Tablespaces can't be tested on same
version upgrades because of the version-specific subdirectory
conflict, but we might be able to enable such tests once we teach
pg_upgrade how to handle in-place tablespaces.)
Suggested-by: Robert Haas <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Discussion: https://postgr.es/m/Zyvop-LxLXBLrZil%40nathan
|
|
This new parameter works just like the storage parameter of the
same name: if set to true (which is the default), autovacuum and
VACUUM attempt to truncate any empty pages at the end of the table.
It is primarily intended to help users avoid locking issues on hot
standbys. The setting can be overridden with the storage parameter
or VACUUM's TRUNCATE option.
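A sketch, assuming the GUC is named vacuum_truncate like the storage
parameter it mirrors:
  ALTER SYSTEM SET vacuum_truncate = off;   -- e.g. to protect hot standbys
  SELECT pg_reload_conf();
  ALTER TABLE t SET (vacuum_truncate = on); -- storage parameter overrides the GUC
  VACUUM (TRUNCATE true) t;                 -- per-command override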
Since there's presently no way to determine whether a Boolean
storage parameter is explicitly set or has just picked up the
default value, this commit also introduces an isset_offset member
to relopt_parse_elt.
Suggested-by: Will Storey <[email protected]>
Author: Nathan Bossart <[email protected]>
Co-authored-by: Gurjeet Singh <[email protected]>
Reviewed-by: Laurenz Albe <[email protected]>
Reviewed-by: Fujii Masao <[email protected]>
Reviewed-by: Robert Treat <[email protected]>
Discussion: https://postgr.es/m/Z2DE4lDX4tHqNGZt%40dev.null
|
|
Change the expected extension to be installed from amcheck to plpgsql,
since not all buildfarm animals have the contrib module installed.
Author: Matheus Alcantara <[email protected]>
Reported-by: Tom Lane <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
The new GUC extension_control_path specifies a path to look for
extension control files. The default value is $system, which looks in
the compiled-in location, as before.
The path search uses the same code and works in the same way as
dynamic_library_path.
Some use cases of this are: (1) testing extensions during package
builds, (2) installing extensions outside security-restricted
containers like Postgres.app (on macOS), (3) adding extensions to
PostgreSQL running in a Kubernetes environment using operators such as
CloudNativePG without having to rebuild the base image for each new
extension.
There is also a tweak in Makefile.global so that it is possible to
install extensions using PGXS into a different directory than the
default, using 'make install prefix=/else/where'. This previously
only worked when specifying the subdirectories, like 'make install
datadir=/else/where/share pkglibdir=/else/where/lib', for purely
implementation reasons. (Of course, without the path feature,
installing elsewhere was rarely useful.)
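A sketch of the resulting workflow (GUC value format and directory
layout assumed, by analogy with dynamic_library_path):
  ALTER SYSTEM SET extension_control_path = '$system:/else/where/share/extension';
  SELECT pg_reload_conf();
  CREATE EXTENSION my_extension;  -- control file found via the new path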
Author: Peter Eisentraut <[email protected]>
Co-authored-by: Matheus Alcantara <[email protected]>
Reviewed-by: David E. Wheeler <[email protected]>
Reviewed-by: Gabriele Bartolini <[email protected]>
Reviewed-by: Marco Nenciarini <[email protected]>
Reviewed-by: Niccolò Fei <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
Currently, the only way to pipe queries in an ongoing pipeline (in a
\startpipeline block) is to leverage the meta-commands able to create
extended queries such as \bind, \parse or \bind_named.
While this is good enough for testing the backend with pipelines, it has
been mentioned that it can also be very useful to allow queries
terminated by semicolons to be appended to a pipeline. For example, it
would be possible to migrate existing psql scripts to use pipelines by
just adding a set of \startpipeline and \endpipeline meta-commands,
making such scripts more efficient.
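For example, a plain script becomes a single pipeline with just two
extra lines (sketch):
  \startpipeline
  SELECT 1;
  SELECT 2;
  \endpipeline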
Making such a change proves to be simple in psql: when pipeline mode is
active, queries terminated by semicolons can be executed through
PQsendQueryParams() without any parameters set, instead of the default
PQsendQuery() (as used by pgbench). \watch is still forbidden while in a
pipeline, as it expects its results to be processed synchronously.
The large portion of this commit consists in providing more test
coverage, with mixes of extended queries appended in a pipeline by \bind
and friends, and queries terminated by semicolons.
This improvement has been suggested by Daniel Vérité.
Author: Anthonin Bonnefoy <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
The test authorization server implemented in oauth_server.py does not
listen on IPv6. Most of the time, libcurl happily falls back to IPv4
after failing its initial connection, but on NetBSD, something is
consistently showing up on the unreserved IPv6 port and causing a test
failure.
Rather than deal with dual-stack details across all test platforms,
change the issuer to enforce the use of IPv4 only. (This elicits more
punishing timeout behavior from libcurl, so it's a useful change from
the testing perspective as well.)
Author: Jacob Champion <[email protected]>
Reported-by: Thomas Munro <[email protected]>
Discussion: https://postgr.es/m/CAOYmi%2Bn4EDOOUL27_OqYT2-F2rS6S%2B3mK-ppWb2Ec92UEoUbYA%40mail.gmail.com
|
|
Commit cbc127917e introduced tracking of unpruned relids to avoid
processing pruned relations, and changed ExecInitModifyTable() to
initialize only unpruned result relations. As a result, MERGE
statements that prune all target partitions can now lead to crashes
or incorrect behavior during execution.
The crash occurs because some executor code paths rely on
ModifyTableState.resultRelInfo[0] being present and initialized,
even when no result relations remain after pruning. For example,
ExecMerge() and ExecMergeNotMatched() use the first resultRelInfo
to determine the appropriate action. Similarly,
ExecInitPartitionInfo() assumes that at least one result relation
exists.
To preserve these assumptions, ExecInitModifyTable() now includes the
first result relation in the initialized result relation list if all
result relations for that ModifyTable were pruned. To enable that,
ExecDoInitialPruning() ensures the first relation is locked if it was
pruned and locking is necessary.
To support this exception to the pruning logic, PlannedStmt now
includes a list of RT indexes identifying the first result relation
of each ModifyTable node in the plan. This allows
ExecDoInitialPruning() to check whether each such relation was
pruned and, if so, lock it if necessary.
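A rough sketch of the affected shape (hypothetical names; the point is
that runtime pruning must remove every target partition):
  CREATE TABLE tgt (a int) PARTITION BY LIST (a);
  CREATE TABLE tgt_p1 PARTITION OF tgt FOR VALUES IN (1);
  PREPARE m(int) AS
    MERGE INTO tgt t USING (VALUES (1)) s(a)
      ON t.a = s.a AND t.a = $1
    WHEN NOT MATCHED THEN DO NOTHING;
  EXECUTE m(2);  -- pruning on $1 can leave no initialised result relation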
Bug: #18830
Reported-by: Robins Tharakan <[email protected]>
Diagnosed-by: Tender Wang <[email protected]>
Diagnosed-by: Dean Rasheed <[email protected]>
Co-authored-by: Dean Rasheed <[email protected]>
Reviewed-by: Tender Wang <[email protected]>
Reviewed-by: Dean Rasheed <[email protected]>
Discussion: https://postgr.es/m/18830-1f31ea1dc930d444%40postgresql.org
|
|
bbf668d66fbf lowered the minimum value of maintenance_work_mem to
64kB. However, in parallel vacuum cases, since the initial underlying
DSA size is 256kB, parallel vacuum could attempt to perform a cycle of
index vacuuming and table vacuuming with an empty TID store, resulting
in an assertion failure.
This commit ensures that at least one page is processed before index
vacuuming and table vacuuming begin.
Backpatch to 17, where the minimum maintenance_work_mem value was
lowered.
Reviewed-by: David Rowley <[email protected]>
Discussion: https://postgr.es/m/CAD21AoCEAmbkkXSKbj4dB+5pJDRL4ZHxrCiLBgES_g_g8mVi1Q@mail.gmail.com
Backpatch-through: 17
|
|
This commit adds a new --missing-stats-only option that can be used
with --analyze-only or --analyze-in-stages. When this option is
specified, vacuumdb will analyze a relation if it lacks any
statistics for a column, expression index, or extended statistics
object. This new option is primarily intended for use after
pg_upgrade (since it can now retain most optimizer statistics), but
it might be useful in other situations, too.
Author: Corey Huinker <[email protected]>
Co-authored-by: Nathan Bossart <[email protected]>
Reviewed-by: John Naylor <[email protected]>
Discussion: https://postgr.es/m/Z5O1bpcwDrMgyrYy%40nathan
|
|
This commit contains the basic, system-wide, infrastructure for
io_method=worker. It does not yet actually execute IO; this commit just
provides the infrastructure for running IO workers, kept separate for
easier review.
The number of IO workers can be adjusted with a PGC_SIGHUP GUC. Eventually
we'd like to make the number of workers dynamically scale up/down based on the
current "IO load".
To allow the number of IO workers to be increased without a restart, we need
to reserve PGPROC entries for the workers unconditionally. This has been
judged to be worth the cost. If it turns out to be problematic, we can
introduce a PGC_POSTMASTER GUC to control the maximum number.
As IO workers might be needed during shutdown, e.g. for AIO during the
shutdown checkpoint, a new PMState phase is added. IO workers are shut down
after the shutdown checkpoint has been performed and walsender/archiver have
shut down, but before the checkpointer itself shuts down. See also
87a6690cc69.
Updates PGSTAT_FILE_FORMAT_ID due to the addition of a new BackendType.
Reviewed-by: Noah Misch <[email protected]>
Co-authored-by: Thomas Munro <[email protected]>
Co-authored-by: Andres Freund <[email protected]>
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/stj36ea6yyhoxtqkhpieia2z4krnam7qyetc57rfezgk4zgapf@gcnactj4z56m
|
|
Since many clients default to the X25519 curve in the TLS handshake,
the fact that the server by default doesn't support it causes an extra
round trip for each TLS connection. By adding multiple curves, which
has been supported since 3d1ef3a15c3eb68da, we can reduce the risk of
extra round trips.
Author: Daniel Gustafsson <[email protected]>
Co-authored-by: Jacob Champion <[email protected]>
Reported-by: Andres Freund <[email protected]>
Reviewed-by: Jacob Champion <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
In the initial pipeline support for psql added in 41625ab8ea3d, \g was
used as the way to push an extended query into an ongoing pipeline. \gx
was blocked.
These two meta-commands have format-related options that can be applied
when fetching a query result (expanded, etc.). As the results of a
pipeline are fetched asynchronously, not at the moment of the
meta-command execution but at the moment of a \getresults or a
\endpipeline, authorizing \g while blocking \gx leads to a confusing
implementation, making one think that psql should be smart enough to
remember the output format options defined from the time when \g or \gx
were executed. Doing so would lead to more code complications when
retrieving a batch of results. There is an extra argument other than
simplicity here: the output format options defined at the point of a
\getresults or a \endpipeline execution should be what affects the output
format for a batch of results.
To avoid any confusion, we have settled to the introduction of a new
meta-command called \sendpipeline, replacing \g when within a pipeline.
An advantage of this design is that it is possible to add new options
specific to pipelines when sending a query buffer, independent of \g
and \gx, should it prove to be necessary.
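Usage then looks like this (a sketch):
  \startpipeline
  SELECT $1 \bind 1 \sendpipeline
  SELECT $1, $2 \bind 2 3 \sendpipeline
  \endpipeline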
Most of the changes of this commit happen in the regression tests, where
\g is replaced by \sendpipeline. More tests are added to check that \g
is not allowed.
Per discussion between the author, Daniel Vérité and me.
Author: Anthonin Bonnefoy <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
Added tests:
- recovery from running out of unpinned local buffers
- that we don't run out of unpinned buffers due to the read stream (only recently
fixed, in 92fc6856cb4)
- temp tables can't be dropped while in use by cursors
Discussion: weskknhckugbdm2yt7sa2uq53xlsax67gcdkac34sanb7qpd3p@hcc2wadao5wy
Discussion: https://postgr.es/m/ge6nsuddurhpmll3xj22vucvqwp4agqz6ndtcf2mhyeydzarst@l75dman5x53p
|
|
This commit addresses some inconsistencies with how the options of some
routines from PostgreSQL/Test/ are written, mainly for init() and
init_from_backup() in Cluster.pm. These are written unquoted, except
in the locations updated here.
Changes extracted from a larger patch by the same author.
Author: Dagfinn Ilmari Mannsåker <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
This commit reshapes the grammar of some commands to apply a more
consistent style across the board, following rules similar to
ce1b0f9da03e:
- Elimination of some pointless used-once variables.
- Use of long options, to self-document better the options used.
- Use of fat commas to link option names and their assigned values,
including redirections, so that perltidy can be tricked into putting
them together.
Author: Dagfinn Ilmari Mannsåker <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
It doesn't actually work, even with allow_system_table_mods turned on:
the ALTER TABLE operation is rejected by ATSimplePermissions(), so even
the error message we're adding in this commit is unreachable.
Add a test case for it.
Author: Nikolay Shaplov <[email protected]>
Discussion: https://postgr.es/m/1913854.tdWV9SEqCh@thinkpad-pgpro
|
|
The problem is that ALTER SUBSCRIPTION ... SET PUBLICATION ... leads to
a restart of the apply worker, and after the restart, the apply worker
will use the existing slot and replication origin corresponding to the
subscription. Now, it is possible that before the restart, the origin
has not been updated, so the WAL start location may point to a position
before the publication specified by SET PUBLICATION was created, which
can lead to an error like: "ERROR: publication "pub1" does not exist".
Once this error occurs, the apply worker will never be able to proceed
and will always fail with the same error.
We decided to skip loading the publication if the publication does not
exist. The publication is loaded later and updates the relation entry when
the publication gets created.
We decided not to backpatch this as this is a behaviour change, and we don't
see field reports. This problem has been found by intermittent buildfarm
failures.
Author: vignesh C <[email protected]>
Reviewed-by: Dilip Kumar <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Reviewed-by: Masahiko Sawada <[email protected]>
Discussion: https://postgr.es/m/flat/CALDaNm0-n8FGAorM%2BbTxkzn%2BAOUyx5%3DL_XmnvOP6T24%2B-NcBKg%40mail.gmail.com
Discussion: https://postgr.es/m/CAA4eK1+T-ETXeRM4DHWzGxBpKafLCp__5bPA_QZfFQp7-0wj4Q@mail.gmail.com
|
|
If the given input_type yields valid results from both
get_element_type and get_array_type, initArrayResultAny believed the
former and treated the input as an array type. However this is
inconsistent with what get_promoted_array_type does, leading to
situations where the output of an ARRAY() subquery is labeled with
the wrong type: it's labeled as oidvector[] but is really a 2-D
array of OID. That at least results in strange output, and can
result in crashes if further processing such as unnest() is applied.
AFAIK this is only possible with the int2vector and oidvector
types, which are special-cased to be treated mostly as true arrays
even though they aren't quite.
Fix by switching the logic to match get_promoted_array_type by
testing get_array_type not get_element_type, and remove an Assert
thereby made pointless. (We need not introduce a symmetrical
check for get_element_type in the other if-branch, because
initArrayResultArr will check it.) This restores the behavior
that existed before bac27394a introduced initArrayResultAny:
the output really is int2vector[] or oidvector[].
Comparable confusion exists when an input of an ARRAY[] construct
is int2vector or oidvector: transformArrayExpr decides it's dealing
with a multidimensional array constructor, and we end up with
something that's a multidimensional OID array but is alleged to be
of type oidvector. I have not found a crashing case here, but it's
easy to demonstrate totally-wrong results. Adjust that code so
that what you get is an oidvector[] instead, for consistency with
ARRAY() subqueries. (This change also makes these types work like
domains-over-arrays in this context, which seems correct.)
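For example (sketch):
  SELECT pg_typeof(ARRAY(SELECT '1 2 3'::oidvector));  -- oidvector[] again
  SELECT pg_typeof(ARRAY['1 2'::int2vector]);          -- now int2vector[]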
Bug: #18840
Reported-by: yang lei <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 13
|
|
This commit introduces a function for reversing the order of the
bytes in binary strings.
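For example (a sketch, assuming the function is reverse(bytea); the
name is not spelled out above):
  SELECT reverse('\xdeadbeef'::bytea);  -- \xefbeadde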
Bumps catversion.
Author: Aleksander Alekseev <[email protected]>
Discussion: https://postgr.es/m/CAJ7c6TMe0QVRuNssUArbMi0bJJK32%2BzNA3at5m3osrBQ25MHuw%40mail.gmail.com
|
|
We want to support a "noreturn" decoration on more compilers besides
just GCC-compatible ones, but for that we need to move the decoration
in front of the function declaration instead of either behind it or
wherever, which is the current style afforded by GCC-style attributes.
Also rename the macro to "pg_noreturn" to be similar to the C11
standard "noreturn".
pg_noreturn is now supported on all compilers that support C11 (using
_Noreturn), as well as GCC-compatible ones (using __attribute__, as
before), as well as MSVC (using __declspec). (When PostgreSQL
requires C11, the latter two variants can be dropped.)
Now, all supported compilers effectively support pg_noreturn, so the
extra code for !HAVE_PG_ATTRIBUTE_NORETURN can be dropped.
This also fixes a possible problem if third-party code includes
stdnoreturn.h, because then the current definition of
#define pg_attribute_noreturn() __attribute__((noreturn))
would cause an error.
Note that the C standard does not support a noreturn attribute on
function pointer types. So we have to drop these here. There are
only two instances at this time, so it's not a big loss. In one case,
we can make up for it by adding the pg_noreturn to a wrapper function
and adding a pg_unreachable(); in the other case, the latter was
already done before.
Reviewed-by: Dagfinn Ilmari Mannsåker <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/pxr5b3z7jmkpenssra5zroxi7qzzp6eswuggokw64axmdixpnk@zbwxuq7gbbcw
|
|
When pulling up a subquery, if the subquery's target list items are
used in grouping set columns, we need to wrap them in PlaceHolderVars.
This ensures that expressions retain their separate identity so that
they will match grouping set columns when appropriate.
In 90947674f, we decided to wrap subquery outputs that are non-var
expressions in PlaceHolderVars. This prevents const-simplification
from merging them into the surrounding expressions after subquery
pullup, which could otherwise lead to failing to match those
subexpressions to grouping set columns, with the effect that they'd
not go to null when expected.
However, that left some loose ends. If the subquery's target list
contains two or more identical Var expressions, we can still fail to
match the Var expression to the expected grouping set expression.
This is not related to const-simplification, but rather to how we
match expressions to lower target items in setrefs.c.
For sort/group expressions, we use ressortgroupref matching, which
works well. For other expressions, we primarily rely on comparing the
expressions to determine if they are the same. Therefore, we need a
way to prevent setrefs.c from matching the expression to some other
identical ones.
To fix, wrap all subquery outputs in PlaceHolderVars if the parent
query uses grouping sets, ensuring that they preserve their separate
identity throughout the whole planning process.
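A sketch of the problematic shape (hypothetical table t with column x):
  -- "a" and "b" come from identical Vars in the pulled-up subquery;
  -- without the PlaceHolderVar wrapping, setrefs.c could match both
  -- grouping set columns to the same lower target item, so one of them
  -- could fail to go to NULL when expected:
  SELECT a, b
  FROM (SELECT x AS a, x AS b FROM t) s
  GROUP BY GROUPING SETS ((a), (b));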
Reported-by: Dean Rasheed <[email protected]>
Author: Richard Guo <[email protected]>
Reviewed-by: Dean Rasheed <[email protected]>
Discussion: https://postgr.es/m/CAMbWs4-meSahaanKskpBn0KKxdHAXC1_EJCVWHxEodqirrGJnw@mail.gmail.com
|
|
On Publication rename, we need to only invalidate the RelationSyncCache
entries corresponding to relations that are part of the publication being
renamed.
As part of this patch, we introduce a new invalidation message to
invalidate the cache maintained by the logical decoding output plugin. We
can't use existing relcache invalidation for this purpose, as that would
unnecessarily cause relcache invalidations in other backends.
This will improve performance by building fewer relation cache entries
during logical replication.
Author: Hayato Kuroda <[email protected]>
Author: Shlok Kyal <[email protected]>
Reviewed-by: Hou Zhijie <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Discussion: https://postgr.es/m/OSCPR01MB14966C09AA201EFFA706576A7F5C92@OSCPR01MB14966.jpnprd01.prod.outlook.com
|
|
Commit 3c152a27b06 mistakenly repeated JSONTYPE_JSON in a condition,
omitting JSONTYPE_CAST. As a result, datum_to_jsonb_internal() failed
to reject inputs that were casts (e.g., from an enum to json as in the
example below) when used as keys in JSON constructors.
This led to a crash in cases like:
SELECT JSON_OBJECT('happy'::mood: '123'::jsonb);
where 'happy'::mood is implicitly cast to json. The missing check
meant such casted values weren't properly rejected as invalid
(non-scalar) JSON keys.
Reported-by: Maciek Sakrejda <[email protected]>
Reviewed-by: Tender Wang <[email protected]>
Reviewed-by: Alvaro Herrera <[email protected]>
Reviewed-by: Maciek Sakrejda <[email protected]>
Discussion: https://postgr.es/m/CADXhmgTJtJZK9A3Na_ry+Xrq-ghjcejBRhcRMzWZvbd__QdgJA@mail.gmail.com
Backpatch-through: 17
|
|
The only caller, select_best_grantor(), can instead use
pg_popcount64(). This isn't performance-critical code, but we
might as well use the centralized implementation. While at it, add
some test coverage for this part of select_best_grantor().
Reviewed-by: Álvaro Herrera <[email protected]>
Discussion: https://postgr.es/m/Z9GtL7Nm6hsYyJnF%40nathan
|
|
makeWholeRowVar() has different rules for constructing a
whole-row Var depending on the kind of RTE it's representing.
This turns out to be problematic because the rewriter and planner
can convert view RTEs and set-returning-function RTEs into
subquery RTEs; so a whole-row Var made during planning might
look different from one made by the parser. In isolation this
doesn't cause any problem, but if a query contains Vars made
both ways for the same varno, there are cross-checks in the
executor that will complain. This manifests for UPDATE, DELETE,
and MERGE queries that use whole-row table references.
To fix, we need makeWholeRowVar() to produce the same result
from an inlined RTE as it would have for the original. For
an inlined view, we can use RangeTblEntry.relid to detect
that this had been a view RTE. For inlined SRFs, make a
data structure definition change akin to commit 47bb9db75,
and say that we won't clear RangeTblEntry.functions until
the end of planning. That allows makeWholeRowVar() to
repeat what it would have done with the unmodified RTE.
Reported-by: Duncan Sands <[email protected]>
Reported-by: Dean Rasheed <[email protected]>
Diagnosed-by: Tender Wang <[email protected]>
Author: Tom Lane <[email protected]>
Reviewed-by: Dean Rasheed <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 13
|