check_foreign_key incorrectly used a single cache entry for its saved
plans for a 'c' (cascade) trigger, although there are two different
queries to execute depending on whether it fires for an update or a
delete. This caused the wrong things to be done if both types of
event occur in one session. (This was indeed visible in the triggers
regression test, but apparently nobody ever questioned it.) To fix,
add the operation type to the cache key.
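A sketch of the keying scheme (identifier layout illustrative, not the
committed code):
    /* Include the firing operation in the saved-plan key, so the UPDATE
     * and DELETE cascade plans can no longer collide. */
    bool    is_update = TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event);
    char    ident[NAMEDATALEN * 2 + 16];
    snprintf(ident, sizeof(ident), "%s$%u$%c",
             trigger->tgname,
             RelationGetRelid(trigdata->tg_relation),
             is_update ? 'U' : 'D');   /* the new part of the key */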
check_foreign_key's debug log output failed to distinguish update from
delete events, too.
Also, change the intended trigger usage from BEFORE ROW to AFTER ROW,
and add checks insisting on that usage. BEFORE is really rather
unsafe, since if there are other BEFORE triggers they might change or
cancel the operation we are trying to check. AFTER triggers are the
standard way to propagate changes to other rows, so we should follow
that way here.
In passing, remove a useless duplicate lookup of the cache entry.
This code is mostly intended as a documentation example, so we
won't consider a back-patch.
Author: Dmitrii Bondar <[email protected]>
Reviewed-by: Paul Jungwirth <[email protected]>
Reviewed-by: Lilian Ontowhee <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
In some edge cases valgrind flags issues with the memory referenced by
IOs. All of the cases addressed in this change are false positives.
Most of the false positives are caused by UnpinBuffer[NoOwner] marking buffer
data as inaccessible. This happens even though the AIO subsystem still holds a
pin. That's good: there shouldn't be accesses to the buffer outside of
AIO-related code until it is pinned by "user" code again. But it requires some
explicit work: if the buffer is not pinned by the current backend, we need to
explicitly mark the buffer data accessible/inaccessible while executing
completion callbacks.
That, however, causes a cascading issue in IO workers: after the completion
callbacks for a buffer are executed, the page is marked as inaccessible. If
the same worker subsequently executes IO targeting the same buffer, we would
get an error, as the memory is still marked inaccessible. To avoid that,
we need to explicitly mark the memory as accessible in IO workers.
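A minimal sketch of the worker-side pattern, using the backend's Valgrind
wrapper macros (the surrounding calls are illustrative):
    /* The completion callback left this buffer marked NOACCESS; make it
     * accessible before the worker touches it, and hide it again after.
     * UNDEFINED rather than DEFINED, since the contents are about to be
     * rewritten. */
    VALGRIND_MAKE_MEM_UNDEFINED(bufdata, BLCKSZ);
    execute_io(bufdata);        /* illustrative worker IO */
    VALGRIND_MAKE_MEM_NOACCESS(bufdata, BLCKSZ);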
Another issue is that IO executed in workers or via io_uring will not mark
memory as DEFINED. In the case of workers that is because valgrind does not
track memory definedness across processes. For io_uring that is because
valgrind does not understand io_uring, and therefore its IOs never mark memory
as defined, whether the completions are processed in the defining process or
in another context. It's not entirely clear how to best solve that. The
current user of AIO is not affected, as it explicitly marks buffers as DEFINED
& NOACCESS anyway. Defer solving this issue until we have a user with
different needs.
Per buildfarm animal skink.
Reviewed-by: Noah Misch <[email protected]>
Co-authored-by: Noah Misch <[email protected]>
Discussion: https://postgr.es/m/3pd4322mogfmdd5nln3zphdwhtmq3rzdldqjwb2sfqzcgs22lf@ok2gletdaoe6
|
|
This mirrors 1e0dfd166b3 (+ 46ef520b9566), for temporary table buffers. This
is mainly interesting right now because the AIO work currently triggers
spurious valgrind errors, and the fix for that is cleaner if temp buffers
behave the same as shared buffers.
This requires one change beyond the annotations themselves, namely to pin
local buffers while writing them out in FlushRelationBuffers().
Reviewed-by: Noah Misch <[email protected]>
Co-authored-by: Noah Misch <[email protected]>
Discussion: https://postgr.es/m/3pd4322mogfmdd5nln3zphdwhtmq3rzdldqjwb2sfqzcgs22lf@ok2gletdaoe6
|
|
Oversight in cf2655a9029a.
Author: Zhijie Hou <[email protected]>
Discussion: https://postgr.es/m/OS3PR01MB5718DD1466E2B9043448AE5094AA2@OS3PR01MB5718.jpnprd01.prod.outlook.com
|
|
This changes the check for valid characters in the salt string to
only allow plain ASCII letters and digits. The previous coding was
locale-dependent, which doesn't really seem like a great idea here;
moreover it could not work correctly in multibyte encodings.
This fixes a careless pointer-use-after-pfree, too.
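A locale-independent test of that kind reduces to plain byte comparisons;
as a sketch:
    /* Accept only plain ASCII letters and digits, regardless of locale
     * or encoding.  (Sketch, not the committed code.) */
    static bool
    salt_char_ok(unsigned char c)
    {
        return (c >= 'a' && c <= 'z') ||
               (c >= 'A' && c <= 'Z') ||
               (c >= '0' && c <= '9');
    }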
Reported-by: Tom Lane <[email protected]>
Reported-by: Andres Freund <[email protected]>
Author: Bernd Helmle <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
The list of transform objects that a function should use is specified
in CREATE FUNCTION's TRANSFORM clause, and then represented indirectly
in pg_proc.protrftypes. However, ProcedureCreate completely ignored
that for purposes of constructing pg_depend entries, and instead made
the function depend on any transforms that exist for its parameter or
return data types. This is bad in both directions: the function could
be made dependent on a transform it does not actually use, or it
could try to use a transform that's since been dropped. (The latter
scenario would require use of a transform that's not for any of the
parameter or return types, but that seems legit for cases where the
function performs SQL operations internally.)
To fix, pass in the list of transform objects that CreateFunction
identified, and build pg_depend entries from that, not from the
parameter/return types. This results in changes in the expected
test outputs in contrib/bool_plperl, which I guess are due to
different ordering of pg_depend entries -- that test case is
surely not exercising either of the problem scenarios.
This fix is not back-patchable as-is: changing the signature of
ProcedureCreate seems too risky in stable branches. We could
do something like making ProcedureCreate a wrapper around
ProcedureCreateExt or so. However, I'm more inclined to do
nothing in the back branches. We had no field complaints up to
now, so the hazards don't seem to be a big issue in practice.
And we couldn't do anything about existing pg_depend entries,
so a back-patched fix would result in a mishmash of dependencies
created according to different rules. That cure could be worse
than the disease, perhaps.
I bumped catversion just to lay down a marker that the expected
contents of pg_depend are a bit different than before.
Reported-by: Chapman Flack <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
This allows invalid not-null constraints to be added without scanning the
table, and validated afterwards without holding an access exclusive lock
on the table, once any violating rows have been deleted or fixed.
Doing ALTER TABLE ... SET NOT NULL for a column that has an invalid
not-null constraint validates that constraint. ALTER TABLE .. VALIDATE
CONSTRAINT is also supported. There are various checks on whether an
invalid constraint is allowed in a child table when the parent table has
a valid constraint; this should match what we do for enforced/not
enforced constraints.
pg_attribute.attnotnull is now only an indicator for whether a not-null
constraint exists for the column; whether it's valid or invalid must be
queried in pg_constraint. Applications can continue to query
pg_attribute.attnotnull as before, but now it's possible that null values
are present in the column even when that's set to true.
For backend internal purposes, we cache the nullability status in
CompactAttribute->attnullability that each tuple descriptor carries
(replacing CompactAttribute.attnotnull, which was a mirror of
Form_pg_attribute.attnotnull). During the initial tuple descriptor
creation, based on the pg_attribute scan, we set this to UNRESTRICTED if
pg_attribute.attnotnull is false, or to UNKNOWN if it's true; then we
update the latter to VALID or INVALID depending on the pg_constraint
scan. This flag is also copied when tupledescs are copied.
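In outline, the cached state is a four-valued flag along these lines (the
names are illustrative, not necessarily the committed ones):
    /* Sketch of the per-attribute nullability cache described above. */
    typedef enum AttrNullability
    {
        ATTNULLABILITY_UNRESTRICTED,    /* pg_attribute.attnotnull is false */
        ATTNULLABILITY_UNKNOWN,         /* attnotnull true, pg_constraint not yet scanned */
        ATTNULLABILITY_VALID,           /* constraint exists and is validated */
        ATTNULLABILITY_INVALID          /* constraint exists but is invalid */
    } AttrNullability;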
Comparing tuple descs for equality must also compare the
CompactAttribute.attnullability flag and return false in case of a
mismatch.
pg_dump deals with these constraints by storing the OIDs of invalid
not-null constraints in a separate array, and running a query to obtain
their properties. The regular table creation SQL omits them entirely.
They are then dealt with in the same way as "separate" CHECK
constraints, and dumped after the data has been loaded. Because no
additional pg_dump infrastructure was required, we don't bump its
version number.
I decided not to bump catversion either, because the old catalog state
works perfectly in the new world. (Trying to run with new catalog state
and the old server version would likely run into issues, however.)
System catalogs do not support invalid not-null constraints (because
commit 14e87ffa5c54 didn't allow them to have pg_constraint rows
anyway).
Author: Rushabh Lathia <[email protected]>
Author: Jian He <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Tested-by: Ashutosh Bapat <[email protected]>
Discussion: https://postgr.es/m/CAGPqQf0KitkNack4F5CFkFi-9Dqvp29Ro=EpcWt=4_hs-Rt+bQ@mail.gmail.com
|
|
Quote file names, and mostly avoid hard-coded file names. Along the way,
make a few other minor improvements.
Discussion: https://postgr.es/m/[email protected]
|
|
This escape shows the numeric server IP address that the client
has connected to. Unix-socket connections will show "[local]".
Non-client processes (e.g. background processes) will show "[none]".
We expect that this option will be of interest to only a fairly
small number of users. Therefore the implementation is optimized
for the case where it's not used (that is, we don't do the string
conversion until we have to), and we've not added the field to
csvlog or jsonlog formats.
Author: Greg Sabino Mullane <[email protected]>
Reviewed-by: Cary Huang <[email protected]>
Reviewed-by: David Steele <[email protected]>
Reviewed-by: Jim Jones <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/CAKAnmmK-U+UicE-qbNU23K--Q5XTLdM6bj+gbkZBZkjyjrd3Ow@mail.gmail.com
|
|
This reverts commit c313fa4602defe1be947370ab5b217ca163a1e3c.
This is found to cause issues on x86_64 Windows even when using UCRT.
Discussion: https://postgr.es/m/[email protected]
|
|
If the limit returned by GetAdditionalPinLimit() is large, the buffer_limit
variable in read_stream_start_pending_read() can overflow. While the code is
careful to limit buffer_limit to PG_INT16_MAX, we subsequently add the number of
forwarded buffers.
The overflow can lead to assertion failures, crashes or wrong query results
when using large shared buffers.
It seems easier to avoid this if we make the buffer_limit variable an int,
instead of an int16. Do so, and clamp buffer_limit after adding the number of
forwarded buffers.
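The shape of the fix, roughly (names other than buffer_limit and
GetAdditionalPinLimit() are illustrative):
    /* Use a plain int so the addition below cannot overflow int16. */
    int     buffer_limit;
    buffer_limit = Min(GetAdditionalPinLimit(), PG_INT16_MAX);
    buffer_limit += nforwarded;                     /* forwarded buffers */
    buffer_limit = Min(buffer_limit, PG_INT16_MAX); /* clamp after adding */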
It's possible we might want to address this and related issues by changing
from int16 to int more widely, but since the consequences of
this bug can be confusing, it seems better to fix it now.
This bug was introduced in ed0b87caaca.
Discussion: https://postgr.es/m/ewvz3cbtlhrwqk7h6ca6cctiqh7r64ol3pzb3iyjycn2r5nxk5@tnhw3a5zatlr
|
|
fc069a3a6319 implements Self-Join Elimination (SJE) and provides a new GUC
variable: enable_self_join_elimination. This new GUC variable was marked
as GUC_NOT_IN_SAMPLE. However, enable_self_join_elimination is documented
and is no different from any other enable_* GUCs. Thus, remove
GUC_NOT_IN_SAMPLE from it and add it to postgresql.conf.sample.
Discussion: https://postgr.es/m/CAPpHfdsqMTEsmxk3aQwt6xPz%2BKpUELO%3D6fzmER9ZRGrbs4uMfA%40mail.gmail.com
Author: Tender Wang <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
|
|
The help message for WATCH_INTERVAL was hard to interpret and didn't
follow the style of other messages. This updates it to make it fit in
better and be easier to interpret.
Author: Daniel Gustafsson <[email protected]>
Reported-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: David G. Johnston <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
Introduced by 1495eff7bdb0.
Author: Kyotaro Horiguchi <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
The valid service file was not correctly formed, as append_to_file() was
called with an array as input. This is changed so that the parameter and
value pairs from the valid connection string are appended to the valid
service file one by one.
Even with the first issue fixed, the tests should have failed. However,
they were passing because all the connection attempts relied on the
default values given to PGPORT and PGHOST from the node when using
Cluster.pm's connect_ok() and connect_fails(), rather than the data in
the service file. The test is updated to use an interesting trick: a
dummy node is initialized but not started, and all the connection
attempts are done through it. This ensures that the data inside the
service file is used for all the connection tests. Note that breaking
the contents of the valid service file on purpose makes all the tests
that rely on it fail.
Issues introduced by 72c2f36d5727.
Author: Andrew Jackson <[email protected]>
Discussion: https://postgr.es/m/CAKK5BkG_6_YSaebM6gG=8EuKaY7_VX1RFgYeySuwFPh8FZY73g@mail.gmail.com
|
|
palloc() is invoked with a specific formula for its allocation size in
quote_literal_cstr(). This wastes some memory, but the size is large
enough to cover even the worst-case scenarios.
No explanation was given about the reasoning behind these numbers. This
commit adds more documentation about all that.
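For reference, a sketch of the worst-case accounting being documented (the
committed comments may word it differently):
    len = strlen(rawstr);
    /* Worst case: every input byte must be doubled for escaping, plus
     * two surrounding quotes, a possible leading 'E', and the NUL. */
    result = palloc(len * 2 + 3 + 1);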
Author: Steve Chavez <[email protected]>
Discussion: https://postgr.es/m/CAGRrpzZ9bToRWS+fAnjxDJrxwZN1QcJ-y1Pn2yg=Hst6rydLtw@mail.gmail.com
|
|
stats_fetch_consistency set to "snapshot" causes the backend entry
"beentry" retrieved by pgstat_get_beentry_by_proc_number() to be reset
at the beginning of pgstat_fetch_stat_backend() when fetching the
backend pgstats entry. As coded, "beentry" was being accessed after
being freed. This commit moves all the accesses to "beentry" to happen
before calling pgstat_fetch_stat_backend(), fixing the problem.
This problem could be reached by calling the SQL functions
pg_stat_get_backend_io() or pg_stat_get_backend_wal().
Issue caught by valgrind.
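The ordering change, in outline (a sketch around the calls named above):
    /* Copy what we need out of "beentry" before calling
     * pgstat_fetch_stat_backend(), which may reset it under
     * stats_fetch_consistency = "snapshot". */
    beentry = pgstat_get_beentry_by_proc_number(procNumber);
    pid = beentry->st_procpid;      /* access before the fetch ... */
    backend_stats = pgstat_fetch_stat_backend(procNumber);
    /* ... and no accesses to "beentry" after this point */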
Reported-by: Alexander Lakhin <[email protected]>
Author: Bertrand Drouvot <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
The XLOG_CONTROL_FILE macro (defined in access/xlog_internal.h)
represents the control file name. While some parts of the codebase already
use this macro, others previously hardcoded the file name as a string.
This commit replaces those hardcoded strings with the macro,
ensuring consistent usage throughout the code. This makes future
maintenance easier and improves searchability, for example when
grepping for control file usage.
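The macro in question, as defined in access/xlog_internal.h:
    /* Name of the control file, relative to the data directory. */
    #define XLOG_CONTROL_FILE   "global/pg_control"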
Author: Anton A. Melnikov <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: Masao Fujii <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
Clarify the project naming in the history section of the docs
to match the recent license preamble changes.
Backpatch to all supported versions.
Author: Dave Page <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/CA+OCxozLzK2+Jc14XZyWXSp6L9Ot+3efwXUE35FJG=fsbib2EA@mail.gmail.com
Backpatch-through: 13
|
|
Coverity objected to the original code, and in any case this is much
cleaner, using the existing routine pg_check_dir() instead of rolling
its own test.
Per suggestion from Tom Lane.
|
|
Tweak column widths in a new table, similarly to some previous
fixes such as b62381d9a.
Per buildfarm.
|
|
This function was initializing the "task" variable before a couple
of early returns. To fix, postpone the initialization until just
before it's needed.
Per Coverity.
Discussion: https://postgr.es/m/Z_KMsUH2-FEbiNjC%40nathan
|
|
PgAioResult.result is never accessed in the relevant path, but Coverity
complains about an uninitialized access anyway. So just zero-initialize the
whole thing. While at it, reduce the scope of the variable.
Reported-by: Ranier Vilela <[email protected]>
Reviewed-by: Noah Misch <[email protected]>
Discussion: https://postgr.es/m/CAEudQApsKqd-s+fsUQ0OmxJAMHmBSXxrAz3dCs+uvqb3iRtjSw@mail.gmail.com
|
|
Per Coverity. I don't think these are of any actual significance
since the function ought to be invoked in a short-lived context.
Still, if it's trying to be neat it should get it right.
Also const-ify a constant and fix up typedef formatting.
|
|
This should silence Coverity's complaints about the result being
sometimes ignored.
I'm inclined to think that these routines are simply misdesigned,
because sometimes it's okay to ignore the result and sometimes it
isn't, and we have no way to enforce the latter. But for now
I just added a comment.
|
|
Coverity complained about a possible overrun in the copy, but there is
no actual need to copy the string at all.
|
|
Per complaint from Coverity.
|
|
There were several places in ordering-related planning where a
requirement for btree was hardcoded but an amcanorder index could
suffice. This fixes that. We just need to do the necessary mapping
between strategy numbers and compare types and adjust some related
APIs so that this works independent of btree strategy numbers. For
instance, non-btree amcanorder indexes can now be used to support
sorting and merge joins. Also, predtest.c works independent of btree
strategy numbers now.
To avoid performance regressions, some details on btree and other
built-in index types are still hardcoded as shortcuts, but other index
types now have access to the same features by providing the required
flags and callbacks.
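A sketch of the strategy-to-compare-type mapping this requires (the enum
values and translation callback are illustrative; the committed API may
differ):
    /* Translate an AM-specific strategy number into AM-independent
     * comparison semantics, instead of assuming btree's numbering. */
    typedef enum CompareKind { CMP_LT = 1, CMP_LE, CMP_EQ, CMP_GE, CMP_GT } CompareKind;
    static CompareKind
    strategy_to_compare(Oid amoid, StrategyNumber strategy)
    {
        if (amoid == BTREE_AM_OID)
            return (CompareKind) strategy;  /* btree: 1..5 map directly */
        return am_translate_strategy(amoid, strategy);  /* hypothetical AM callback */
    }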
Author: Mark Dilger <[email protected]>
Co-authored-by: Peter Eisentraut <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
|
|
This reverts commit c2d329260cd8.
Reported-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/D292EB44-806E-439A-82A4-491A1BA59E7A%40yesql.se
|
|
fc069a3a6319 implements Self-Join Elimination (SJE) and provides a new
GUC variable: enable_self_join_elimination. This commit adds
enable_self_join_elimination to the postgresql.conf.sample, as it was
forgotten in the original commit.
Discussion: https://postgr.es/m/CAHewXN%3D%2Bghd6O6im46q7j2u6c3H6vkXtXmF%3D_v4CfGSnjje8PA%40mail.gmail.com
Author: Tender Wang <[email protected]>
|
|
The previous implementation of CRC32C on x86 relied on the native
CRC32 instruction from the SSE 4.2 extension, which operates on
up to 8 bytes at a time. We can get a substantial speedup by using
carryless multiplication on SIMD registers, processing 64 bytes per
loop iteration. Shorter inputs fall back to ordinary CRC instructions.
On Intel Tiger Lake hardware (2020), CRC is now 50% faster for inputs
between 64 and 112 bytes, and 3x faster for 256 bytes.
The VPCLMULQDQ instruction on 512-bit registers has been available
on Intel hardware since 2019 and AMD since 2022. There is an older
variant for 128-bit registers, but at least on Zen 2 it performs worse
than normal CRC instructions for short inputs.
We must now do a runtime check, even for builds that target SSE
4.2. This doesn't matter in practice for WAL (arguably the most
critical case), because since commit e2809e3a1 the final computation
with the 20-byte WAL header is inlined and unrolled when targeting
that extension. Compared with two direct function calls, testing
showed equal or slightly faster performance in performing an indirect
function call on several dozen bytes followed by inlined instructions
on constant input of 20 bytes.
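The runtime check is the usual one-time dispatch through a function
pointer; a sketch, with illustrative names for the probe and the new
implementation:
    /* Choose an implementation once, based on a CPUID probe, and make
     * all later calls through the pointer. */
    static pg_crc32c (*pg_comp_crc32c) (pg_crc32c crc, const void *data, size_t len);
    if (cpu_has_vpclmulqdq())                   /* illustrative CPUID probe */
        pg_comp_crc32c = pg_comp_crc32c_pclmul; /* illustrative name */
    else
        pg_comp_crc32c = pg_comp_crc32c_sse42;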
The MIT-licensed implementation was generated with the "generate"
program from
https://github.com/corsix/fast-crc32/
Based on: "Fast CRC Computation for Generic Polynomials Using PCLMULQDQ
Instruction" V. Gopal, E. Ozturk, et al., 2009
Co-authored-by: Raghuveer Devulapalli <[email protected]>
Co-authored-by: Paul Amonson <[email protected]>
Reviewed-by: Nathan Bossart <[email protected]>
Reviewed-by: Andres Freund <[email protected]> (earlier version)
Reviewed-by: Matthew Sterrett <[email protected]> (earlier version)
Tested-by: Raghuveer Devulapalli <[email protected]>
Tested-by: David Rowley <[email protected]> (earlier version)
Discussion: https://postgr.es/m/BL1PR11MB530401FA7E9B1CA432CF9DC3DC192@BL1PR11MB5304.namprd11.prod.outlook.com
Discussion: https://postgr.es/m/PH8PR11MB82869FF741DFA4E9A029FF13FBF72@PH8PR11MB8286.namprd11.prod.outlook.com
|
|
Project standard is to quote filenames in error and log messages, which
commit 2da74d8d640 missed in two error messages.
Author: Kyotaro Horiguchi <[email protected]>
Reported-by: Kyotaro Horiguchi <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
makeDependencyGraphWalker thought that only SelectStmt nodes could
contain a WithClause. Which was true in our original implementation
of WITH, but astonishingly we missed updating this code when we added
the ability to attach WITH to INSERT/UPDATE/DELETE (and later MERGE).
Moreover, since it was coded to deliberately block recursion to a
WithClause, even updating raw_expression_tree_walker didn't save it.
The upshot of this was that we didn't see references to outer CTE
names appearing within an inner WITH, and would neither complain about
disallowed recursion nor account for such references when sorting CTEs
into a usable order. The lack of complaints about this is perhaps not
so surprising, because typical usage of WITH wouldn't hit either case.
Still, it's pretty broken; failing to detect recursion here leads to
assert failures or worse later on.
Fix by factoring out the processing of sub-WITHs into a new function
WalkInnerWith, and invoking that for all the statement types that
can have WITH.
Bug: #18878
Reported-by: Yu Liang <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 13
|
|
This adapts the publicly available reference implementation at
https://www.akkadia.org/drepper/SHA-crypt.txt, adding the new hash
algorithms sha256crypt and sha512crypt to crypt() and gen_salt(),
respectively.
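Per the SHA-crypt specification linked above, the two variants are
selected by the salt's magic prefix; in outline (dispatch names
illustrative):
    /* "$5$" selects sha256crypt, "$6$" selects sha512crypt, per
     * Drepper's specification. */
    if (strncmp(salt, "$5$", 3) == 0)
        return run_sha256crypt(passwd, salt);
    else if (strncmp(salt, "$6$", 3) == 0)
        return run_sha512crypt(passwd, salt);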
Author: Bernd Helmle <[email protected]>
Reviewed-by: Japin Li <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
transformJsonArrayQueryConstructor() applied transformStmt() to
the same subquery tree twice. While this causes no issue in many
cases, there are some where it causes a coredump, thanks to the
parser's habit of scribbling on its input.
Fix by making a copy before the first transformation (compare
0f43083d1). This is quite brute-force, but then so is the
whole business of transforming the input twice. Per discussion
in the bug thread, this implementation of json_array() parsing
should be replaced completely. But that will take some work
and will surely not be back-patchable, so for the moment let's
take the easy way out.
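The shape of the fix, as a one-line sketch at the first transformation:
    /* Analyze a throwaway copy, leaving the original tree pristine for
     * the second transformation. */
    query = transformStmt(pstate, copyObject(ctor->query));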
Oversight in 7081ac46a. Back-patch to v16 where that came in.
Bug: #18877
Reported-by: Yu Liang <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 16
|
|
Fix some comments, and remove the hacky way of quoting database names in
favor of appendStringLiteralConn.
|
|
This should make the test a wee bit faster on high-load machines (e.g.,
when running under valgrind).
Per complaint from Andres Freund.
Discussion: https://postgr.es/m/cwbcyjp2ts7o7xgy5y5gwtcd4zltvncsj67el7xgci7xbwrhlu@k363vk5tce4g
|
|
In commits 9c02e3a986da and 8ec0aaeae094, Nathan added a duplicate
TocEntry typedef forward declaration (plus assorted #ifdef hackery to
avoid C99 preprocessor issues) to deal with some very old untidiness
regarding the DefnDumperPtr function prototype being located in pg_backup.h.
But there's no reason to have the DefnDumperPtr typedef (and the
accompanying DataDumperPtr typedef) in that file at all; they are better
placed in pg_backup_archiver.h, the internal header, because they are
only used internally. That also requires zero #ifdef hackery, so move
them there.
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
Commit 9c02e3a986 taught pg_dump to retrieve attribute statistics
for 64 relations at a time. pg_dump supports dumping from v9.2 and
newer versions, but our query for retrieving statistics for
multiple relations uses WITH ORDINALITY and multi-argument
UNNEST(), both of which were introduced in v9.4. To fix, we resort
to gathering statistics for a single relation at a time on versions
older than v9.4.
Per buildfarm member crake.
Author: Corey Huinker <[email protected]>
Discussion: https://postgr.es/m/Z_BcWVMvlUIJ_iuZ%40nathan
|
|
Since v15 we've had an option to apply a foreign key constraint's
ON DELETE SET DEFAULT or SET NULL action to just some of the
referencing columns. There was not a check for duplicate entries in
the list of columns-to-set, though. That caused a potential memory
stomp in CreateConstraintEntry(), which incautiously assumed that
the list of columns-to-set couldn't be longer than the number of key
columns. Even after fixing that, the case doesn't work because you
get an error like "multiple assignments to same column" from the SQL
command that is generated to do the update.
We could either raise an error for duplicate columns or silently
suppress the dups, and after a bit of thought I chose to do the
latter. This is motivated by the fact that duplicates in the FK
column list are legal, so it's not real clear why duplicates
in the columns-to-set list shouldn't be. Of course there's no
need to actually set the column more than once.
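Silently suppressing the duplicates amounts to a first-occurrence filter
over the columns-to-set list (a sketch, with illustrative names):
    /* Keep only the first occurrence of each attribute number. */
    int     nset = 0;
    for (int i = 0; i < nproposed; i++)
    {
        bool    seen = false;
        for (int j = 0; j < nset; j++)
        {
            if (setcols[j] == proposed[i])
                seen = true;
        }
        if (!seen)
            setcols[nset++] = proposed[i];
    }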
I left in the fix in CreateConstraintEntry() too, just because
it didn't seem like such low-level code ought to be making
assumptions about what it's handed.
Bug: #18879
Reported-by: Yu Liang <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 15
|
|
This is yet another bit of fallout from the fact that backend/parser
(like other code) feels free to scribble on the parse tree it's
handed. In this case that resulted in modifying the
relatively-short-lived copy in the cached function's source_list.
That would be fine since we only need each source_list tree once
... except that if the parser fails after making some changes,
the function cache entry remains as-is and will still be there
if the user tries to execute the function again. Then we have
problems because we're feeding a non-pristine tree to the parser.
The most expedient fix is a quick copyObject(). I considered
other answers like somehow marking the cache entry invalid
temporarily, but that would add complexity and I'm not sure
it's worth it. In typical scenarios we'd only do this once
per function query per session.
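The expedient fix, as a sketch at the point where the cached tree is
handed to parse analysis:
    /* The parser may scribble on its input and may fail after doing so;
     * analyze a copy so the cached source_list tree stays pristine for
     * later executions. */
    parsetree = copyObject(parsetree);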
Reported-by: Alexander Lakhin <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
Oversights in 1495eff7bdb and 289f74d0cb2. Mea culpa.
|
|
Commit 9c02e3a986 added a forward declaration for this typedef that
caused redeclarations, which is not valid in C99. To fix, add some
preprocessor guards to avoid a redefinition, as is done elsewhere
(e.g., commit 382092a0cd).
Per buildfarm.
|
|
Author: Matheus Alcantara <[email protected]>
Author: Mahendra Singh Thalor <[email protected]>
|
|
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, globals.data and map.dat. The
first contains SQL for restoring the global data, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases, the -f argument is required.
If pg_restore encounters a directory containing globals.dat, and no
toc.dat, it restores the global settings and then restores each
database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
Author: Mahendra Singh Thalor <[email protected]>
Co-authored-by: Andrew Dunstan <[email protected]>
Reviewed-by: jian he <[email protected]>
Reviewed-by: Srinath Reddy <[email protected]>
Reviewed-by: Álvaro Herrera <[email protected]>
Discussion: https://postgr.es/m/[email protected]
|
|
This type contains both an oid and a string.
This will be used in forthcoming changes to pg_restore.
Author: Andrew Dunstan <[email protected]>
|
|
ConnectDatabase is used by pg_dumpall, pg_restore and pg_dump, so move
the common code to a new file, connectdb.c.
Author: Mahendra Singh Thalor <[email protected]>
|
|
Thanks to commit 9c02e3a986, which modified some of the changes
from commit a0a4601765, we can remove the now-unused ArchiveHandle
parameter from _tocEntryRestorePass() and move_to_ready_heap().
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/Z-3x2AnPCP331JA3%40nathan
|
|
Currently, pg_dump gathers attribute statistics with a query per
relation, which can cause pg_dump to take significantly longer,
especially when there are many relations. This commit addresses
this by teaching pg_dump to gather attribute statistics for 64
relations at a time. Some simple tests showed this was the optimal
batch size, but performance may vary depending on the workload.
Our lookahead code determines the next batch of relations by
searching the TOC sequentially for relevant entries. This approach
assumes that we will dump all such entries in TOC order, which
unfortunately isn't true for dump formats that use
RestoreArchive(). RestoreArchive() does multiple passes through
the TOC and selectively dumps certain groups of entries each time.
This is particularly problematic for index stats and a subset of
matview stats; both are in SECTION_POST_DATA, but matview stats
that depend on matview data are dumped in RESTORE_PASS_POST_ACL,
while all other stats are dumped in RESTORE_PASS_MAIN. To handle
this, this commit moves all statistics data entries in
SECTION_POST_DATA to RESTORE_PASS_POST_ACL, which ensures that we
always dump them in TOC order. A convenient side effect of this
change is that we can revert a decent chunk of commit a0a4601765,
but that is left for a follow-up commit.
Author: Corey Huinker <[email protected]>
Co-authored-by: Nathan Bossart <[email protected]>
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/CADkLM%3Dc%2Br05srPy9w%2B-%2BnbmLEo15dKXYQ03Q_xyK%2BriJerigLQ%40mail.gmail.com
|
|
Right now, pg_dump stores all generated commands for statistics in
memory. These commands can be quite large and therefore can
significantly increase pg_dump's memory footprint. To fix, wait
until we are about to write out the commands before generating
them, and be sure to free the commands after writing. This is
implemented via a new defnDumper callback that works much like the
dataDumper one but is specifically designed for TOC entries.
Custom dumps that include data might write the TOC twice (to update
data offset information), which would ordinarily cause pg_dump to
run the attribute statistics queries twice. However, as a hack, we
save the length of the written-out entry in the first pass and skip
over it in the second. While there is no known technical issue
with executing the queries multiple times and rewriting the
results, it's expensive and feels risky, so let's avoid it.
As an exception, we _do_ execute the queries twice for the tar
format. This format does a second pass through the TOC to generate
the restore.sql file. pg_restore doesn't use this file, so even if
the second round of queries returns different results than the
first, it won't corrupt the output; the archive and restore.sql
file will just have different content. A follow-up commit will
teach pg_dump to gather attribute statistics in batches, which our
testing indicates more than makes up for the added expense of
running the queries twice.
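In outline, the new callback parallels the existing dataDumper one; a
sketch of its shape (the committed signature may differ):
    /* Produce a TOC entry's definition on demand at write time, instead
     * of keeping the generated SQL in memory from the start. */
    typedef char *(*DefnDumperPtr) (Archive *AH, const void *userArg);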
Author: Corey Huinker <[email protected]>
Co-authored-by: Nathan Bossart <[email protected]>
Reviewed-by: Jeff Davis <[email protected]>
Discussion: https://postgr.es/m/CADkLM%3Dc%2Br05srPy9w%2B-%2BnbmLEo15dKXYQ03Q_xyK%2BriJerigLQ%40mail.gmail.com
|