path: root/src/tools
2025-03-12  Add connection establishment duration logging  (Melanie Plageman)
Add log_connections option 'setup_durations' which logs durations of several key parts of connection establishment and backend setup. For an incoming connection, starting from when the postmaster gets a socket from accept() and ending when the forked child backend is first ready for query, there are multiple steps that could each take longer than expected due to external factors. This logging provides visibility into authentication and fork duration as well as the end-to-end connection establishment and backend initialization time. To make this portable, the timings captured in the postmaster (socket creation time, fork initiation time) are passed through the BackendStartupData. Author: Melanie Plageman <[email protected]> Reviewed-by: Bertrand Drouvot <[email protected]> Reviewed-by: Fujii Masao <[email protected]> Reviewed-by: Daniel Gustafsson <[email protected]> Reviewed-by: Jacob Champion <[email protected]> Reviewed-by: Jelte Fennema-Nio <[email protected]> Reviewed-by: Guillaume Lelarge <[email protected]> Discussion: https://postgr.es/m/flat/CAAKRu_b_smAHK0ZjrnL5GRxnAVWujEXQWpLXYzGbmpcZd3nLYw%40mail.gmail.com
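A minimal usage sketch, assuming the new aspect can be combined with the existing ones via the list syntax described in the next entry:

    -- enable timing of connection establishment and backend setup in the log
    ALTER SYSTEM SET log_connections = 'receipt,authentication,authorization,setup_durations';
    SELECT pg_reload_conf();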
2025-03-12  Modularize log_connections output  (Melanie Plageman)
Convert the boolean log_connections GUC into a list GUC comprised of the connection aspects to log. This gives users more control over the volume and kind of connection logging. The current log_connections options are 'receipt', 'authentication', and 'authorization'. The empty string disables all connection logging. 'all' enables all available connection logging. For backwards compatibility, the most common values for the log_connections boolean are still supported (on, off, 1, 0, true, false, yes, no). Note that previously supported substrings of on, off, true, false, yes, and no are no longer supported. Author: Melanie Plageman <[email protected]> Reviewed-by: Bertrand Drouvot <[email protected]> Reviewed-by: Fujii Masao <[email protected]> Reviewed-by: Daniel Gustafsson <[email protected]> Discussion: https://postgr.es/m/flat/CAAKRu_b_smAHK0ZjrnL5GRxnAVWujEXQWpLXYzGbmpcZd3nLYw%40mail.gmail.com
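A hedged sketch of the new list syntax, using the values named in the commit message above:

    -- log only connection receipt and authorization events
    ALTER SYSTEM SET log_connections = 'receipt,authorization';
    -- 'all' enables every aspect; the empty string disables connection logging
    ALTER SYSTEM SET log_connections = 'all';
    SELECT pg_reload_conf();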
2025-03-11  Show index search count in EXPLAIN ANALYZE, take 2.  (Peter Geoghegan)
Expose the count of index searches/index descents in EXPLAIN ANALYZE's output for index scan/index-only scan/bitmap index scan nodes. This information is particularly useful with scans that use ScalarArrayOp quals, where the number of index searches can be unpredictable due to implementation details that interact with physical index characteristics (at least with nbtree SAOP scans, since Postgres 17 commit 5bf748b8). The information shown also provides useful context when EXPLAIN ANALYZE runs a plan with an index scan node that successfully applied the skip scan optimization (set to be added to nbtree by an upcoming patch). The instrumentation works by teaching all index AMs to increment a new nsearches counter whenever a new index search begins. The counter is incremented at exactly the same point that index AMs already increment the pg_stat_*_indexes.idx_scan counter (we're counting the same event, but at the scan level rather than the relation level). Parallel queries have workers copy their local counter struct into shared memory when an index scan node ends -- even when it isn't a parallel aware scan node. An earlier version of this patch that only worked with parallel aware scans became commit 5ead85fb (though that was quickly reverted by commit d00107cd following "debug_parallel_query=regress" buildfarm failures). Our approach doesn't match the approach used when tracking other index scan related costs (e.g., "Rows Removed by Filter:"). It is comparable to the approach used in similar cases involving costs that are only readily accessible inside an access method, not from the executor proper (e.g., "Heap Blocks:" output for a Bitmap Heap Scan, which was recently enhanced to show per-worker costs by commit 5a1e6df3, using essentially the same scheme as the one used here). It is necessary for index AMs to have direct responsibility for maintaining the new counter, since the counter might need to be incremented multiple times per amgettuple call (or per amgetbitmap call). But it is also necessary for the executor proper to manage the shared memory now used to transfer each worker's counter struct to the leader. Author: Peter Geoghegan <[email protected]> Reviewed-By: Robert Haas <[email protected]> Reviewed-By: Tomas Vondra <[email protected]> Reviewed-By: Masahiro Ikeda <[email protected]> Reviewed-By: Matthias van de Meent <[email protected]> Discussion: https://postgr.es/m/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com Discussion: https://postgr.es/m/CAH2-Wz=PKR6rB7qbx+Vnd7eqeB5VTcrW=iJvAsTsKbdG+kW_UA@mail.gmail.com
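A hypothetical example (table, index, and values invented) of the case where the new counter is most interesting, a ScalarArrayOp qual whose number of index descents is hard to predict; the count is reported per node in the EXPLAIN ANALYZE output:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM orders
    WHERE order_id = ANY ('{1, 1000, 50000}'::int[]);  -- SAOP scan, may descend the btree several times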
2025-03-04  Compress TID lists when writing GIN tuples to disk  (Tomas Vondra)
When serializing GIN tuples to tuplesorts during parallel index builds, we can significantly reduce the amount of data by compressing the TID lists. The GIN opclasses may produce a lot of data (depending on how many keys are extracted from each row), and the TID compression is very efficient and effective. If the number of distinct keys is high, the first worker pass (reading data from the table and writing them into a private tuplesort) may not benefit from the compression very much. It is likely to spill data to disk before the TID lists get long enough for the compression to help. The second pass (writing the merged data into the shared tuplesort) is more likely to benefit from compression. The compression can be seen as a way to reduce the amount of disk space needed by the parallel builds, because the data is written twice. First into the per-worker tuplesorts, then into the shared tuplesort. Author: Tomas Vondra Reviewed-by: Matthias van de Meent, Andy Fan, Kirill Reshke Discussion: https://postgr.es/m/6ab4003f-a8b8-4d75-a67f-f25ad98582dc%40enterprisedb.com
2025-03-03  ci: Use a RAM disk for NetBSD and OpenBSD.  (Thomas Munro)
Put the RAM disk setup for all three *BSD CI tasks into a common script, replacing the old FreeBSD-specific one from commit 0265e5c1. This makes the NetBSD and OpenBSD tasks run 3 times and a bit over 2 times faster, respectively.

NetBSD and FreeBSD now share the same one-liner to mount tmpfs. OpenBSD needs a GCP-image specific recipe that knows where to steal an unused disk partition needed to reserve swap space for an mfs RAM disk, because its tmpfs is deprecated and currently broken. The configured size is enough for our current tests but could potentially need future expansion.

Thanks to Bilal for the disklabel incantation.

Reviewed-by: Nazir Bilal Yavuz <[email protected]>
Discussion: https://postgr.es/m/CA%2BhUKGJJ-XrPhN%2BQA4ZUfYAAXcwOSDty9t0vE9Z8__AdacKnQg%40mail.gmail.com
2025-03-03  Allow parallel CREATE INDEX for GIN indexes  (Tomas Vondra)
Allow using parallel workers to build a GIN index, similarly to BTREE and BRIN. For large tables this may result in significant speedup when the build is CPU-bound. The work is divided so that each worker builds index entries on a subset of the table, determined by the regular parallel scan used to read the data. Each worker uses a local tuplesort to sort and merge the entries for the same key. The TID lists do not overlap (for a given key), which means the merge sort simply concatenates the two lists. The merged entries are written into a shared tuplesort for the leader. The leader needs to merge the sorted entries again, before writing them into the index. But this way a significant part of the work happens in the workers, and the leader is left with merging fewer large entries, which is more efficient. Most of the parallelism infrastructure is a simplified copy of the code used by BTREE indexes, omitting the parts irrelevant for GIN indexes (e.g. uniqueness checks). Original patch by me, with reviews and substantial improvements by Matthias van de Meent, certainly enough to make him a co-author. Author: Tomas Vondra, Matthias van de Meent Reviewed-by: Matthias van de Meent, Andy Fan, Kirill Reshke Discussion: https://postgr.es/m/6ab4003f-a8b8-4d75-a67f-f25ad98582dc%40enterprisedb.com
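A sketch of requesting a parallel GIN build (table and expression are hypothetical; the worker setting is only a cap, subject to the usual planner limits):

    SET max_parallel_maintenance_workers = 4;
    SET maintenance_work_mem = '1GB';
    CREATE INDEX docs_fts_idx ON docs USING gin (to_tsvector('english', body));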
2025-02-28  Refactor COPY FROM to use format callback functions.  (Masahiko Sawada)
This commit introduces a new CopyFromRoutine struct, which is a set of callback routines to read tuples in a specific format. It also makes COPY FROM with the existing formats (text, CSV, and binary) utilize these format callbacks. This change is a preliminary step towards making the COPY FROM command extensible in terms of input formats. Similar to 2e4127b6d2d, this refactoring contributes to a performance improvement by reducing the number of "if" branches that need to be checked on a per-row basis when sending field representations in text or CSV mode. The performance benchmark results showed ~5% performance gain in text or CSV mode. Author: Sutou Kouhei <[email protected]> Reviewed-by: Masahiko Sawada <[email protected]> Reviewed-by: Michael Paquier <[email protected]> Reviewed-by: Andres Freund <[email protected]> Reviewed-by: Tomas Vondra <[email protected]> Reviewed-by: Junwang Zhao <[email protected]> Discussion: https://postgr.es/m/[email protected]
2025-02-27  Refactor COPY TO to use format callback functions.  (Masahiko Sawada)
This commit introduces a new CopyToRoutine struct, which is a set of callback routines to copy tuples in a specific format. It also makes the existing formats (text, CSV, and binary) utilize these format callbacks. This change is a preliminary step towards making the COPY TO command extensible in terms of output formats. Additionally, this refactoring contributes to a performance improvement by reducing the number of "if" branches that need to be checked on a per-row basis when sending field representations in text or CSV mode. The performance benchmark results showed ~5% performance gain in text or CSV mode. Author: Sutou Kouhei <[email protected]> Reviewed-by: Masahiko Sawada <[email protected]> Reviewed-by: Michael Paquier <[email protected]> Reviewed-by: Andres Freund <[email protected]> Reviewed-by: Tomas Vondra <[email protected]> Reviewed-by: Junwang Zhao <[email protected]> Discussion: https://postgr.es/m/[email protected]
2025-02-26  Adding new PgStat_WalCounters structure in pgstat.h  (Michael Paquier)
This new structure contains the counters and the data related to the WAL activity statistics gathered from WalUsage, separated into its own structure so that it can be shared across more than one stats structure in pgstat.h. This refactoring will be used by an upcoming patch introducing backend-level WAL statistics.

Bump PGSTAT_FILE_FORMAT_ID.

Author: Bertrand Drouvot <[email protected]>
Discussion: https://postgr.es/m/Z3zqc4o09dM/[email protected]
2025-02-25  Change _mdfd_segpath() to return paths by value  (Andres Freund)
This basically mirrors the changes done in the predecessor commit. While there isn't currently a need to get these paths in critical sections, it seems a shame to unnecessarily allocate memory in these paths now that relpath() doesn't allocate anymore. Discussion: https://postgr.es/m/xeri5mla4b5syjd5a25nok5iez2kr3bm26j2qn4u7okzof2bmf@kwdh2vf7npra
2025-02-25  Change relpath() et al to return path by value  (Andres Freund)
For AIO, and also some other recent patches, we need the ability to call relpath() in a critical section. Until now that was not feasible, as it allocated memory.

The fact that relpath() allocated memory also made it awkward to use in log messages because we had to take care to free the memory afterwards. Which we e.g. didn't do when zeroing out an invalid buffer.

We discussed other solutions, e.g. filling a pre-allocated buffer that's passed to relpath(), but they all came with plenty of downsides or were larger projects. The easiest fix seems to be to make relpath() return the path by value.

To be able to return the path by value we need to determine the maximum length of a relation path. This patch adds a long #define that computes the exact maximum, which is verified to be correct in a regression test.

As this changes the signature of relpath(), extensions using it will need to adapt their code. We discussed leaving a backward-compat shim in place, but decided it's not worth it given the use of relpath() doesn't seem widespread.

Discussion: https://postgr.es/m/xeri5mla4b5syjd5a25nok5iez2kr3bm26j2qn4u7okzof2bmf@kwdh2vf7npra
2025-02-24  Remove read/sync fields from pg_stat_wal and GUC track_wal_io_timing  (Michael Paquier)
The four following attributes are removed from pg_stat_wal:
* wal_write
* wal_sync
* wal_write_time
* wal_sync_time

a051e71e28a1 has added an equivalent of this information in pg_stat_io with more granularity, as this now spreads across the backend types, IO context and IO objects. So, keeping the same information in pg_stat_wal has little benefit.

Another benefit of this commit is the removal of PendingWalStats, simplifying an upcoming patch to add per-backend WAL statistics, which already support IO statistics and which have access to the write/sync stats data of WAL.

The GUC track_wal_io_timing, which was used to enable or disable the aggregation of the write and sync timings for WAL, is also removed. pgstat_prepare_io_time() is simplified.

Bump catalog version. Bump PGSTAT_FILE_FORMAT_ID, due to the update of PgStat_WalStats.

Author: Bertrand Drouvot <[email protected]>
Discussion: https://postgr.es/m/Z7RkQ0EfYaqqjgz/@ip-10-97-1-34.eu-west-3.compute.internal
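Assuming the pg_stat_io object name introduced by a051e71e28a1 is 'wal', the equivalent information can now be queried along these lines (a sketch, not taken from the commit):

    SELECT backend_type, writes, write_time, fsyncs, fsync_time
    FROM pg_stat_io
    WHERE object = 'wal';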
2025-02-20  Ignore blank lines in pgindent exclude files  (Andrew Dunstan)
Currently a blank line matches everything, which is almost never what someone would want. If they really want that they can use a wildcard regex to do it. Author: Zsolt Parragi <[email protected]> Discussion: https://postgr.es/m/CAN4CZFNka+2q3=-Dithr4w65RJfwPaV92T62spEzLn+T4MgcMg@mail.gmail.com
2025-02-20  Add support for OAUTHBEARER SASL mechanism  (Daniel Gustafsson)
This commit implements OAUTHBEARER, RFC 7628, and OAuth 2.0 Device Authorization Grants, RFC 8628. In order to use this there is a new pg_hba auth method called oauth. When speaking to an OAuth-enabled server, it looks a bit like this:

    $ psql 'host=example.org oauth_issuer=... oauth_client_id=...'
    Visit https://oauth.example.org/login and enter the code: FPQ2-M4BG

Device authorization is currently the only supported flow so the OAuth issuer must support that in order for users to authenticate. Third-party clients may however extend this and provide their own flows. The built-in device authorization flow is currently not supported on Windows.

In order for validation to happen server side, a new framework for plugging in OAuth validation modules is added. As validation is implementation specific, with no default specified in the standard, PostgreSQL does not ship with one built-in. Each pg_hba entry can specify a specific validator or be left blank for the validator installed as default.

This adds a requirement on libcurl for the client side support, which is optional to build, but the server side has no additional build requirements. In order to run the tests, Python is required as this adds an https server written in Python. Tests are gated behind PG_TEST_EXTRA as they open ports.

This patch has been a multi-year project with many contributors involved with reviews and in-depth discussions: Michael Paquier, Heikki Linnakangas, Zhihong Yu, Mahendrakar Srinivasarao, Andrey Chudnovsky and Stephen Frost to name a few. While Jacob Champion is the main author there have been some levels of hacking by others. Daniel Gustafsson contributed the validation module and various bits and pieces; Thomas Munro wrote the client side support for kqueue.

Author: Jacob Champion <[email protected]>
Co-authored-by: Daniel Gustafsson <[email protected]>
Co-authored-by: Thomas Munro <[email protected]>
Reviewed-by: Daniel Gustafsson <[email protected]>
Reviewed-by: Peter Eisentraut <[email protected]>
Reviewed-by: Antonin Houska <[email protected]>
Reviewed-by: Kashif Zeeshan <[email protected]>
Discussion: https://postgr.es/m/[email protected]
2025-02-20  Transfer statistics during pg_upgrade.  (Jeff Davis)
Add support to pg_dump for dumping stats, and use that during pg_upgrade so that statistics are transferred during upgrade. In most cases this removes the need for a costly re-analyze after upgrade. Some statistics are not transferred, such as extended statistics or statistics with a custom stakind. Now pg_dump accepts the options --schema-only, --no-schema, --data-only, --no-data, --statistics-only, and --no-statistics; which allow all combinations of schema, data, and/or stats. The options are named this way to preserve compatibility with the previous --schema-only and --data-only options. Statistics are in SECTION_DATA, unless the object itself is in SECTION_POST_DATA. The stats are represented as calls to pg_restore_relation_stats() and pg_restore_attribute_stats(). Author: Corey Huinker, Jeff Davis Reviewed-by: Jian He Discussion: https://postgr.es/m/CADkLM=fzX7QX6r78fShWDjNN3Vcr4PVAnvXxQ4DiGy6V=0bCUA@mail.gmail.com Discussion: https://postgr.es/m/CADkLM%3DcB0rF3p_FuWRTMSV0983ihTRpsH%2BOCpNyiqE7Wk0vUWA%40mail.gmail.com
2025-02-19  Add ATAlterConstraint struct for ALTER .. CONSTRAINT  (Álvaro Herrera)
Replace the use of Constraint with a new ATAlterConstraint struct, which allows us to pass additional information. No functionality is added by this commit. This is necessary for future work that allows altering constraints in other ways. I (Álvaro) took the liberty of restructuring the code for ALTER CONSTRAINT beyond what Amul did. The original coding before Amul's patch was unnecessarily baroque, and this change makes things simpler by removing one level of subroutine. Also, partly remove the assumption that only partitioned tables are relevant (by passing sensible 'recurse' arguments) and no longer ignore whether ONLY was specified. I say 'partly' because the current coding only walks down via the 'conparentid' relationship, which is only used for partitioned tables; but future patches could handle ONLY or not for other types of constraint changes for legacy inheritance trees too. Author: Amul Sul <[email protected]> Author: Álvaro Herrera <[email protected]> Discussion: https://postgr.es/m/CAAJ_b94bfgPV-8Mw_HwSBeheVwaK9=5s+7+KbBj_NpwXQFgDGg@mail.gmail.com
2025-02-19  Invalidate inactive replication slots.  (Amit Kapila)
This commit introduces the idle_replication_slot_timeout GUC, which allows inactive slots to be invalidated at the time of checkpoint. Because checkpoints happen at checkpoint_timeout intervals, there can be some lag between when the idle_replication_slot_timeout was exceeded and when the slot invalidation is triggered at the next checkpoint. To avoid such lags, users can force a checkpoint to promptly invalidate inactive slots.

Note that the idle timeout invalidation mechanism is not applicable for slots that do not reserve WAL or for slots on the standby server that are synced from the primary server (i.e., standby slots having the 'synced' field 'true'). Synced slots are always considered to be inactive because they don't perform logical decoding to produce changes.

The slots can become inactive for a long period if a subscriber is down due to a system error or inaccessible because of network issues. If such a situation persists, it might be more practical to recreate the subscriber rather than attempt to recover the node and wait for it to catch up, which could be time-consuming. Also, external tools could create replication slots (e.g., for migrations or upgrades) and may fail to remove them if an error occurs, leaving behind unused slots that take up space and resources. Manually cleaning them up can be tedious and error-prone, and without intervention, these lingering slots can cause unnecessary WAL retention and system bloat.

As the duration of idle_replication_slot_timeout is in minutes, any test using it would be time-consuming. We are planning to commit a follow-up patch for tests by using the injection point framework.

Author: Nisha Moond <[email protected]>
Author: Bharath Rupireddy <[email protected]>
Reviewed-by: Peter Smith <[email protected]>
Reviewed-by: Hayato Kuroda <[email protected]>
Reviewed-by: Vignesh C <[email protected]>
Reviewed-by: Amit Kapila <[email protected]>
Reviewed-by: Hou Zhijie <[email protected]>
Reviewed-by: Bertrand Drouvot <[email protected]>
Discussion: https://postgr.es/m/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com
Discussion: https://postgr.es/m/OS0PR01MB5716C131A7D80DAE8CB9E88794FC2@OS0PR01MB5716.jpnprd01.prod.outlook.com
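A configuration sketch (the timeout value is an arbitrary example, not from the commit):

    ALTER SYSTEM SET idle_replication_slot_timeout = '60min';
    SELECT pg_reload_conf();
    CHECKPOINT;  -- optionally force a checkpoint so eligible slots are invalidated promptly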
2025-02-18  Add PGErrorVerbosity to typedefs.list  (Daniel Gustafsson)
PGErrorVerbosity was missing which resulted in incorrect whitespace alignment going back all the way to e3860ffa4dd0. No backpatch for this though since we don't pgindent backbranches. Author: Jelte Fennema-Nio <[email protected]> Reviewed-by: Daniel Gustafsson <[email protected]> Discussion: https://postgr.es/m/CAGECzQTVi8n-HW4Q27je-b9ckQk7zf6bS_it42gNvQu+DX0NCQ@mail.gmail.com
2025-02-17  Implement Self-Join Elimination  (Alexander Korotkov)
The Self-Join Elimination (SJE) feature removes an inner join of a plain table to itself in the query tree if it is proven that the join can be replaced with a scan without impacting the query result. Self-join and inner relation get replaced with the outer in query, equivalence classes, and planner info structures. Also, the inner restrictlist moves to the outer one with the removal of duplicated clauses. Thus, this optimization reduces the length of the range table list (this especially makes sense for partitioned relations), reduces the number of restriction clauses and, in turn, selectivity estimations, and potentially improves total planner prediction for the query.

This feature is dedicated to avoiding redundancy, which can appear after pull-up transformations or the creation of an EquivalenceClass-derived clause like the ones below.

    SELECT * FROM t1 WHERE x IN (SELECT t3.x FROM t1 t3);
    SELECT * FROM t1 WHERE EXISTS (SELECT t3.x FROM t1 t3 WHERE t3.x = t1.x);
    SELECT * FROM t1, t2, t1 t3 WHERE t1.x = t2.x AND t2.x = t3.x;

In the future, we could also reduce redundancy caused by subquery pull-up after unnecessary outer join removal in cases like the one below.

    SELECT * FROM t1 WHERE x IN (SELECT t3.x FROM t1 t3 LEFT JOIN t2 ON t2.x = t1.x);

Also, it can drastically help to join partitioned tables, removing entries even before their expansion.

The SJE proof is based on the innerrel_is_unique() machinery. We can remove a self-join when, for each outer row:
1. At most one inner row matches the join clause;
2. Each matched inner row must be (physically) the same as the outer one;
3. Inner and outer rows have the same row mark.

In this patch, we use the following approach to identify a self-join:
1. Collect all merge-joinable join quals which look like a.x = b.x;
2. Add to the list above the baserestrictinfo of the inner table;
3. Check innerrel_is_unique() for the qual list. If it returns false, skip this pair of joining tables;
4. Check uniqueness, proved by the baserestrictinfo clauses.

To prove the possibility of self-join elimination, the inner and outer clauses must match exactly. The relation replacement procedure is not trivial and is partly combined with the one used to remove useless left joins.

Tests covering this feature were added to join.sql. Some of the existing regression tests changed due to the self-join removal logic.

Discussion: https://postgr.es/m/flat/64486b0b-0404-e39e-322d-0801154901f3%40postgrespro.ru
Author: Andrey Lepikhov <[email protected]>
Author: Alexander Kuzmenkov <[email protected]>
Co-authored-by: Alexander Korotkov <[email protected]>
Co-authored-by: Alena Rybakina <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Reviewed-by: Andres Freund <[email protected]>
Reviewed-by: Simon Riggs <[email protected]>
Reviewed-by: Jonathan S. Katz <[email protected]>
Reviewed-by: David Rowley <[email protected]>
Reviewed-by: Thomas Munro <[email protected]>
Reviewed-by: Konstantin Knizhnik <[email protected]>
Reviewed-by: Heikki Linnakangas <[email protected]>
Reviewed-by: Hywel Carver <[email protected]>
Reviewed-by: Laurenz Albe <[email protected]>
Reviewed-by: Ronan Dunklau <[email protected]>
Reviewed-by: vignesh C <[email protected]>
Reviewed-by: Zhihong Yu <[email protected]>
Reviewed-by: Greg Stark <[email protected]>
Reviewed-by: Jaime Casanova <[email protected]>
Reviewed-by: Michał Kłeczek <[email protected]>
Reviewed-by: Alena Rybakina <[email protected]>
Reviewed-by: Alexander Korotkov <[email protected]>
2025-02-12  ci: Collect core files on NetBSD and OpenBSD  (Andres Freund)
Support for the NetBSD and OpenBSD operating systems was added to CI in the prior commit. Now add support for collecting core files and generating backtraces for all core files.

Author: Nazir Bilal Yavuz <[email protected]>
Discussion: https://postgr.es/m/CAN55FZ32ySyYa06k9MFd+VY5vHhUyBpvgmJUZae5PihjzaurVg@mail.gmail.com
2025-02-12  ci: Test NetBSD and OpenBSD  (Andres Freund)
NetBSD and OpenBSD Postgres CI images are now generated [1], but aren't yet utilized for Postgres' CI. This commit adds CI support for them. For now the tasks will be manually triggered, to save on CI credits. [1] https://github.com/anarazel/pg-vm-images Author: Nazir Bilal Yavuz <[email protected]> Discussion: https://postgr.es/m/CAN55FZ32ySyYa06k9MFd+VY5vHhUyBpvgmJUZae5PihjzaurVg@mail.gmail.com
2025-02-11  Allow extension functions to participate in in-place updates.  (Tom Lane)
Commit 1dc5ebc90 allowed PL/pgSQL to perform in-place updates of expanded-object variables that are being updated with assignments like "x := f(x, ...)". However this was allowed only for a hard-wired list of functions f(), since we need to be sure that f() will not modify the variable if it fails. It was always envisioned that we should make that extensible, but at the time we didn't have a good way to do so. Since then we've invented the idea of "support functions" to allow attaching specialized optimization knowledge to functions, and that is a perfect mechanism for doing this. Hence, adjust PL/pgSQL to use a support function request instead of hard-wired logic to decide if in-place update is safe. Preserve the previous optimizations by creating support functions for the three functions that were previously hard-wired. Author: Tom Lane <[email protected]> Reviewed-by: Andrey Borodin <[email protected]> Reviewed-by: Pavel Borisov <[email protected]> Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
2025-02-11  Implement new optimization rule for updates of expanded variables.  (Tom Lane)
If a read/write expanded variable is declared locally to the assignment statement that is updating it, and it is referenced exactly once in the assignment RHS, then we can optimize the operation as a direct update of the expanded value, whether or not the function(s) operating on it can be trusted not to modify the value before throwing an error. This works because if an error does get thrown, we no longer care what value the variable has. In cases where that doesn't work, fall back to the previous rule that checks for safety of the top-level function. In any case, postpone determination of whether these optimizations are feasible until we are executing a Param referencing the target variable and that variable holds a R/W expanded object. While the previous incarnation of exec_check_rw_parameter was pretty cheap, this is a bit less so, and our plan to invoke support functions will make it even less so. So avoiding the check for variables where it couldn't be useful should be a win. Author: Tom Lane <[email protected]> Reviewed-by: Andrey Borodin <[email protected]> Reviewed-by: Pavel Borisov <[email protected]> Discussion: https://postgr.es/m/CACxu=vJaKFNsYxooSnW1wEgsAO5u_v1XYBacfVJ14wgJV_PYeg@mail.gmail.com
2025-02-10  Add test of various escape functions  (Andres Freund)
As highlighted by the prior commit, writing correct escape functions is less trivial than one might hope.

This test module tries to verify that different escaping functions behave reasonably. It e.g. tests:
- Invalidly encoded input to an escape function leads to invalidly encoded output
- Trailing incomplete multi-byte characters are handled sensibly
- Escaped strings are parsed as a single statement by psql's parser (which derives from the backend parser)

There are further tests that would be good to add. But even in the current state it was rather useful for writing the fix in the prior commit.

Reviewed-by: Noah Misch <[email protected]>
Backpatch-through: 13
Security: CVE-2025-1094
2025-02-02  Convert strategies to and from compare types  (Peter Eisentraut)
For each Index AM, provide a mapping between operator strategies and the system-wide generic concept of a comparison type. For example, for btree, BTLessStrategyNumber maps to and from COMPARE_LT. Numerous places in the planner and executor think directly in terms of btree strategy numbers (and a few in terms of hash strategy numbers.) These should be converted over subsequent commits to think in terms of CompareType instead. (This commit doesn't make any use of this API yet.) Author: Mark Dilger <[email protected]> Reviewed-by: Peter Eisentraut <[email protected]> Discussion: https://www.postgresql.org/message-id/flat/[email protected]
2025-02-01  Add script to keep .editorconfig in sync with .gitattributes  (Peter Eisentraut)
Our repo already contained an .editorconfig file, but it was not kept up to date with .gitattributes. This adds a script that keeps these files in sync.

A big advantage of the editorconfig file is that many editors/IDEs get automatically configured to trim trailing whitespace and add a final newline on save, while .gitattributes only complains about these problems instead of automatically fixing them.

This also adds rules to .gitattributes for Python files as well as for C files in the pg_bsd_indent directory (which have a different tab_width than most C files due to being vendored in).

Author: Jelte Fennema-Nio <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/CAGECzQQGzbroAXi+Yicp3HvcCo4=g84kaOgjuvQ5MW9F0ubOGg@mail.gmail.com
2025-01-23  Change publication's publish_generated_columns option type to enum.  (Amit Kapila)
The current boolean publish_generated_columns option only supports a binary choice, which is insufficient for future enhancements where generated columns can be of different types (e.g., stored or virtual). The supported values for the publish_generated_columns option are 'none' and 'stored'. Author: Vignesh C <[email protected]> Reviewed-by: Peter Smith <[email protected]> Reviewed-by: Peter Eisentraut <[email protected]> Reviewed-by: Amit Kapila <[email protected]> Discussion: https://postgr.es/m/[email protected] Discussion: https://postgr.es/m/[email protected]
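A sketch using the documented values (publication and table names are hypothetical):

    CREATE PUBLICATION pub_gencols FOR TABLE items
        WITH (publish_generated_columns = 'stored');
    ALTER PUBLICATION pub_gencols SET (publish_generated_columns = 'none');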
2025-01-22  Run perltidy  (Michael Paquier)
A follow-up patch will adjust the TAP tests to follow a more structured format for option lists in commands, which perltidy is able to cope with better. Putting the tree in a clean state first makes the next change a bit easier. v20230309 has been used.

Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/[email protected]
2025-01-20  Add PageData C type  (Peter Eisentraut)
This adds the C type PageData and makes the existing type Page a pointer to it. This follows the usual PostgreSQL C type naming scheme of Foo/FooData pairs. (Prior to commit ddbba3aac86, PageData existed as an unrelated type.) The type definitions are compatible, so this doesn't change anything except some of the naming. Discussion: https://www.postgresql.org/message-id/flat/[email protected]
2025-01-17  Support Unicode full case mapping and conversion.  (Jeff Davis)
Generate tables from Unicode SpecialCasing.txt to support more sophisticated case mapping behavior:

* support case mappings to multiple codepoints, such as "ß" uppercasing to "SS"
* support conditional case mappings, such as the "final sigma"
* support titlecase variants, such as "dž" uppercasing to "DŽ" but titlecasing to "Dž"

Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
Reviewed-by: Peter Eisentraut, Daniel Verite
2025-01-16  Add and use BitmapHeapScanDescData struct  (Melanie Plageman)
Move the several members of HeapScanDescData which are specific to Bitmap Heap Scans into a new struct, BitmapHeapScanDescData, which inherits from HeapScanDescData. This reduces the size of the HeapScanDescData for other types of scans and will allow us to add additional bitmap heap scan-specific members in the future without fear of bloating the HeapScanDescData. Reviewed-by: Tomas Vondra Discussion: https://postgr.es/m/c736f6aa-8b35-4e20-9621-62c7c82e2168%40vondra.me
2025-01-16  Seek zone abbreviations in the IANA data before timezone_abbreviations.  (Tom Lane)
If a time zone abbreviation used in datetime input is defined in the currently active timezone, use that definition in preference to looking in the timezone_abbreviations list. That allows us to correctly handle abbreviations that have different meanings in different timezones. Also, it eliminates an inconsistency between datetime input and datetime output: the non-ISO datestyles for timestamptz have always printed abbreviations taken from the IANA data, not from timezone_abbreviations. Before this fix, it was possible to demonstrate cases where casting a timestamp to text and back fails or changes the value significantly because of that inconsistency. While this change removes the ability to override the IANA data about an abbreviation known in the current zone, it's not clear that there's any real use-case for doing so. But it is clear that this makes life a lot easier for dealing with abbreviations that have conflicts across different time zones. Also update the pg_timezone_abbrevs view to report abbreviations that are recognized via the IANA data, and *not* report any timezone_abbreviations entries that are thereby overridden. Under the hood, there are now two SRFs, one that pulls the IANA data and one that pulls timezone_abbreviations entries. They're combined by logic in the view. This approach was useful for debugging (since the functions can be called on their own). While I don't intend to document the functions explicitly, they might be useful to call directly. Also improve DecodeTimezoneAbbrev's caching logic so that it can cache zone abbreviations found in the IANA data. Without that, this patch would have caused a noticeable degradation of the runtime of timestamptz_in. Per report from Aleksander Alekseev and additional investigation. Discussion: https://postgr.es/m/CAJ7c6TOATjJqvhnYsui0=CO5XFMF4dvTGH+skzB--jNhqSQu5g@mail.gmail.com
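A hedged illustration of the new precedence (output not shown, since it depends on the installed IANA data):

    SET timezone = 'America/New_York';
    SELECT '2025-06-01 12:00 EDT'::timestamptz;           -- EDT resolved from the active IANA zone
    SELECT * FROM pg_timezone_abbrevs WHERE abbrev = 'EDT';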
2025-01-16  Add OLD/NEW support to RETURNING in DML queries.  (Dean Rasheed)
This allows the RETURNING list of INSERT/UPDATE/DELETE/MERGE queries to explicitly return old and new values by using the special aliases "old" and "new", which are automatically added to the query (if not already defined) while parsing its RETURNING list, allowing things like:

    RETURNING old.colname, new.colname, ...
    RETURNING old.*, new.*

Additionally, a new syntax is supported, allowing the names "old" and "new" to be changed to user-supplied alias names, e.g.:

    RETURNING WITH (OLD AS o, NEW AS n) o.colname, n.colname, ...

This is useful when the names "old" and "new" are already defined, such as inside trigger functions, allowing backwards compatibility to be maintained -- the interpretation of any existing queries that happen to already refer to relations called "old" or "new", or use those as aliases for other relations, is not changed.

For an INSERT, old values will generally be NULL, and for a DELETE, new values will generally be NULL, but that may change for an INSERT with an ON CONFLICT ... DO UPDATE clause, or if a query rewrite rule changes the command type. Therefore, we put no restrictions on the use of old and new in any DML queries.

Dean Rasheed, reviewed by Jian He and Jeff Davis.

Discussion: https://postgr.es/m/CAEZATCWx0J0-v=Qjc6gXzR=KtsdvAE7Ow=D=mu50AgOe+pvisQ@mail.gmail.com
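A sketch of the feature (table and column names are hypothetical):

    UPDATE accounts SET balance = balance - 100
    WHERE id = 1
    RETURNING old.balance AS balance_before, new.balance AS balance_after;

    -- rename the aliases when "old"/"new" are already taken, e.g. inside a trigger function
    DELETE FROM accounts WHERE id = 2
    RETURNING WITH (OLD AS o, NEW AS n) o.*;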
2025-01-15  Add a bit of documentation related to IWYU  (Peter Eisentraut)
Add some basic information about IWYU to src/tools/pginclude/README. Discussion: https://www.postgresql.org/message-id/flat/[email protected]
2025-01-15  Drop warning-free support for Flex 2.5.35  (Peter Eisentraut)
This removes all the various workarounds for avoiding compiler warnings with Flex 2.5.35. Several recent patches have added additional warnings that would either need to be fixed along the lines of the existing workarounds, or we decide to no longer care about this, which we do here. Flex 2.5.35 is extremely outdated, and you can't even download it anymore from any of the Flex project sites, so it's nearly impossible to support. After this, using Flex 2.5.35 will still work, but the generated code will produce numerous compiler warnings. Reviewed-by: Tom Lane <[email protected]> Discussion: https://www.postgresql.org/message-id/[email protected]
2025-01-15  Rename RowCompareType to CompareType  (Peter Eisentraut)
RowCompareType served as a way to describe the fundamental meaning of an operator, notionally independent of an operator class (although so far this was only really supported for btrees). Its original purpose was for use inside RowCompareExpr, and it has also found some small use outside, such as for get_op_btree_interpretation(). We want to expand this now, as a more general way to describe operator semantics for other index access methods, including gist (to improve GistTranslateStratnum()) and others not written yet. To avoid future confusion, we rename the type to CompareType and the symbols from ROWCOMPARE_XXX to COMPARE_XXX to reflect their more general purpose. Reviewed-by: Mark Dilger <[email protected]> Discussion: https://www.postgresql.org/message-id/flat/[email protected]
2025-01-14  Fix catcache invalidation of a list entry that's being built  (Heikki Linnakangas)
If a new catalog tuple is inserted that belongs to a catcache list entry, and cache invalidation happens while the list entry is being built, the list entry might miss the newly inserted tuple. To fix, change the way we detect concurrent invalidations while a catcache entry is being built. Keep a stack of entries that are being built, and apply cache invalidation to those entries in addition to the real catcache entries. This is similar to the in-progress list in relcache.c. Back-patch to all supported versions. Reviewed-by: Noah Misch Discussion: https://www.postgresql.org/message-id/[email protected]
2025-01-10  Refactor some code related to backend statistics  (Michael Paquier)
This commit changes the way pending backend statistics are tracked by moving them into a new structure called PgStat_BackendPending, removing PgStat_BackendPendingIO. PgStat_BackendPending currently only includes PgStat_PendingIO for the pending I/O stats.

pgstat_flush_backend() is extended with a "flags" argument to control which parts of the stats of a backend should be flushed.

With this refactoring, it becomes easier to plug more data into backend statistics. A patch to add information related to WAL in this stats kind is under discussion.

Author: Bertrand Drouvot
Discussion: https://postgr.es/m/Z3zqc4o09dM/[email protected]
2025-01-08  Set exit status for pgindent if pg_bsd_indent fails  (Andrew Dunstan)
Also document the exit codes in the script. The new exit code is 3, and is not overridden by the exit code set in --check mode. Author: Ashutosh Bapat Discussion: https://postgr.es/m/CAExHW5sPRSiFeLdP-u1Fa5ba7YS2f0gvLjmKOobopKadJwQ_GQ@mail.gmail.com
2025-01-07  Add missing typedefs.list entry for AggStatePerGroupData.  (Jeff Davis)
Discussion: https://postgr.es/m/7530bd8783b1a78d53a3c70383e38d8da0a5ffe5.camel%40j-davis.com
2025-01-01  Update copyright for 2025  (Bruce Momjian)
Backpatch-through: 13
2024-12-24  Remove pgrminclude and associated scripts  (Peter Eisentraut)
Per git log, the last time someone tried to do something with pgrminclude was around 2011. And it's always had a tendency of causing trouble when it was active. Also, pgcompinclude is redundant with headerscheck.

Discussion: https://www.postgresql.org/message-id/flat/2d4dc7b2-cb2e-49b1-b8ca-ba5f7024f05b%40eisentraut.org
2024-12-20  Introduce CompactAttribute array in TupleDesc, take 2  (David Rowley)
The new compact_attrs array stores a few select fields from FormData_pg_attribute in a more compact way, using only 16 bytes per column instead of the 104 bytes that FormData_pg_attribute uses. Using CompactAttribute allows performance-critical operations such as tuple deformation to be performed without looking at the FormData_pg_attribute element in TupleDesc which means fewer cacheline accesses. For some workloads, tuple deformation can be the most CPU intensive part of processing the query. Some testing with 16 columns on a table where the first column is variable length showed around a 10% increase in transactions per second for an OLAP type query performing aggregation on the 16th column. However, in certain cases, the increases were much higher, up to ~25% on one AMD Zen4 machine. This also makes pg_attribute.attcacheoff redundant. A follow-on commit will remove it, thus shrinking the FormData_pg_attribute struct by 4 bytes. Author: David Rowley Reviewed-by: Andres Freund, Victor Yegorov Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com
2024-12-19  Convert SetOp to read its inputs as outerPlan and innerPlan.  (Tom Lane)
The original design for set operations involved appending the two input relations into one and adding a flag column that allows distinguishing which side each row came from. Then the SetOp node pries them apart again based on the flag. This is bizarre. The only apparent reason to do it is that when sorting, we'd only need one Sort node not two. But since sorting is at least O(N log N), sorting all the data is actually worse than sorting each side separately --- plus, we have no chance of taking advantage of presorted input. On top of that, adding the flag column frequently requires an additional projection step that adds cycles, and then the Append node isn't free either. Let's get rid of all of that and make the SetOp node have two separate children, using the existing outerPlan/innerPlan infrastructure. This initial patch re-implements nodeSetop.c and does a bare minimum of work on the planner side to generate correctly-shaped plans. In particular, I've tried not to change the cost estimates here, so that the visible changes in the regression test results will only involve removal of useless projection steps and not any changes in whether to use sorted vs hashed mode. For SORTED mode, we combine successive identical tuples from each input into groups, and then merge-join the groups. The tuple comparisons now use SortSupport instead of simple equality, but the group-formation part should involve roughly the same number of tuple comparisons as before. The cross-comparisons between left and right groups probably add to that, but I'm not sure to quantify how many more comparisons we might need. For HASHED mode, nodeSetop's logic is almost the same as before, just refactored into two separate loops instead of one loop that has an assumption that it will see all the left-hand inputs first. In both modes, I added early-exit logic to not bother reading the right-hand relation if the left-hand input is empty, since neither INTERSECT nor EXCEPT modes can produce any output if the left input is empty. This could have been done before in the hashed mode, but not in sorted mode. Sorted mode can also stop as soon as it exhausts the left input; any remaining right-hand tuples cannot have matches. Also, this patch adds some infrastructure for detecting whether child plan nodes all output the same type of tuple table slot. If they do, the hash table logic can use slightly more efficient code based on assuming that that's the input slot type it will see. We'll make use of that infrastructure in other plan node types later. Patch by me; thanks to Richard Guo and David Rowley for review. Discussion: https://postgr.es/m/[email protected]
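A hypothetical query whose plan is affected; under the new scheme the SetOp node reads the two sides as separate children instead of a flagged Append (table and column names invented, plan output not claimed here):

    EXPLAIN (COSTS OFF)
    SELECT a FROM t1 INTERSECT SELECT b FROM t2;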
2024-12-19  Add backend-level statistics to pgstats  (Michael Paquier)
This adds a new variable-numbered statistics kind in pgstats, where the object ID key of the stats entries is based on the proc number of the backends. This acts as an upper bound for the number of stats entries that can exist at once. The entries are created when a backend starts after authentication succeeds, and are removed when the backend exits, making the stats entry exist for as long as their backend is up and running. These are not written to the pgstats file at shutdown (note that write_to_file is disabled, as a safety measure).

Currently, these stats include only information about the I/O generated by a backend, using the same layer as pg_stat_io, except that it is now possible to know how much activity is happening in each backend rather than an overall aggregate of all the activity. A function called pg_stat_get_backend_io() is added to access this data for a given backend PID. The existing structure could be expanded in the future to add more information about other statistics related to backends, depending on requirements or ideas.

Auxiliary processes are not included in this set of statistics. These are less interesting to have than normal backends as they have dedicated entries in pg_stat_io, and stats kinds of their own.

This commit also includes pg_stat_reset_backend_stats(), a function able to reset all the stats associated with a single backend.

Bump catalog version and PGSTAT_FILE_FORMAT_ID.

Author: Bertrand Drouvot
Reviewed-by: Álvaro Herrera, Kyotaro Horiguchi, Michael Paquier, Nazir Bilal Yavuz
Discussion: https://postgr.es/m/ZtXR+CtkEVVE/[email protected]
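A sketch of reading the new per-backend I/O statistics, using the function named above:

    SELECT * FROM pg_stat_get_backend_io(pg_backend_pid());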
2024-12-18  Add common interface for TBMIterators  (Melanie Plageman)
Add and use TBMPrivateIterator, which replaces the current TBMIterator for serial use cases, and repurpose TBMIterator to be a unified interface for both the serial ("private") and parallel ("shared") TID Bitmap iterator interfaces. This encapsulation simplifies call sites for callers supporting both parallel and serial TID Bitmap access. TBMIterator is not yet used in this commit. Author: Melanie Plageman Reviewed-by: Tomas Vondra, Heikki Linnakangas Discussion: https://postgr.es/m/063e4eb4-32d9-439e-a0b1-75565a9835a8%40iki.fi
2024-12-04  Provide a better error message for misplaced dispatch options.  (Nathan Bossart)
Before this patch, misplacing a special must-be-first option for dispatching to a subprogram (e.g., postgres -D . --single) would fail with an error like

    FATAL: --single requires a value

This patch adjusts this error to more accurately complain that the special option wasn't listed first. The aforementioned error message now looks like

    FATAL: --single must be first argument

The dispatch option parsing code has been refactored for use wherever ParseLongOption() is called. Beyond the obvious advantage of avoiding code duplication, this should prevent similar problems when new dispatch options are added. Note that we assume that none of the dispatch option names match another valid command-line argument, such as the name of a configuration parameter.

Ideally, we'd remove this must-be-first requirement for these options, but after some investigation, we decided that wasn't worth the added complexity and behavior changes.

Author: Nathan Bossart, Greg Sabino Mullane
Reviewed-by: Greg Sabino Mullane, Peter Eisentraut, Álvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/CAKAnmmJkZtZAiSryho%3DgYpbvC7H-HNjEDAh16F3SoC9LPu8rqQ%40mail.gmail.com
2024-12-03  Revert "Introduce CompactAttribute array in TupleDesc"  (David Rowley)
This reverts commit d28dff3f6cd6a7562fb2c211ac0fb74a33ffd032. Quite a large number of buildfarm members didn't like this commit and it's not yet clear why. Reverting this before too many animals turn red. Discussion: https://postgr.es/m/CAApHDvr9i6T5=iAwQCxFDgMsthr_obVxgwBaEJkC8KUH6yM3Hw@mail.gmail.com
2024-12-03  Introduce CompactAttribute array in TupleDesc  (David Rowley)
The new compact_attrs array stores a few select fields from FormData_pg_attribute in a more compact way, using only 16 bytes per column instead of the 104 bytes that FormData_pg_attribute uses. Using CompactAttribute allows performance-critical operations such as tuple deformation to be performed without looking at the FormData_pg_attribute element in TupleDesc which means fewer cacheline accesses. With this change, NAMEDATALEN could be increased with a much smaller negative impact on performance. For some workloads, tuple deformation can be the most CPU intensive part of processing the query. Some testing with 16 columns on a table where the first column is variable length showed around a 10% increase in transactions per second for an OLAP type query performing aggregation on the 16th column. However, in certain cases, the increases were much higher, up to ~25% on one AMD Zen4 machine. This also makes pg_attribute.attcacheoff redundant. A follow-on commit will remove it, thus shrinking the FormData_pg_attribute struct by 4 bytes. Author: David Rowley Discussion: https://postgr.es/m/CAApHDvrBztXP3yx=NKNmo3xwFAFhEdyPnvrDg3=M0RhDs+4vYw@mail.gmail.com Reviewed-by: Andres Freund, Victor Yegorov
2024-11-30  Avoid mislabeling of lateral references, redux.  (Tom Lane)
As I'd feared, commit 5c9d8636d was still a few bricks shy of a load. We can't just leave pulled-up lateral-reference Vars with no new nullingrels: we have to carefully compute what subset of the to-be-replaced Var's nullingrels apply to them, else we still get "wrong varnullingrels" errors. This is a bit tedious, but it looks like we can use the nullingrel data this patch computes for other purposes, enabling better optimization. We don't want to inject unnecessary plan changes into stable branches though, so leave that idea for a later HEAD-only patch. Patch by me, but thanks to Richard Guo for devising a test case that broke 5c9d8636d, and for preliminary investigation about how to fix it. As before, back-patch to v16. Discussion: https://postgr.es/m/[email protected]