path: root/src/test
Age  Commit message  Author
2025-04-19  Rename injection points used in AIO tests  (Michael Paquier)
The format of the injection point names used by the AIO code does not match the existing naming convention used everywhere else in the code, so let's be consistent. These points are used in test_aio. Reviewed-by: Hayato Kuroda <[email protected]> Discussion: https://postgr.es/m/[email protected]
2025-04-17  Assert lack of hazardous buffer locks before possible catalog read.  (Noah Misch)
Commit 0bada39c83a150079567a6e97b1a25a198f30ea3 fixed a bug of this kind, which existed in all branches for six days before detection. While the probability of reaching the trouble was low, the disruption was extreme. No new backends could start, and service restoration needed an immediate shutdown. Hence, add this to catch the next bug like it. The new check in RelationIdGetRelation() suffices to make autovacuum detect the bug in commit 243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704 that led to commit 0bada39. This also adds checks in a number of similar places. It replaces each Assert(IsTransactionState()) that pertained to a conditional catalog read. No back-patch for now, but a back-patch of commit 243e9b4 should back-patch this, too. A back-patch could omit the src/test/regress changes, since back branches won't gain new index columns. Reported-by: Alexander Lakhin <[email protected]> Discussion: https://postgr.es/m/[email protected] Discussion: https://postgr.es/m/[email protected]
2025-04-16  Fixup various new-to-v18 usages of appendPQExpBuffer  (David Rowley)
Use appendPQExpBufferStr when there are no parameters and appendPQExpBufferChar when the string length is 1. Author: David Rowley <[email protected]> Discussion: https://postgr.es/m/CAApHDvoARMvPeXTTC0HnpARBHn-WgVstc8XFCyMGOzvgu_1HvQ@mail.gmail.com
2025-04-16  Elide not-null constraint checks on child tables during PK creation  (Álvaro Herrera)
We were unnecessarily acquiring AccessExclusiveLock on all child tables when "ALTER TABLE ONLY sometab ADD PRIMARY KEY" was run on their parent table, an oversight in commit 14e87ffa5c54. This caused deadlocks during pg_restore of partitioned tables. The reason to acquire the AEL was that we need to verify that child tables have the involved columns already marked as not-null; but if the parent table has an inheritable not-null constraint, then all children must necessarily be in the correct state already, so we can skip the check, which avoids acquiring the lock. Reorder the code so that it works that way. This doesn't change things in the case where the constraint doesn't exist, but that case is of lesser importance because it doesn't occur during parallel pg_restore. While at it, reword some errmsg() and add errhint() to similar cases in related but not adjacent code. Diagnosed-by: Tom Lane <[email protected]> Reviewed-by: Tender Wang <[email protected]> Discussion: https://postgr.es/m/[email protected] Discussion: https://postgr.es/m/[email protected] Discussion: https://postgr.es/m/[email protected]
2025-04-15  Fix failure for generated column with a not-null domain constraint.  (Tom Lane)
If a GENERATED column is declared to have a domain data type where the domain's constraints disallow null values, INSERT commands failed because we built a targetlist that included coercing a null constant to the domain's type. The failure occurred even when the generated value would have been perfectly OK. This is adjacent to the issues fixed in 0da39aa76, but we didn't notice for lack of testing a domain with such a constraint. We aren't going to use the result of the targetlist entry for the generated column --- ExecComputeStoredGenerated will overwrite it. So it's not really necessary that it have the exact datatype of the generated column. This patch fixes the problem by changing the targetlist entry to be a null Const of the domain's base type, which should be sufficiently legal. (We do have to tweak ExecCheckPlanOutput to accept the situation, though.) This has been broken since we implemented generated columns. However, this patch only applies easily as far back as v14, partly because I (tgl) only carried 0da39aa76 back that far, but mostly because v14 significantly refactored the handling of INSERT/UPDATE targetlists. Given the lack of field complaints and the short remaining support lifetime of v13, I judge the cost-benefit ratio not good for devising a version that would work in v13. Reported-by: jian he <[email protected]> Author: jian he <[email protected]> Reviewed-by: Tom Lane <[email protected]> Discussion: https://postgr.es/m/CACJufxG59tip2+9h=rEv-ykOFjt0cbsPVchhi0RTij8bABBA0Q@mail.gmail.com Backpatch-through: 14
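A minimal reproducer sketch for the case described above, with hypothetical names; the domain's CHECK constraint rejects NULLs, and the INSERT failed even though the generated value satisfies it:

    CREATE DOMAIN posint AS int CHECK (VALUE IS NOT NULL AND VALUE > 0);
    CREATE TABLE gtest (
        a int,
        b posint GENERATED ALWAYS AS (a + 1) STORED
    );
    INSERT INTO gtest (a) VALUES (1);   -- failed before this fix; b would be 2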
2025-04-13  Make AIO error test more portable  (Andrew Dunstan)
Alpine Linux's C library (musl) spells one error message differently. Reported-by: Wolfgang Walther
2025-04-12  Harmonize function parameter names for Postgres 18.  (Peter Geoghegan)
Make sure that function declarations use names that exactly match the corresponding names from function definitions in a few places. These inconsistencies were all introduced during Postgres 18 development. This commit was written with help from clang-tidy, by mechanically applying the same rules as similar clean-up commits (the earliest such commit was commit 035ce1fe).
2025-04-12  Fix instability with WAL fsync test in stats.sql  (Michael Paquier)
A backend using wal_sync_method set to "open_sync" or "open_datasync" would fail the test checking the WAL sync data in pg_stat_io. These modes guarantee that a sync is done when WAL is written to disk, and the data checked by the test is not incremented in this case, issue_xlog_fsync() doing nothing. Oversight in commit a051e71e28a1. Author: Sami Imseih <[email protected]> Discussion: https://postgr.es/m/CAA5RZ0uxwg3xAi4nvdBMJ-zJQEeyg+RotuU+ebM2F6CKmnvaYA@mail.gmail.com
2025-04-09  Fix performance issue in deadlock-parallel isolation test.  (Tom Lane)
With debug_discard_caches = 1, the runtime of this test script increased by about a factor of 10 after commit 0dca5d68d. That's causing some of our buildfarm animals to fail with a timeout. The reason for the increased time is that now we are re-planning some intentionally-non-inlineable SQL functions on every execution, where the previous coding held onto the original plans throughout the outer query. The previous behavior was arguably quite buggy, so I don't think 0dca5d68d deserves blame here. But we would like this test script to not take so long. To fix, instead of forcing a "parallel safe" label via a non-inlineable SQL function, apply it directly to the advisory-lock functions by making internal-language aliases for them. A small problem is that the advisory-lock functions return void but this test would really like them to return integer 1. I cheated here by declaring the aliases as returning "int". That's perhaps undue familiarity with the implementation of PG_RETURN_VOID(), but that hasn't changed in twenty years and is unlikely to do so in the next twenty. That gets us an integer 0 result, and then an inline-able wrapper to convert that to an integer 1 allows the rest of the script to remain unchanged. For me, this reduces the runtime with debug_discard_caches = 1 by about 100x, making the test comfortably faster than before instead of slower. Discussion: https://postgr.es/m/[email protected]
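A sketch of the aliasing trick described above, with illustrative names; the internal function name is recalled from pg_proc and should be treated as an assumption:

    CREATE FUNCTION lock_share(bigint) RETURNS int
        LANGUAGE internal STRICT PARALLEL SAFE
        AS 'pg_advisory_xact_lock_shared_int8';
    -- inline-able wrapper turning PG_RETURN_VOID()'s 0 into the 1 the script expects
    CREATE FUNCTION lock_share1(bigint) RETURNS int
        LANGUAGE sql STRICT PARALLEL SAFE
        AS 'SELECT lock_share($1) + 1';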
2025-04-09  Fix test races between syscache-update-pruned.spec and autovacuum.  (Noah Misch)
This spec fails ~3% of my Valgrind runs, and the spec has failed on Valgrind buildfarm member skink at a similar rate. Two problems contributed to that:

- A competing buffer pin triggered VACUUM's lazy_scan_noprune() path, causing "tuples missed: 1 dead from 1 pages not removed due to cleanup lock contention". FREEZE fixes that.
- The spec ran lazy VACUUM immediately after VACUUM FULL. The spec implicitly assumed lazy VACUUM prunes the one tuple that VACUUM FULL made dead. First wait for old snapshots, making that assumption reliable.

This also adds two forms of defense in depth:

- Wait for snapshots using shared catalog pruning rules (VISHORIZON_SHARED). This avoids the removable cutoff moving backward when an XID-bearing autoanalyze process runs in another database. That may never happen in this test, but it's cheap insurance.
- Use lazy VACUUM option DISABLE_PAGE_SKIPPING. Commit c2dc1a79767a0f947e1145f82eb65dfe4360d25f did this for a related requirement in other tests, but I suspect FREEZE is necessary and sufficient in all these tests.

Back-patch to v17, where the test first appeared. Reported-by: Andres Freund <[email protected]> Discussion: https://postgr.es/m/sv3taq4e6ea4qckimien3nxp3sz4b6cw6sfcy4nhwl52zpur4g@h6i6tohxmizu Backpatch-through: 17
2025-04-09  Adjust AdjustUpgrade.pm for commit b1720fe63.  (Tom Lane)
Need to delete the functions we no longer have available from the dumps to be reloaded from old versions. Per buildfarm.
2025-04-08  Move contrib/spi testing from core regression tests to contrib/spi.  (Tom Lane)
It's weird to have the core regression tests depending on contrib code, and coverage testing shows that those test queries add nothing to the core-code coverage of the core tests. So pull those test bits out and put them into ordinary test scripts inside contrib/spi/, making that more like other contrib modules. Aside from being structurally nicer, anything we can take out of the core tests (which are executed multiple times per check-world run) and put into tests executed only once should be a win. It doesn't look like this change will buy a whole lot of milliseconds, but a cycle saved is a cycle earned. Also, there is some discussion around possibly removing refint and/or autoinc altogether. I don't know if that will happen, but we'd certainly need to decouple them from the core tests to do so. The tests for autoinc were quite intertwined with the undocumented "ttdummy" trigger in regress.c. That made the tests very hard to understand and contributed nothing to autoinc's testing either. So I just deleted ttdummy and rewrote the autoinc tests without it. I realized while doing this that the description of autoinc in the SGML docs is not a great description of what the function actually does, so the patch includes some updates to those docs. Author: Tom Lane <[email protected]> Reviewed-by: Heikki Linnakangas <[email protected]> Discussion: https://postgr.es/m/[email protected]
2025-04-08  Fix uninitialized index information access during apply.  (Amit Kapila)
The issue happens when building conflict information during apply of INSERT or UPDATE operations that violate unique constraints on leaf partitions. The problem was introduced in commit 9ff68679b5, which removed the redundant calls to ExecOpenIndices/ExecCloseIndices. The previous code was relying on the redundant ExecOpenIndices call in apply_handle_tuple_routing() to build the index information required for unique key conflict detection. The fix is to delay building the index information until a conflict is detected instead of relying on ExecOpenIndices to do the same. The additional benefit of this approach is that it avoids building index information when there is no conflict. Author: Hou Zhijie <[email protected]> Reviewed-by: Amit Kapila <[email protected]> Discussion: https://postgr.es/m/TYAPR01MB57244ADA33DDA57119B9D26494A62@TYAPR01MB5724.jpnprd01.prod.outlook.com
2025-04-08  Add function to get memory context stats for processes  (Daniel Gustafsson)
This adds a function for retrieving memory context statistics and information from backends as well as auxiliary processes. The intended use case is cluster debugging when under memory pressure or unanticipated memory usage characteristics. When calling the function it sends a signal to the specified process to submit statistics regarding its memory contexts into dynamic shared memory. Each memory context is returned in detail, followed by a cumulative total in case the number of contexts exceeds the max allocated amount of shared memory. Each process is limited to use at most 1MB of memory for this. A summary can also be explicitly requested by the user; this will return the TopMemoryContext and a cumulative total of all lower contexts. In order to not block on busy processes the caller specifies the number of seconds during which to retry before timing out. In the case where no statistics are published within the set timeout, the last known statistics are returned, or NULL if no previously published statistics exist. This allows dashboard-type queries to continually publish even if the target process is temporarily congested. Context records contain a timestamp to indicate when they were submitted. Author: Rahila Syed <[email protected]> Reviewed-by: Daniel Gustafsson <[email protected]> Reviewed-by: Andres Freund <[email protected]> Reviewed-by: Tomas Vondra <[email protected]> Reviewed-by: Atsushi Torikoshi <[email protected]> Reviewed-by: Fujii Masao <[email protected]> Reviewed-by: Alexander Korotkov <[email protected]> Discussion: https://postgr.es/m/CAH2L28v8mc9HDt8QoSJ8TRmKau_8FM_HKS41NeO9-6ZAkuZKXw@mail.gmail.com
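A hedged usage sketch; the function name pg_get_process_memory_contexts and the name/total_bytes result columns are assumptions based on the description above:

    SELECT name, total_bytes
    FROM pg_get_process_memory_contexts(
             (SELECT pid FROM pg_stat_activity
               WHERE backend_type = 'checkpointer'),
             false,   -- full per-context detail rather than a summary
             5.0)     -- retry for up to 5 seconds before timing out
    ORDER BY total_bytes DESC
    LIMIT 5;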
2025-04-08  Add pg_buffercache_evict_{relation,all} functions  (Andres Freund)
In addition to the added functions, the pg_buffercache_evict() function now shows whether the buffer was flushed. pg_buffercache_evict_relation(): Evicts all shared buffers in a relation at once. pg_buffercache_evict_all(): Evicts all shared buffers at once. Both functions provide a mechanism to evict multiple shared buffers at once. They are designed to address the inefficiency of repeatedly calling pg_buffercache_evict() for each individual buffer, which can be time-consuming when dealing with large shared buffer pools (e.g., ~477ms vs. ~2576ms for 16GB of fully populated shared buffers). These functions are intended for developer testing and debugging purposes and are available to superusers only. Minimal tests for the new functions are included. Also, there was no test for pg_buffercache_evict(); a test for it is added too. No new extension version is needed, as it was already increased this release by ba2a3c2302f. Author: Nazir Bilal Yavuz <[email protected]> Reviewed-by: Andres Freund <[email protected]> Reviewed-by: Aidar Imamov <[email protected]> Reviewed-by: Joseph Koshakow <[email protected]> Discussion: https://postgr.es/m/CAN55FZ0h_YoSqqutxV6DES1RW8ig6wcA8CR9rJk358YRMxZFmw%40mail.gmail.com
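A usage sketch (superuser only; the relation name is illustrative and result column names may differ):

    -- evict every shared buffer belonging to one relation
    SELECT * FROM pg_buffercache_evict_relation('my_table'::regclass);
    -- evict the entire shared buffer pool
    SELECT * FROM pg_buffercache_evict_all();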
2025-04-08  Stabilize 035_standby_logical_decoding.pl.  (Amit Kapila)
Some tests try to invalidate logical slots on the standby server by running VACUUM on the primary. The problem is that xl_running_xacts was getting generated and replayed before the VACUUM command, leading to the advancement of the active slot's catalog_xmin. Due to this, active slots were not getting invalidated, leading to test failures. We fix it by skipping the generation of xl_running_xacts for the required tests with the help of injection points. As the required interface for injection points was not present in back branches, we fixed the failing tests in them by disallowing the slot to become active for the required cases (where rows_removed conflict could be generated). Author: Hayato Kuroda <[email protected]> Reviewed-by: Bertrand Drouvot <[email protected]> Reviewed-by: Amit Kapila <[email protected]> Backpatch-through: 16, where it was introduced Discussion: https://postgr.es/m/[email protected]
2025-04-07  Flush the IO statistics of active WAL senders more frequently  (Michael Paquier)
WAL senders do not flush their statistics until they exit, limiting the monitoring possible for live processes. This is penalizing when WAL senders are running for a long time, like in streaming or logical replication setups, because it is not possible to know the amount of IO they generate while running. This commit makes WAL senders more aggressive with their statistics flush, using an interval of 1 second, with the flush timing calculated based on the existing GetCurrentTimestamp() done before the sleeps done to wait for some activity. Note that the sleep done for logical and physical WAL senders happens in two different code paths, so the stats flushes need to happen in these two places. One test is added for the physical WAL sender case, and one for the logical WAL sender case. This can be done in a stable fashion by relying on the WAL generated by the TAP tests in combination with a stats reset while a server is running, but only on HEAD as WAL data has been added to pg_stat_io in a051e71e28a1. This issue exists since a9c70b46dbe and the introduction of pg_stat_io, so backpatch down to v16. Author: Bertrand Drouvot <[email protected]> Reviewed-by: vignesh C <[email protected]> Reviewed-by: Xuneng Zhou <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 16
2025-04-07  Introduce pg_shmem_allocations_numa view  (Tomas Vondra)
Introduce the new pg_shmem_allocations_numa view with information about how shared memory is distributed across NUMA nodes. For each shared memory segment, the view returns one row for each NUMA node backing it, with the total amount of memory allocated from that node. The view may be relatively expensive, especially when executed for the first time in a backend, as it has to touch all memory pages to get reliable information about the NUMA node. This may also force allocation of the shared memory. Unlike pg_shmem_allocations, the view does not show anonymous shared memory allocations. It also does not show memory allocated using the dynamic shared memory infrastructure. Author: Jakub Wartak <[email protected]> Reviewed-by: Andres Freund <[email protected]> Reviewed-by: Bertrand Drouvot <[email protected]> Reviewed-by: Tomas Vondra <[email protected]> Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
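A query sketch; the numa_node and size column names are assumptions based on the description above:

    SELECT numa_node, pg_size_pretty(sum(size)) AS total
    FROM pg_shmem_allocations_numa
    GROUP BY numa_node
    ORDER BY numa_node;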
2025-04-07  Use specific collation where needed in new test  (Álvaro Herrera)
Oversight in commit a379061a22a8. Per Czech buildfarm members jay and hippopotamus.
2025-04-07  Fix some issues in contrib/spi/refint.c.  (Tom Lane)
check_foreign_key incorrectly used a single cache entry for its saved plans for a 'c' (cascade) trigger, although there are two different queries to execute depending on whether it fires for an update or a delete. This caused the wrong things to be done if both types of event occur in one session. (This was indeed visible in the triggers regression test, but apparently nobody ever questioned it.) To fix, add the operation type to the cache key. Its debug log output failed to distinguish update from delete events, too. Also, change the intended trigger usage from BEFORE ROW to AFTER ROW, and add checks insisting on that usage. BEFORE is really rather unsafe, since if there are other BEFORE triggers they might change or cancel the operation we are trying to check. AFTER triggers are the standard way to propagate changes to other rows, so we should follow that way here. In passing, remove a useless duplicate lookup of the cache entry. This code is mostly intended as a documentation example, so we won't consider a back-patch. Author: Dmitrii Bondar <[email protected]> Reviewed-by: Paul Jungwirth <[email protected]> Reviewed-by: Lilian Ontowhee <[email protected]> Reviewed-by: Tom Lane <[email protected]> Discussion: https://postgr.es/m/[email protected]
2025-04-07  Allow NOT NULL constraints to be added as NOT VALID  (Álvaro Herrera)
This allows them to be added without scanning the table, and validating them afterwards without holding access exclusive lock on the table after any violating rows have been deleted or fixed. Doing ALTER TABLE ... SET NOT NULL for a column that has an invalid not-null constraint validates that constraint. ALTER TABLE .. VALIDATE CONSTRAINT is also supported. There are various checks on whether an invalid constraint is allowed in a child table when the parent table has a valid constraint; this should match what we do for enforced/not enforced constraints. pg_attribute.attnotnull is now only an indicator for whether a not-null constraint exists for the column; whether it's valid or invalid must be queried in pg_constraint. Applications can continue to query pg_attribute.attnotnull as before, but now it's possible that NULL rows are present in the column even when that's set to true. For backend internal purposes, we cache the nullability status in CompactAttribute->attnullability that each tuple descriptor carries (replacing CompactAttribute.attnotnull, which was a mirror of Form_pg_attribute.attnotnull). During the initial tuple descriptor creation, based on the pg_attribute scan, we set this to UNRESTRICTED if pg_attribute.attnotnull is false, or to UNKNOWN if it's true; then we update the latter to VALID or INVALID depending on the pg_constraint scan. This flag is also copied when tupledescs are copied. Comparing tuple descs for equality must also compare the CompactAttribute.attnullability flag and return false in case of a mismatch. pg_dump deals with these constraints by storing the OIDs of invalid not-null constraints in a separate array, and running a query to obtain their properties. The regular table creation SQL omits them entirely. They are then dealt with in the same way as "separate" CHECK constraints, and dumped after the data has been loaded. Because no additional pg_dump infrastructure was required, we don't bump its version number. I decided not to bump catversion either, because the old catalog state works perfectly in the new world. (Trying to run with new catalog state and the old server version would likely run into issues, however.) System catalogs do not support invalid not-null constraints (because commit 14e87ffa5c54 didn't allow them to have pg_constraint rows anyway.) Author: Rushabh Lathia <[email protected]> Author: Jian He <[email protected]> Reviewed-by: Álvaro Herrera <[email protected]> Tested-by: Ashutosh Bapat <[email protected]> Discussion: https://postgr.es/m/CAGPqQf0KitkNack4F5CFkFi-9Dqvp29Ro=EpcWt=4_hs-Rt+bQ@mail.gmail.com
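A sketch of the new usage with illustrative names; the exact ADD CONSTRAINT spelling is inferred from the description above:

    -- added without scanning the table for violations
    ALTER TABLE accounts
        ADD CONSTRAINT accounts_email_nn NOT NULL email NOT VALID;
    -- validated later, without a prolonged access exclusive lock
    ALTER TABLE accounts VALIDATE CONSTRAINT accounts_email_nn;
    -- alternatively, SET NOT NULL validates an existing invalid constraint
    ALTER TABLE accounts ALTER COLUMN email SET NOT NULL;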
2025-04-06  Compute CRC32C using AVX-512 instructions where available  (John Naylor)
The previous implementation of CRC32C on x86 relied on the native CRC32 instruction from the SSE 4.2 extension, which operates on up to 8 bytes at a time. We can get a substantial speedup by using carryless multiplication on SIMD registers, processing 64 bytes per loop iteration. Shorter inputs fall back to ordinary CRC instructions. On Intel Tiger Lake hardware (2020), CRC is now 50% faster for inputs between 64 and 112 bytes, and 3x faster for 256 bytes. The VPCLMULQDQ instruction on 512-bit registers has been available on Intel hardware since 2019 and AMD since 2022. There is an older variant for 128-bit registers, but at least on Zen 2 it performs worse than normal CRC instructions for short inputs. We must now do a runtime check, even for builds that target SSE 4.2. This doesn't matter in practice for WAL (arguably the most critical case), because since commit e2809e3a1 the final computation with the 20-byte WAL header is inlined and unrolled when targeting that extension. Compared with two direct function calls, testing showed equal or slightly faster performance in performing an indirect function call on several dozen bytes followed by inlined instructions on constant input of 20 bytes. The MIT-licensed implementation was generated with the "generate" program from https://github.com/corsix/fast-crc32/ Based on: "Fast CRC Computation for Generic Polynomials Using PCLMULQDQ Instruction" V. Gopal, E. Ozturk, et al., 2009 Co-authored-by: Raghuveer Devulapalli <[email protected]> Co-authored-by: Paul Amonson <[email protected]> Reviewed-by: Nathan Bossart <[email protected]> Reviewed-by: Andres Freund <[email protected]> (earlier version) Reviewed-by: Matthew Sterrett <[email protected]> (earlier version) Tested-by: Raghuveer Devulapalli <[email protected]> Tested-by: David Rowley <[email protected]> (earlier version) Discussion: https://postgr.es/m/BL1PR11MB530401FA7E9B1CA432CF9DC3DC192@BL1PR11MB5304.namprd11.prod.outlook.com Discussion: https://postgr.es/m/PH8PR11MB82869FF741DFA4E9A029FF13FBF72@PH8PR11MB8286.namprd11.prod.outlook.com
2025-04-05  Fix parse_cte.c's failure to examine sub-WITHs in DML statements.  (Tom Lane)
makeDependencyGraphWalker thought that only SelectStmt nodes could contain a WithClause. Which was true in our original implementation of WITH, but astonishingly we missed updating this code when we added the ability to attach WITH to INSERT/UPDATE/DELETE (and later MERGE). Moreover, since it was coded to deliberately block recursion to a WithClause, even updating raw_expression_tree_walker didn't save it. The upshot of this was that we didn't see references to outer CTE names appearing within an inner WITH, and would neither complain about disallowed recursion nor account for such references when sorting CTEs into a usable order. The lack of complaints about this is perhaps not so surprising, because typical usage of WITH wouldn't hit either case. Still, it's pretty broken; failing to detect recursion here leads to assert failures or worse later on. Fix by factoring out the processing of sub-WITHs into a new function WalkInnerWith, and invoking that for all the statement types that can have WITH. Bug: #18878 Reported-by: Yu Liang <[email protected]> Author: Tom Lane <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 13
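A hypothetical sketch of the previously-missed shape: a DML statement inside WITH that carries its own WITH clause referencing an outer CTE name (table names are illustrative):

    WITH outer_cte AS (SELECT 1 AS x),
         ins AS (
             WITH inner_cte AS (SELECT x FROM outer_cte)  -- outer reference
             INSERT INTO target SELECT x FROM inner_cte
             RETURNING x
         )
    SELECT * FROM ins;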
2025-04-05  Avoid double transformation of json_array()'s subquery.  (Tom Lane)
transformJsonArrayQueryConstructor() applied transformStmt() to the same subquery tree twice. While this causes no issue in many cases, there are some where it causes a coredump, thanks to the parser's habit of scribbling on its input. Fix by making a copy before the first transformation (compare 0f43083d1). This is quite brute-force, but then so is the whole business of transforming the input twice. Per discussion in the bug thread, this implementation of json_array() parsing should be replaced completely. But that will take some work and will surely not be back-patchable, so for the moment let's take the easy way out. Oversight in 7081ac46a. Back-patch to v16 where that came in. Bug: #18877 Reported-by: Yu Liang <[email protected]> Author: Tom Lane <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 16
2025-04-05  Repair misbehavior with duplicate entries in FK SET column lists.  (Tom Lane)
Since v15 we've had an option to apply a foreign key constraint's ON DELETE SET DEFAULT or SET NULL action to just some of the referencing columns. There was not a check for duplicate entries in the list of columns-to-set, though. That caused a potential memory stomp in CreateConstraintEntry(), which incautiously assumed that the list of columns-to-set couldn't be longer than the number of key columns. Even after fixing that, the case doesn't work because you get an error like "multiple assignments to same column" from the SQL command that is generated to do the update. We could either raise an error for duplicate columns or silently suppress the dups, and after a bit of thought I chose to do the latter. This is motivated by the fact that duplicates in the FK column list are legal, so it's not real clear why duplicates in the columns-to-set list shouldn't be. Of course there's no need to actually set the column more than once. I left in the fix in CreateConstraintEntry() too, just because it didn't seem like such low-level code ought to be making assumptions about what it's handed. Bug: #18879 Reported-by: Yu Liang <[email protected]> Author: Tom Lane <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 15
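A sketch of the affected syntax (available since v15), with hypothetical tables:

    CREATE TABLE pk (a int, b int, PRIMARY KEY (a, b));
    CREATE TABLE fk (a int, b int,
        FOREIGN KEY (a, b) REFERENCES pk
            ON DELETE SET NULL (a, a));  -- duplicate "a" is now silently collapsed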
2025-04-04  Avoid extra index searches through preprocessing.  (Peter Geoghegan)
Transform low_compare and high_compare nbtree skip array inequalities (with opclasses that offer skip support) in such a way as to allow _bt_first to consistently apply later keys when it descends the tree. This can lower the number of index searches for multi-column scans that use a ">" key on one of the index's prefix columns (or use a "<" key, when scanning backwards) when it precedes some later lower-order key. For example, an index qual "WHERE a > 5 AND b = 2" will now be converted to "WHERE a >= 6 AND b = 2" by a new preprocessing step that takes place after low_compare and high_compare have been finalized. That way, the initial call to _bt_first can use "WHERE a >= 6 AND b = 2" to find an initial position, rather than just using "WHERE a > 5" -- "b = 2" can be applied during every _bt_first call. There's a decent chance that this will allow such a scan to avoid the extra search that might otherwise be needed to determine the lowest "a" value still satisfying "WHERE a > 5". The transformation process can only lower the total number of index pages read when the use of a more restrictive set of initial positioning keys in _bt_first actually allows the scan to land on some later leaf page directly, relative to the unoptimized case (or on an earlier leaf page directly, when scanning backwards). But the savings can really add up in cases where an affected skip array comes after some other array. For example, a scan indexqual "WHERE x IN (1, 2, 3) AND y > 5 AND z = 2" can save as many as 3 _bt_first calls by applying the new transformation to its "y" array (up to 1 extra search can be avoided per "x" element). Follow-up to commit 92fe23d9, which added nbtree skip scan. Author: Peter Geoghegan <[email protected]> Reviewed-By: Matthias van de Meent <[email protected]> Discussion: https://postgr.es/m/CAH2-Wz=FJ78K3WsF3iWNxWnUCY9f=Jdg3QPxaXE=uYUbmuRz5Q@mail.gmail.com
2025-04-04  Add nbtree skip scan optimization.  (Peter Geoghegan)
Teach nbtree multi-column index scans to opportunistically skip over irrelevant sections of the index given a query with no "=" conditions on one or more prefix index columns. When nbtree is passed input scan keys derived from a predicate "WHERE b = 5", new nbtree preprocessing steps output "WHERE a = ANY(<every possible 'a' value>) AND b = 5" scan keys. That is, preprocessing generates a "skip array" (and an output scan key) for the omitted prefix column "a", which makes it safe to mark the scan key on "b" as required to continue the scan. The scan is therefore able to repeatedly reposition itself by applying both the "a" and "b" keys. A skip array has "elements" that are generated procedurally and on demand, but otherwise works just like a regular ScalarArrayOp array. Preprocessing can freely add a skip array before or after any input ScalarArrayOp arrays. Index scans with a skip array decide when and where to reposition the scan using the same approach as any other scan with array keys. This design builds on the design for array advancement and primitive scan scheduling added to Postgres 17 by commit 5bf748b8. Testing has shown that skip scans of an index with a low cardinality skipped prefix column can be multiple orders of magnitude faster than an equivalent full index scan (or sequential scan). In general, the cardinality of the scan's skipped column(s) limits the number of leaf pages that can be skipped over. The core B-Tree operator classes on most discrete types generate their array elements with the help of their own custom skip support routine. This infrastructure gives nbtree a way to generate the next required array element by incrementing (or decrementing) the current array value. It can reduce the number of index descents in cases where the next possible indexable value frequently turns out to be the next value stored in the index. Opclasses that lack a skip support routine fall back on having nbtree "increment" (or "decrement") a skip array's current element by setting the NEXT (or PRIOR) scan key flag, without directly changing the scan key's sk_argument. These sentinel values behave just like any other value from an array -- though they can never locate equal index tuples (they can only locate the next group of index tuples containing the next set of non-sentinel values that the scan's arrays need to advance to). A skip array's range is constrained by "contradictory" inequality keys. For example, a skip array on "x" will only generate the values 1 and 2 given a qual such as "WHERE x BETWEEN 1 AND 2 AND y = 66". Such a skip array qual usually has near-identical performance characteristics to a comparable SAOP qual "WHERE x = ANY('{1, 2}') AND y = 66". However, improved performance isn't guaranteed. Much depends on physical index characteristics. B-Tree preprocessing is optimistic about skipping working out: it applies static, generic rules when determining where to generate skip arrays, which assumes that the runtime overhead of maintaining skip arrays will pay for itself -- or lead to only a modest performance loss. As things stand, these assumptions are much too optimistic: skip array maintenance will lead to unacceptable regressions with unsympathetic queries (queries whose scan can't skip over many irrelevant leaf pages). An upcoming commit will address the problems in this area by enhancing _bt_readpage's approach to saving cycles on scan key evaluation, making it work in a way that directly considers the needs of = array keys (particularly = skip array keys). 
Author: Peter Geoghegan <[email protected]> Reviewed-By: Masahiro Ikeda <[email protected]> Reviewed-By: Heikki Linnakangas <[email protected]> Reviewed-By: Matthias van de Meent <[email protected]> Reviewed-By: Tomas Vondra <[email protected]> Reviewed-By: Aleksander Alekseev <[email protected]> Reviewed-By: Alena Rybakina <[email protected]> Discussion: https://postgr.es/m/CAH2-Wzmn1YsLzOGgjAQZdn1STSG_y8qP__vggTaPAYXJP+G4bw@mail.gmail.com
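An illustrative sketch of the class of scan this enables, using a hypothetical schema:

    CREATE INDEX ON sales (region, amount);
    -- no "=" condition on the leading column, yet the index is usable:
    EXPLAIN SELECT * FROM sales WHERE amount = 5;
    -- preprocessing treats this roughly as
    --   WHERE region = ANY(<every possible region>) AND amount = 5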
2025-04-04  Stabilize regression test from c0962a113.  (Tom Lane)
Per buildfarm. Co-authored-by: Alena Rybakina <[email protected]> Co-authored-by: Tom Lane <[email protected]> Discussion: https://postgr.es/m/srnuqlttuimzmvoulhsrbgvj4vnul6b65osswvua7sfkqsvmuy@yg7apybpxp34
2025-04-04  Convert 'x IN (VALUES ...)' to 'x = ANY ...' when appropriate  (Alexander Korotkov)
This commit implements the automatic conversion of 'x IN (VALUES ...)' into ScalarArrayOpExpr. That simplifies the query tree, eliminating the appearance of an unnecessary join. Since VALUES describes a relational table, and the value of such a list is a table row, the optimizer will likely face an underestimation problem due to the inability to estimate cardinality through MCV statistics. The cardinality evaluation mechanism can work with the array inclusion check operation. If the array is small enough (< 100 elements), it will perform a statistical evaluation element by element. We perform the transformation in convert_ANY_sublink_to_join() if the VALUES RTE is suitable and the transformation is possible. The conversion is only possible for operations on scalar values, not rows. Also, we currently support the transformation only when it ends up with a constant array. Otherwise, the evaluation of non-hashed SAOP might be slower than the corresponding Hash Join with VALUES. Discussion: https://postgr.es/m/0184212d-1248-4f1f-a42d-f5cb1c1976d2%40tantorlabs.com Author: Alena Rybakina <[email protected]> Author: Andrei Lepikhov <[email protected]> Reviewed-by: Ivan Kush <[email protected]> Reviewed-by: Alexander Korotkov <[email protected]>
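An illustrative query shape; with a constant list this is now planned as a ScalarArrayOpExpr rather than a join against a VALUES RTE (table and column names hypothetical):

    EXPLAIN SELECT * FROM t WHERE x IN (VALUES (1), (2), (3));
    -- now equivalent in plan shape to:
    EXPLAIN SELECT * FROM t WHERE x = ANY ('{1,2,3}');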
2025-04-04  Allow "COPY table TO" command to copy rows from materialized views.  (Fujii Masao)
Previously, the "COPY table TO" command worked only with plain tables and did not support materialized views, even when they were populated and had physical storage. To copy rows from materialized views, the "COPY (query) TO" command had to be used instead. This commit extends "COPY table TO" to support populated materialized views directly, improving usability and performance, as "COPY table TO" is generally faster than "COPY (query) TO". Note that copying from unpopulated materialized views will still result in an error. Author: jian he <[email protected]> Reviewed-by: Kirill Reshke <[email protected]> Reviewed-by: David G. Johnston <[email protected]> Reviewed-by: Vignesh C <[email protected]> Reviewed-by: Fujii Masao <[email protected]> Discussion: https://postgr.es/m/CACJufxHVxnyRYy67hiPePNCPwVBMzhTQ6FaL9_Te5On9udG=yg@mail.gmail.com
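A usage sketch with hypothetical names:

    CREATE MATERIALIZED VIEW mv AS SELECT * FROM t;
    COPY mv TO STDOUT;                  -- now supported when mv is populated
    COPY (SELECT * FROM mv) TO STDOUT;  -- the previous workaround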
2025-04-04  Extend ALTER DEFAULT PRIVILEGES to define default privileges for large objects.  (Fujii Masao)
Previously, ALTER DEFAULT PRIVILEGES did not support large objects. This meant that to grant privileges to users other than the owner, permissions had to be manually assigned each time a large object was created, which was inconvenient. This commit extends ALTER DEFAULT PRIVILEGES to allow defining default access privileges for large objects. With this change, specified privileges will automatically apply to newly created large objects, making privilege management more efficient. As a side effect, this commit introduces the new keyword OBJECTS since it's used in the syntax of ALTER DEFAULT PRIVILEGES. Original patch by Haruka Takatsuka, with some fixes and tests by Yugo Nagata, and rebased by Laurenz Albe. Author: Takatsuka Haruka <[email protected]> Co-authored-by: Yugo Nagata <[email protected]> Co-authored-by: Laurenz Albe <[email protected]> Reviewed-by: Masao Fujii <[email protected]> Discussion: https://postgr.es/m/[email protected]
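A sketch of the new syntax with illustrative role names:

    ALTER DEFAULT PRIVILEGES FOR ROLE alice
        GRANT SELECT, UPDATE ON LARGE OBJECTS TO bob;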
2025-04-04  Fix logical decoding test to correctly check slot removal on standby.  (Fujii Masao)
The regression test for logical decoding verifies whether a logical slot is correctly dropped on a standby when its associated database is dropped. However, the test mistakenly retrieved slot information from the primary instead of the standby, causing incorrect behavior. This commit fixes the issue by ensuring the test correctly checks the slot on the standby. Back-patch to all supported versions. Author: Hayato Kuroda <[email protected]> Reviewed-by: Fujii Masao <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 13
2025-04-04  Fix logical decoding regression tests to correctly check slot existence.  (Fujii Masao)
The regression tests for logical decoding verify whether a logical slot exists or has been dropped. Previously, these tests attempted to retrieve "slot_name" from the result of slot(), but since "slot_name" was not included in the result, slot()->{'slot_name'} always returned undef, leading to incorrect behavior. This commit fixes the issue by checking the "plugin" field in the result of slot() instead, ensuring the tests properly verify slot existence. Back-patch to all supported versions. Author: Hayato Kuroda <[email protected]> Reviewed-by: Fujii Masao <[email protected]> Discussion: https://postgr.es/m/OSCPR01MB149667EC4E738769CA80B7EA5F5AE2@OSCPR01MB14966.jpnprd01.prod.outlook.com Backpatch-through: 13
2025-04-03  Fix oversight in commit 0dca5d68d.  (Tom Lane)
As coded, fmgr_sql() would get an assertion failure for a SQL function that has an empty body and is declared to return some type other than VOID. Typically you'd never get that far because fmgr_sql_validator() would reject such a definition (I suspect that's how I managed to miss the bug). But if check_function_bodies is off or the function is polymorphic, the validation check wouldn't get made. Reported-by: Alexander Lakhin <[email protected]> Author: Tom Lane <[email protected]> Discussion: https://postgr.es/m/[email protected]
2025-04-03  oauth: Remove timeout from t/002_client when not needed  (Daniel Gustafsson)
The connect_timeout=1 setting for the --hang-forever test was left in place and used by later tests, causing unexpected timeouts on slower buildfarm animals. Remove it when no longer needed. Per buildfarm member skink, reported by Andres on Discord. Author: Jacob Champion <[email protected]> Reported-by: Andres Freund <[email protected]>
2025-04-03  oauth: Fix build on platforms without epoll/kqueue  (Daniel Gustafsson)
register_socket() missed a variable declaration if neither HAVE_SYS_EPOLL_H nor HAVE_SYS_EVENT_H was defined. While we're fixing that, adjust the tests to check pg_config.h for one of the multiplexer implementations, rather than assuming that Windows is the only platform without support. (Christoph reported this on hurd-amd64, an experimental Debian.) Author: Jacob Champion <[email protected]> Reported-by: Christoph Berg <[email protected]> Discussion: https://postgr.es/m/Z-sPFl27Y0ZC-VBl%40msg.df7cb.de
2025-04-03  Restrict copying of invalidated replication slots.  (Masahiko Sawada)
Previously, invalidated logical and physical replication slots could be copied using the pg_copy_logical_replication_slot and pg_copy_physical_replication_slot functions. Replication slots that were invalidated for reasons other than WAL removal retained their restart_lsn. This meant that a new slot copied from an invalidated slot could have a restart_lsn pointing to a WAL segment that might have already been removed. This commit restricts the copying of invalidated replication slots. Backpatch to v16, where slots could retain their restart_lsn when invalidated for reasons other than WAL removal. For v15 and earlier, this check is not required since slots can only be invalidated due to WAL removal, and existing checks already handle this issue. Author: Shlok Kyal <[email protected]> Reviewed-by: vignesh C <[email protected]> Reviewed-by: Zhijie Hou <[email protected]> Reviewed-by: Peter Smith <[email protected]> Reviewed-by: Masahiko Sawada <[email protected]> Reviewed-by: Amit Kapila <[email protected]> Discussion: https://postgr.es/m/CANhcyEU65aH0VYnLiu%3DOhNNxhnhNhwcXBeT-jvRe1OiJTo_Ayg%40mail.gmail.com Backpatch-through: 16
2025-04-03  libpq: Add support for dumping SSL key material to file  (Daniel Gustafsson)
This adds a new connection parameter which instructs libpq to write out key material client-side into a file in order to make connection debugging with Wireshark and similar tools possible. The file format used is the standardized NSS format. Author: Abhishek Chanda <[email protected]> Co-authored-by: Daniel Gustafsson <[email protected]> Reviewed-by: Jacob Champion <[email protected]> Discussion: https://postgr.es/m/CAKiP-K85C8uQbzXKWf5wHQPkuygGUGcufke713iHmYWOe9q2dA@mail.gmail.com
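A hedged connection-string sketch; the parameter name sslkeylogfile is an assumption based on this summary, and the host and path are illustrative:

    host=db.example.com dbname=app sslmode=require sslkeylogfile=/tmp/pg-tls-keys.txt

Pointing Wireshark's TLS "(Pre)-Master-Secret log filename" preference at the same file then allows decrypting the captured session.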
2025-04-03  Fix slot synchronization for two_phase enabled slots.  (Amit Kapila)
The issue is that the transactions prepared before two-phase decoding is enabled can fail to replicate to the subscriber after being committed on a promoted standby following a failover. This is because the two_phase_at field of a slot, which tracks the LSN from which two-phase decoding starts, is not synchronized to standby servers. Without two_phase_at, the logical decoding might incorrectly identify prepared transactions as already replicated to the subscriber after promotion of the standby server, causing them to be skipped. To address the issue on HEAD, the two_phase_at field of the slot is exposed by the pg_replication_slots view and allows the slot synchronization to copy this value to the corresponding synced slot on the standby server. This bug is likely to occur if the user toggles the two_phase option to true after initial slot creation. Given that altering the two_phase option of a replication slot is not allowed in PostgreSQL 17, this bug is less likely to occur. We can't change the view/function definition in back branches, so we can't push the same fix, but we are brainstorming an appropriate solution for PG17. Author: Zhijie Hou <[email protected]> Reviewed-by: Amit Kapila <[email protected]> Reviewed-by: Masahiko Sawada <[email protected]> Discussion: https://postgr.es/m/TYAPR01MB5724CC7C288535BBCEEE65DA94A72@TYAPR01MB5724.jpnprd01.prod.outlook.com
2025-04-02  Add test for HeapBitmapScan's broken skip_fetch optimization  (Andres Freund)
In the previous commit HeapBitmapScan's skip_fetch optimization was removed, due to being broken in not easily fixable ways. Add a test that verifies we don't re-introduce this bug if somebody tries to re-add the feature. Only add the test to master for now; it's possible it's not entirely stable. That seems sufficient, as we're not going to re-introduce the feature on the backbranches. I did verify that the test passes on all branches. If the test turns out to be unproblematic, we can backpatch it later, should we feel a need to do so. Discussion: https://postgr.es/m/CAEze2Wg3gXXZTr6_rwC+s4-o2ZVFB5F985uUSgJTsECx6AmGcQ@mail.gmail.com
2025-04-02  Change SQL-language functions to use the plan cache.  (Tom Lane)
In the historical implementation of SQL functions (if they don't get inlined), we built plans for all the contained queries at first call within an outer query, and then re-used those plans for the duration of the outer query, and then forgot everything. This was not ideal, not least because the plans could not be customized to specific values of the function's parameters. Our plancache infrastructure seems mature enough to be used here. That will solve both the problem with not being able to build custom plans and the problem with not being able to share work across successive outer queries. Aside from those performance concerns, this change fixes a longstanding bugaboo with SQL functions: you could not write DDL that would affect later statements in the same function. That's mostly still true with new-style SQL functions, since the results of parse analysis are baked into the stored query trees (and protected by dependency records). But for old-style SQL functions, it will now work much as it does with PL/pgSQL functions, because we delay parse analysis and planning of each query until we're ready to run it. Some edge cases that require replanning are now handled better too; see for example the new rowsecurity test, where we now detect an RLS context change that was previously missed. One other edge-case change that might be worthy of a release note is that we now insist that a SQL function's result be generated by the physically-last query within it. Previously, if the last original query was deleted by a DO INSTEAD NOTHING rule, we'd be willing to take the result from the preceding query instead. This behavior was undocumented except in source-code comments, and it seems hard to believe that anyone's relying on it. Along the way to this feature, we needed a few infrastructure changes: * The plancache can now take either a raw parse tree or an analyzed-but-not-rewritten Query as the starting point for a CachedPlanSource. If given a Query, it is caller's responsibility that nothing will happen to invalidate that form of the query. We use this for new-style SQL functions, where what's in pg_proc is serialized Query(s) and we trust the dependency mechanism to disallow DDL that would break those. * The plancache now offers a way to invoke a post-rewrite callback to examine/modify the rewritten parse tree when it is rebuilding the parse trees after a cache invalidation. We need this because SQL functions sometimes adjust the parse tree to make its output exactly match the declared result type; if the plan gets rebuilt, that has to be re-done. * There is a new backend module utils/cache/funccache.c that abstracts the idea of caching data about a specific function usage (a particular function and set of input data types). The code in it is moved almost verbatim from PL/pgSQL, which has done that for a long time. We use that logic now for SQL-language functions too, and maybe other PLs will have use for it in the future. Author: Alexander Pyhalov <[email protected]> Co-authored-by: Tom Lane <[email protected]> Reviewed-by: Pavel Stehule <[email protected]> Discussion: https://postgr.es/m/8216639.NyiUUSuA9g@aivenlaptop
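A hypothetical sketch of the DDL-then-DML case that now works in an old-style (string-body) SQL function, since each query is parsed and planned only when it is about to run:

    SET check_function_bodies = off;  -- body references a table it creates itself
    CREATE FUNCTION make_and_count() RETURNS bigint LANGUAGE sql AS '
        CREATE TABLE scratch (a int);
        INSERT INTO scratch VALUES (1), (2);
        SELECT count(*) FROM scratch;
    ';
    SELECT make_and_count();  -- previously failed when all queries were planned up front; now returns 2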
2025-04-02  Need to do CommandCounterIncrement after StoreAttrMissingVal.  (Tom Lane)
Without this, an additional change to the same pg_attribute row within the same command will fail. This is possible at least with ALTER TABLE ADD COLUMN on a multiple-inheritance-pathway structure. (Another potential hazard is that immediately-following operations might not see the missingval.) Introduced by 95f650674, which split the former coding that used a single pg_attribute update to change both atthasdef and atthasmissing/attmissingval into two updates, but missed that this should entail two CommandCounterIncrements as well. Like that fix, back-patch through v13. Reported-by: Alexander Lakhin <[email protected]> Author: Tender Wang <[email protected]> Reviewed-by: Tom Lane <[email protected]> Discussion: https://postgr.es/m/[email protected] Backpatch-through: 13
2025-04-02  Make cancel request keys longer  (Heikki Linnakangas)
Currently, the cancel request key is a 32-bit token, which isn't very much entropy. If you want to cancel another session's query, you can brute-force it. In most environments, an unauthorized cancellation of a query isn't very serious, but it nevertheless would be nice to have more protection from it. Hence make the key longer, to make it harder to guess. The longer cancellation keys are generated when using the new protocol version 3.2. For connections using version 3.0, short 4-byte keys are still used. The new longer key length is not hardcoded in the protocol anymore; the client is expected to deal with variable-length keys, up to 256 bytes. This flexibility allows e.g. a connection pooler to add more information to the cancel key, which might be useful for finding the connection. Reviewed-by: Jelte Fennema-Nio <[email protected]> Reviewed-by: Robert Haas <[email protected]> (earlier versions) Discussion: https://www.postgresql.org/message-id/[email protected]
2025-04-02  libpq: Add min/max_protocol_version connection options  (Heikki Linnakangas)
All supported versions of the PostgreSQL server send the NegotiateProtocolVersion message when an unsupported minor protocol version is requested by a client. But many other applications that implement the PostgreSQL protocol (connection poolers, or other databases) do not, and the same is true for PostgreSQL server versions older than 9.3. Connecting to such other applications thus fails if a client requests a protocol version other than 3.0. This patch adds a max_protocol_version connection option to libpq that specifies the protocol version that libpq should request from the server. Currently only 3.0 is supported, but that will change in a future commit that bumps the protocol version. Even after that version bump the default will likely stay 3.0 for the time being. Once more of the ecosystem supports the NegotiateProtocolVersion message we might want to change the default to the latest minor version. This also adds the similar min_protocol_version connection option, to allow the client to specify that connecting should fail if a lower protocol version is attempted by the server. This can be used to ensure that certain protocol features are used, which can be particularly useful if those features impact security. Author: Jelte Fennema-Nio <[email protected]> Reviewed-by: Robert Haas <[email protected]> (earlier versions) Discussion: https://www.postgresql.org/message-id/CAGECzQTfc_O%[email protected] Discussion: https://www.postgresql.org/message-id/CAGECzQRbAGqJnnJJxTdKewTsNOovUt4bsx3NFfofz3m2j-t7tA@mail.gmail.com
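Hedged connection-string sketches (host names illustrative; per the above, 3.0 is the only version accepted as of this commit):

    # pin to 3.0 when talking to a pooler lacking NegotiateProtocolVersion support
    host=pooler.example.com dbname=app max_protocol_version=3.0
    # once a newer minor version exists, a client could require at least that version
    host=db.example.com dbname=app min_protocol_version=3.2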
2025-04-02  tests: Fix incompatibility of test_aio with *_FORCE_RELEASE  (Andres Freund)
The test added in 93bc3d75d8e failed in a build with RELCACHE_FORCE_RELEASE and CATCACHE_FORCE_RELEASE defined. The test intentionally forgets to exit batchmode - normally that would trigger an error at the end of the transaction, which the test verifies. However, with RELCACHE_FORCE_RELEASE and CATCACHE_FORCE_RELEASE defined, we get other code (output function lookup) entering batchmode and erroring out because batchmode isn't allowed to be entered recursively. Fix that by changing the queries in question to not output any rows. That's not exactly pretty, but seems to avoid the problem reliably. Eventually we might want to make RELCACHE_FORCE_RELEASE and CATCACHE_FORCE_RELEASE GUCs, so we can disable them where necessary - this isn't the first test having difficulty with those debug options. But that's for later. Per buildfarm member prion. Discussion: https://postgr.es/m/uc62i6vi5gd4bi6wtjj5poadqxolgy55e7ihkmf3mthjegb6zl@zqo7xez7sc2r
2025-04-02  tests: Cope with WARNINGs during failed CREATE DB on windows  (Andres Freund)
The test added in 93bc3d75d8e sometimes fails on windows, due to warnings like WARNING: some useless files may be left behind in old database directory "base/16514" The reason for that is createdb_failure_callback() does not ensure that there are no open file descriptors for files in the partially created, to-be-dropped, database. We do take care in dropdb(), but that involves waiting for checkpoints and a ProcSignalBarrier, which we probably don't want to do in an error callback. This should probably be fixed one day, but for now 001_aio.pl needs to cope. Per buildfarm animals fairywren and drongo. Discussion: https://postgr.es/m/uc62i6vi5gd4bi6wtjj5poadqxolgy55e7ihkmf3mthjegb6zl@zqo7xez7sc2r
2025-04-02  Add support for NOT ENFORCED in foreign key constraints  (Peter Eisentraut)
This expands the NOT ENFORCED constraint flag, previously only supported for CHECK constraints (commit ca87c415e2f), to foreign key constraints. Normally, when a foreign key constraint is created on a table, action and check triggers are added to maintain data integrity. With this patch, if a constraint is marked as NOT ENFORCED, integrity checks are no longer required, making these triggers unnecessary. Consequently, when creating a NOT ENFORCED foreign key constraint, triggers will not be created, and the constraint will be marked as NOT VALID. Similarly, if an existing foreign key constraint is changed to NOT ENFORCED, the associated triggers will be dropped, and the constraint will also be marked as NOT VALID. Conversely, if a NOT ENFORCED foreign key constraint is changed to ENFORCED, the necessary triggers will be created, and the constraint will be changed to VALID by performing the necessary validation. Since not-enforced foreign key constraints have no triggers, the shortcut used for example in psql and pg_dump to skip looking for foreign keys if the relation is known not to have triggers no longer applies. (It already didn't work for partitioned tables.) Author: Amul Sul <[email protected]> Reviewed-by: Joel Jacobson <[email protected]> Reviewed-by: Andrew Dunstan <[email protected]> Reviewed-by: Peter Eisentraut <[email protected]> Reviewed-by: jian he <[email protected]> Reviewed-by: Alvaro Herrera <[email protected]> Reviewed-by: Ashutosh Bapat <[email protected]> Reviewed-by: Isaac Morland <[email protected]> Reviewed-by: Alexandra Wang <[email protected]> Tested-by: Triveni N <[email protected]> Discussion: https://www.postgresql.org/message-id/flat/CAAJ_b962c5AcYW9KUt_R_ER5qs3fUGbe4az-SP-vuwPS-w-AGA@mail.gmail.com
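A sketch with hypothetical table names; the ALTER CONSTRAINT form for switching enforcement is inferred from the description above:

    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id) NOT ENFORCED;
    -- no triggers are created and the constraint is marked NOT VALID;
    -- re-enforcing creates the triggers and validates existing rows:
    ALTER TABLE orders ALTER CONSTRAINT orders_customer_fk ENFORCED;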
2025-04-02  tests: Cope with io_method in TEMP_CONFIG in test_aio  (Andres Freund)
If io_method is set in TEMP_CONFIG the test added in 93bc3d75d8e fails, because it assumes the io_method specified at initdb is actually used. Fix that by appending the io_method again, after initdb (and thus after TEMP_CONFIG has been added by Cluster.pm). Per buildfarm animal bumblebee Discussion: https://postgr.es/m/zh5u22wbpcyfw2ddl3lsvmsxf4yvsrvgxqwwmfjddc4c2khsgp@gfysyjsaelr5
2025-04-01  Introduce a SQL-callable function array_sort(anyarray).  (Tom Lane)
Create a function that will sort the elements of an array according to the element type's sort order. If the array has more than one dimension, the sub-arrays of the first dimension are sorted per normal array-comparison rules, leaving their contents alone. In support of this, add pg_type.typarray to the set of fields cached by the typcache. Author: Junwang Zhao <[email protected]> Co-authored-by: Jian He <[email protected]> Reviewed-by: Aleksander Alekseev <[email protected]> Discussion: https://postgr.es/m/CAEG8a3J41a4dpw_-F94fF-JPRXYxw-GfsgoGotKcjs9LVfEEvw@mail.gmail.com
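Usage examples:

    SELECT array_sort(ARRAY[3, 1, 2]);           -- {1,2,3}
    -- multidimensional inputs reorder whole first-dimension sub-arrays
    SELECT array_sort(ARRAY[[2, 9], [1, 5]]);    -- {{1,5},{2,9}}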
2025-04-01  aio: Add test_aio module  (Andres Freund)
To make the tests possible, a few functions from bufmgr.c/localbuf.c had to be exported, via buf_internals.h. Reviewed-by: Noah Misch <[email protected]> Co-authored-by: Andres Freund <[email protected]> Co-authored-by: Nazir Bilal Yavuz <[email protected]> Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt