While in restore mode, which we use to load the schema when a new node is added
to the cluster, partition child tables should correctly inherit the
distribution properties from the parent table. This support was lacking, thus
leading to incorrect handling of such tables.
Per report by Virendra Kumar.
|
|
Similar to what we did in e688c0c23c962d425b82fdfad014bace4207af1d, we must not
rely on the temporary namespace on the coordinator since it may change on the
remote nodes. Instead we use the pg_my_temp_schema() function to find the
currently active temporary schema on the remote node.
|
|
Chi Gao and Hengbing Wang reported certain issues around transaction handling
and demonstrated via xlogdump how certain transactions were getting marked
committed/aborted repeatedly on a datanode. When an already committed
transaction is attempted to be aborted again, it results in a PANIC. Upon
investigation, this uncovered a very serious yet long standing bug in
transaction handling.
If the client is running in autocommit mode, we try to avoid starting a
transaction block on the datanode side if only one datanode is going to be
involved in the transaction. This is an optimisation to speed up short queries
touching only a single node. But when the query rewriter transforms a single
statement into multiple statements, we would still (and incorrectly) run each
statement in an autocommit mode on the datanode. This can cause inconsistencies
when one statement commits but the next statement aborts. And it may also lead
to the PANIC situations if we continue to use the same global transaction
identifier for the statements.
This can also happen when the user invokes a user-defined function. If the
function has multiple statements, each statement will run in autocommit mode
if it's FQSed (fast-query-shipped), thus again creating inconsistency if a
following statement in the function fails.
We now have a more elaborate mechanism to tackle autocommit and transaction
block needs. The special casing for force_autocommit is now removed, thus
making it more predictable. We also have specific conditions to check to ensure
that we don't mix up autocommit and transaction-block execution for the same
global xid.
Finally, if a query rewriter transforms a single statement into multiple
statements, we run those statements in a transaction block. Together these
changes should help us fix the problems.
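To illustrate the rule, here is a minimal standalone C sketch of the decision
described above (the function and parameter names are hypothetical, not the
actual XL code):

    #include <stdbool.h>

    /* Run in a transaction block on the datanode unless this is a
     * single-statement, single-node query under autocommit. */
    static bool
    need_transaction_block(bool autocommit, int num_statements, int num_nodes)
    {
        if (!autocommit)
            return true;        /* explicit transaction block */
        if (num_statements > 1)
            return true;        /* rewriter produced multiple statements */
        if (num_nodes > 1)
            return true;        /* multiple nodes, no single-node shortcut */
        return false;           /* safe to run in autocommit mode */
    }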
|
|
Rules are converted to their string representation and stored in the catalog.
While building a relation descriptor, this information is read back and
converted into a Node representation. Since relation descriptors could be built
while we are reading plan information sent by the remote server in a
stringified representation, trying to read the rules with portable input turned
on may lead to unpleasant behaviour. So we must first reset portable input and
restore it after reading the rules. The same applies to RLS policies; even
though we don't have a test showing the impact, it looks like a sane thing to
fix anyway.
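A minimal sketch of the save/toggle/restore pattern (the flag and function
names here are illustrative, not necessarily XL's exact ones):

    #include <stdbool.h>

    static bool portable_input = false; /* illustrative global toggle */

    /* Parse a catalog string with portable input disabled, restoring the
     * previous setting afterwards so remote plan reading is unaffected. */
    static void
    parse_with_local_input(void (*parse_fn) (const char *), const char *str)
    {
        bool        save = portable_input;

        portable_input = false; /* rules are stored in local, not portable, form */
        parse_fn(str);
        portable_input = save;
    }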
|
|
The coordinator_lxid GUC is internally stored as uint32, but was defined
as plain int32, triggering a compiler warning. It's also unclear what
would happen for transaction IDs outside the signed range (possibly some
strange issues).
This adds a new GUC type (UInt), used only for this one GUC. The patch
is fairly large, but most of it is boilerplate infrastructure to support
the new GUC type. We considered simpler workarounds (e.g. treating the
GUC as a string and converting it to/from uint32 using GUC hooks), but
this seems much cleaner and tidier.
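For reference, a standalone sketch of the range checking a uint32 GUC needs
(not the patch's actual code):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Parse a string as uint32, rejecting overflow and trailing garbage. */
    static bool
    parse_uint32(const char *value, uint32_t *result)
    {
        char       *end;
        unsigned long val;

        errno = 0;
        val = strtoul(value, &end, 10);
        if (errno == ERANGE || val > UINT32_MAX || end == value || *end != '\0')
            return false;
        *result = (uint32_t) val;
        return true;
    }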
|
|
gcc 6.4.1 complains when ExecRemoteQuery(PlanState *) gets called with a
(RemoteSubqueryState *) parameter. This commit adds explicit casts in a
few places to silence the warning noise.
An alternative fix might be to declare the parameter as
(RemoteSubqueryState *), but that does not quite work, as ResponseCombiner
needs to keep a pointer to either ExecRemoteQuery or ExecRemoteSubplan. So
the explicit cast seems better.
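A standalone illustration of the warning and the cast (types simplified, not
the actual XL structs):

    #include <stdio.h>

    typedef struct PlanState { int type; } PlanState;
    typedef struct RemoteSubqueryState { PlanState ps; int extra; } RemoteSubqueryState;

    static void
    ExecRemoteQueryDemo(PlanState *node)
    {
        printf("node type = %d\n", node->type);
    }

    int
    main(void)
    {
        RemoteSubqueryState state = {{42}, 0};

        /* gcc warns about the incompatible pointer type without this cast */
        ExecRemoteQueryDemo((PlanState *) &state);
        return 0;
    }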
|
|
Recent changes in PG 10 generate a nextval() expression (there was no support
for NextValueExpr in ruleutils before that). But that fails on the datanode
side because only DEFAULT values are accepted for identity columns, unless
overridden. This patch restores the XL behaviour, thus fixing the regression
failures.
|
|
This is the merge-base of PostgreSQL's master branch and REL_10_STABLE branch.
This should be the last merge from PG's master branch into the XL 10 branch.
Subsequent merges must happen from the REL_10_STABLE branch.
|
|
|
|
Stress testing by Andreas Seltenreich disclosed longstanding problems that
occur if a FATAL exit (e.g. due to receipt of SIGTERM) occurs while we are
trying to execute a ROLLBACK of an already-failed transaction. In such a
case, xact.c is in TBLOCK_ABORT state, so that AbortOutOfAnyTransaction
would skip AbortTransaction and go straight to CleanupTransaction. This
led to an assert failure in an assert-enabled build (due to the ROLLBACK's
portal still having a cleanup hook) or without assertions, to a FATAL exit
complaining about "cannot drop active portal". The latter's not
disastrous, perhaps, but it's messy enough to want to improve it.
We don't really want to run all of AbortTransaction in this code path.
The minimum required to clean up the open portal safely is to do
AtAbort_Memory and AtAbort_Portals. It seems like a good idea to
do AtAbort_Memory unconditionally, to be entirely sure that we are
starting with a safe CurrentMemoryContext. That means that if the
main loop in AbortOutOfAnyTransaction does nothing, we need an extra
step at the bottom to restore CurrentMemoryContext = TopMemoryContext,
which I chose to do by invoking AtCleanup_Memory. This'll result in
calling AtCleanup_Memory twice in many of the paths through this function,
but that seems harmless and reasonably inexpensive.
The original motivation for the assertion in AtCleanup_Portals was that
we wanted to be sure that any user-defined code executed as a consequence
of the cleanup hook runs during AbortTransaction not CleanupTransaction.
That still seems like a valid concern, and now that we've seen one case
of the assertion firing --- which means that exactly that would have
happened in a production build --- let's replace the Assert with a runtime
check. If we see the cleanup hook still set, we'll emit a WARNING and
just drop the hook unexecuted.
This has been like this a long time, so back-patch to all supported
branches.
Discussion: https://postgr.es/m/[email protected]
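A standalone sketch of the runtime check that replaces the Assert (the Portal
struct is heavily simplified here):

    #include <stdio.h>

    typedef void (*CleanupHook) (void *portal);

    typedef struct Portal
    {
        const char *name;
        CleanupHook cleanup;
    } Portal;

    /* If the cleanup hook is still set this late, warn and drop it
     * unexecuted instead of asserting. */
    static void
    at_cleanup_portal(Portal *portal)
    {
        if (portal->cleanup != NULL)
        {
            fprintf(stderr, "WARNING: skipping cleanup for portal \"%s\"\n",
                    portal->name);
            portal->cleanup = NULL;
        }
    }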
|
|
The sole useful effect of this function, to check that no catcache
entries have positive refcounts at transaction end, has really been
obsolete since we introduced ResourceOwners in PG 8.1. We reduced the
checks to assertions years ago, so that the function was a complete
no-op in production builds. There have been previous discussions about
removing it entirely, but consensus up to now was that it had some small
value as a cross-check for bugs in the ResourceOwner logic.
However, it now emerges that it's possible to trigger these assertions
if you hit an assert-enabled backend with SIGTERM during a call to
SearchCatCacheList, because that function temporarily increases the
refcounts of entries it's intending to add to a catcache list construct.
In a normal ERROR scenario, the extra refcounts are cleaned up by
SearchCatCacheList's PG_CATCH block; but in a FATAL exit we do a
transaction abort and exit without ever executing PG_CATCH handlers.
There's a case to be made that this is a generic hazard and we should
consider restructuring elog(FATAL) handling so that pending PG_CATCH
handlers do get run. That's pretty scary though: it could easily create
more problems than it solves. Preliminary stress testing by Andreas
Seltenreich suggests that there are not many live problems of this ilk,
so we rejected that idea.
There are more-localized ways to fix the problem; the most principled
one would be to use PG_ENSURE_ERROR_CLEANUP instead of plain PG_TRY.
But adding cycles to SearchCatCacheList isn't very appealing. We could
also weaken the assertions in AtEOXact_CatCache in some more or less
ad-hoc way, but that just makes its raison d'etre even less compelling.
In the end, the most reasonable solution seems to be to just remove
AtEOXact_CatCache altogether, on the grounds that it's not worth trying
to fix it. It hasn't found any bugs for us in many years.
Per report from Jeevan Chalke. Back-patch to all supported branches.
Discussion: https://postgr.es/m/CAM2+6=VEE30YtRQCZX7_sCFsEpoUkFBV1gZazL70fqLn8rcvBA@mail.gmail.com
|
|
The previous message didn't mention the name of the table or the
bounds. Put the table name in the primary error message and the
bounds in the detail message.
Amit Langote, changed slightly by me. Suggestions on the exact
phrasing from Tom Lane, David G. Johnston, and Dean Rasheed.
Discussion: http://postgr.es/m/CA+Tgmoae6bpwVa-1BMaVcwvCCeOoJ5B9Q9-RHWo-1gJxfPBZ5Q@mail.gmail.com
|
|
Commit 1efc7e538 did a poor job of emulating existing logic for touching
Datums that might be expanded-object pointers. It didn't check for typlen
being -1 first, which meant it could crash on fixed-length pass-by-ref
values, and probably on cstring values as well. It also didn't use
DatumGetPointer before VARATT_IS_EXTERNAL_EXPANDED, which while currently
harmless is not according to documentation nor prevailing style.
I also think the lack of any explanation as to why datumSerialize makes
these particular nonobvious choices is pretty awful, so fix that.
Per report from Jarred Ward. Back-patch to 9.6 where this code came in.
Discussion: https://postgr.es/m/[email protected]
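A sketch of the corrected test (the typedef and the varatt check are
stand-ins; the point is the order of the checks):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uintptr_t Datum;
    #define DatumGetPointer(X) ((char *) (X))

    /* stand-in for the real VARATT_IS_EXTERNAL_EXPANDED() macro */
    extern bool varatt_is_external_expanded(const void *ptr);

    /* Only varlena (typlen == -1) pass-by-ref Datums can be expanded
     * objects, and DatumGetPointer must come before the varatt check. */
    static bool
    is_expanded_object(Datum value, bool typByVal, int typLen)
    {
        return !typByVal && typLen == -1 &&
            varatt_is_external_expanded(DatumGetPointer(value));
    }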
|
|
|
|
init_htab(), with #define HASH_DEBUG, prints a bunch of hashtable
parameters. It used to also print nentries, but commit 44ca4022f changed
that to "hash_get_num_entries(hctl)", which is wrong (the parameter should
be "hashp").
Rather than correct the coding, though, let's just remove that field from
the printout. The table must be empty, since we just finished building
it, so expensively calculating the number of entries is rather pointless.
Moreover hash_get_num_entries makes assumptions (about not needing locks)
which we could do without in debugging code.
Noted by Choi Doo-Won in bug #14764. Back-patch to 9.6 where the
faulty code was introduced.
Discussion: https://postgr.es/m/[email protected]
|
|
The early buildfarm returns for commit 1e165d05f are pretty awful:
not only does Windows not return a useful error, but it looks like
a lot of Unix-ish platforms don't either. Given the number of
different errnos seen so far, guess that what's really going on is
that some newlocale() implementations fail to set errno at all.
Hence, let's try zeroing errno just before newlocale() and then
if it's still zero report as though it's ENOENT. That should cover
the Windows case too.
It's clear that we'll have to drop the regression test case, unless
we want to maintain a separate expected-file for platforms without
HAVE_LOCALE_T. But I'll leave it there awhile longer to see if this
actually improves matters or not.
Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
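The pattern, as a standalone sketch:

    #define _GNU_SOURCE             /* for newlocale() on glibc */
    #include <errno.h>
    #include <locale.h>

    /* Zero errno first; if newlocale() fails without setting it, report
     * the failure as though it were ENOENT. */
    static locale_t
    try_newlocale(const char *name)
    {
        locale_t    loc;

        errno = 0;
        loc = newlocale(LC_COLLATE_MASK | LC_CTYPE_MASK, name, (locale_t) 0);
        if (loc == (locale_t) 0 && errno == 0)
            errno = ENOENT;
        return loc;
    }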
|
|
We were just printing errno, which is certainly not gonna work on
Windows. Now, it's not entirely clear from Microsoft's documentation
whether _create_locale() adheres to standard Windows error reporting
conventions, but let's assume it does and try to map the GetLastError
result to an errno. If this turns out not to work, probably the best
thing to do will be to assume the error is always ENOENT on Windows.
This is a longstanding bug, but given the lack of previous field
complaints, I'm not excited about back-patching it.
Per report from Murtuza Zabuawala.
Discussion: https://postgr.es/m/CAKKotZS-wcDcofXDCH=sidiuajE+nqHn2CGjLLX78anyDmi3gQ@mail.gmail.com
|
|
1024 bits is considered weak these days, but OpenSSL always passes 1024 as
the key length to the tmp_dh callback. All the code to handle other key
lengths is, in fact, dead.
To remedy those issues:
* Only include hard-coded 2048-bit parameters.
* Set the parameters directly with SSL_CTX_set_tmp_dh(), without the
callback.
* The name of the file containing the DH parameters is now a GUC. This
replaces the old hardcoded "dh1024.pem" filename. (The files for other
key lengths, dh512.pem, dh2048.pem, etc. were never actually used.)
This is not a new problem, but it doesn't seem worth the risk and churn to
backport. If you care enough about the strength of the DH parameters on
old versions, you can create custom DH parameters, with as many bits as you
wish, and put them in the "dh1024.pem" file.
Per report by Nicolas Guini and Damian Quiroga. Reviewed by Michael Paquier.
Discussion: https://www.postgresql.org/message-id/CAMxBoUyjOOautVozN6ofzym828aNrDjuCcOTcCquxjwS-L2hGQ@mail.gmail.com
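A sketch of the direct-setting approach using standard OpenSSL calls (the
file name and error handling are illustrative):

    #include <stdio.h>
    #include <openssl/dh.h>
    #include <openssl/pem.h>
    #include <openssl/ssl.h>

    /* Load DH parameters from a PEM file and install them directly,
     * with no tmp_dh callback. Returns 1 on success. */
    static int
    set_dh_params(SSL_CTX *ctx, const char *path)
    {
        FILE       *fp = fopen(path, "r");
        DH         *dh;
        int         ok;

        if (fp == NULL)
            return 0;
        dh = PEM_read_DHparams(fp, NULL, NULL, NULL);
        fclose(fp);
        if (dh == NULL)
            return 0;
        ok = (SSL_CTX_set_tmp_dh(ctx, dh) == 1);
        DH_free(dh);
        return ok;
    }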
|
|
current_source requires a server restart to reflect the new value. Per
Yugo Nagata and Masahiko Sawada.
Back-patched to 9.2 and beyond.
|
|
dynamic_shared_memory_type requires a server restart to reflect the new
value. Per Yugo Nagata and Masahiko Sawada.
Back-patched to 9.4 and beyond.
|
|
max_logical_replication_workers requires a server restart to reflect the
new value. Per Yugo Nagata. Minor editing by me.
|
|
The XL code did two function calls in the else branch, about like this:
    else
        /* Postgres-XC tablespaces also include node name in path */
        sprintf(fctx->location, "pg_tblspc/%u/%s_%s", tablespaceOid,
                TABLESPACE_VERSION_DIRECTORY, PGXCNodeName);
        fctx->location = psprintf("pg_tblspc/%u/%s_%s", tablespaceOid,
                                  TABLESPACE_VERSION_DIRECTORY,
                                  PGXCNodeName);
which is wrong: only the first call is actually inside the else branch; the
second call is executed unconditionally.
In fact, the two calls attempt to construct the same location string,
but the sprintf call assumes the 'fctx->location' string is already
allocated. But it actually is not, so it's likely to cause a segfault.
Fixed by removing the sprintf() call, keeping just the psprintf() one.
Noticed thanks to GCC 6.3 complaining about incorrect indentation.
Backpatch to XL 9.5.
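The surviving branch then presumably reads:

    else
        /* Postgres-XC tablespaces also include node name in path */
        fctx->location = psprintf("pg_tblspc/%u/%s_%s", tablespaceOid,
                                  TABLESPACE_VERSION_DIRECTORY,
                                  PGXCNodeName);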
|
|
Various cases involving renaming of view columns are handled by having
make_viewdef pass down the view's current relation tupledesc to
get_query_def, which then takes care to use the column names from the
tupledesc for the output column names of the SELECT. For some reason
though, we'd missed teaching make_ruledef to do similarly when it is
printing an ON SELECT rule, even though this is exactly the same case.
The results from pg_get_ruledef would then be different and arguably wrong.
In particular, this breaks pre-v10 versions of pg_dump, which in some
situations would define views by means of emitting a CREATE RULE ... ON
SELECT command. Third-party tools might not be happy either.
In passing, clean up some crufty code in make_viewdef; we'd apparently
modernized the equivalent code in make_ruledef somewhere along the way,
and missed this copy.
Per report from Gilles Darold. Back-patch to all supported versions.
Discussion: https://postgr.es/m/[email protected]
|
|
Add error checks in some places that were calling get_opfamily_member
or get_opfamily_proc and just assuming that the call could never fail.
Also, standardize the wording for such errors in some other places.
None of these errors are expected in normal use, hence they're just
elog not ereport. But they may be handy for diagnosing omissions in
custom opclasses.
Rushabh Lathia found the oversight in RelationBuildPartitionKey();
I found the others by grepping for all callers of these functions.
Discussion: https://postgr.es/m/CAGPqQf2R9Nk8htpv0FFi+FP776EwMyGuORpc9zYkZKC8sFQE3g@mail.gmail.com
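The pattern being added, sketched here with stand-in declarations:

    #include <stdio.h>
    #include <stdlib.h>

    typedef unsigned int Oid;
    #define InvalidOid ((Oid) 0)
    #define OidIsValid(objectId) ((objectId) != InvalidOid)

    extern Oid get_opfamily_member(Oid opfamily, Oid lefttype, Oid righttype,
                                   short strategy);

    /* Never assume the lookup succeeds; on failure, report the missing
     * opfamily member along the lines of the standardized wording. */
    static Oid
    get_opfamily_member_checked(Oid opfamily, Oid lefttype, Oid righttype,
                                short strategy)
    {
        Oid         opno = get_opfamily_member(opfamily, lefttype,
                                               righttype, strategy);

        if (!OidIsValid(opno))
        {
            fprintf(stderr, "missing operator %d(%u,%u) in opfamily %u\n",
                    strategy, lefttype, righttype, opfamily);
            abort();            /* elog(ERROR, ...) in the real code */
        }
        return opno;
    }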
|
|
While I couldn't find any live bugs in commit 44ca4022f, the comments
seemed pretty far from adequate; in particular it was not made plain that
"borrowing" entries from other freelists is critical for correctness.
Try to improve the commentary. A couple of very minor code style
tweaks, as well.
Discussion: https://postgr.es/m/[email protected]
|
|
Previously, UNBOUNDED meant no lower bound when used in the FROM list,
and no upper bound when used in the TO list, which was OK for
single-column range partitioning, but problematic with multiple
columns. For example, an upper bound of (10.0, UNBOUNDED) would not be
collocated with a lower bound of (10.0, UNBOUNDED), thus making it
difficult or impossible to define contiguous multi-column range
partitions in some cases.
Fix this by using MINVALUE and MAXVALUE instead of UNBOUNDED to
represent a partition column that is unbounded below or above
respectively. This syntax removes any ambiguity, and ensures that if
one partition's lower bound equals another partition's upper bound,
then the partitions are contiguous.
Also drop the constraint prohibiting finite values after an unbounded
column, and just document the fact that any values after MINVALUE or
MAXVALUE are ignored. Previously it was necessary to repeat UNBOUNDED
multiple times, which was needlessly verbose.
Note: Forces a post-PG 10 beta2 initdb.
Report by Amul Sul, original patch by Amit Langote with some
additional hacking by me.
Discussion: https://postgr.es/m/CAAJ_b947mowpLdxL3jo3YLKngRjrq9+Ej4ymduQTfYR+8=YAYQ@mail.gmail.com
|
|
Normally, a JoinExpr would have empty "quals" only if it came from CROSS
JOIN syntax. However, it's possible to get to this state by specifying
NATURAL JOIN between two tables with no common column names, and there
might be other ways too. The code previously printed no ON clause if
"quals" was empty; that's right for CROSS JOIN but syntactically invalid
if it's some type of outer join. Fix by printing ON TRUE in that case.
This got broken by commit 2ffa740be, which stopped using NATURAL JOIN
syntax in ruleutils output due to its brittleness in the face of
column renamings. Back-patch to 9.3 where that commit appeared.
Per report from Tushar Ahuja.
Discussion: https://postgr.es/m/[email protected]
|
|
It seemed a bit silly that each caller of make_tsvector() was laboriously
special-casing the situation where no lexemes were found, when it would
be easy and much more bullet-proof to make make_tsvector() handle that.
|
|
Add missing infrastructure for this node type, notably in ruleutils.c where
its lack could demonstrably cause EXPLAIN to fail. Add outfuncs/readfuncs
support. (outfuncs support is useful today for debugging purposes. The
readfuncs support may never be needed, since at present it would only
matter for parallel query and NextValueExpr should never appear in a
parallelizable query; but it seems like a bad idea to have a primnode type
that isn't fully supported here.) Teach planner infrastructure that
NextValueExpr is a volatile, parallel-unsafe, non-leaky expression node
with cost cpu_operator_cost. Given its limited scope of usage, there
*might* be no live bug today from the lack of that knowledge, but it's
certainly going to bite us on the rear someday. Teach pg_stat_statements
about the new node type, too.
While at it, also teach cost_qual_eval() that MinMaxExpr, SQLValueFunction,
XmlExpr, and CoerceToDomain should be charged as cpu_operator_cost.
Failing to do this for SQLValueFunction was an oversight in my commit
0bb51aa96. The others are longer-standing oversights, but no time like the
present to fix them. (In principle, CoerceToDomain could have cost much
higher than this, but it doesn't presently seem worth trying to examine the
domain's constraints here.)
Modify execExprInterp.c to execute NextValueExpr as an out-of-line
function; it seems quite unlikely to me that it's worth insisting that
it be inlined in all expression eval methods. Besides, providing the
out-of-line function doesn't stop anyone from inlining if they want to.
Adjust some places where NextValueExpr support had been inserted with the
aid of a dartboard rather than keeping it in the same order as elsewhere.
Discussion: https://postgr.es/m/[email protected]
|
|
The grammar will only accept something syntactically similar to a function
call in a function-in-FROM expression. However, there are various ways
to input something that ruleutils.c won't deparse that way, potentially
leading to a view or rule that fails dump/reload. Fix by inserting a
dummy CAST around anything that isn't going to deparse as a function
(which is one of the ways to get something like that in there in the
first place).
In HEAD, also make use of the infrastructure added by this to avoid
emitting unnecessary parentheses in CREATE INDEX deparsing. I did
not change that in back branches, thinking that people might find it
to be unexpected/unnecessary behavioral change.
In HEAD, also fix incorrect logic for when to add extra parens to
partition key expressions. Somebody apparently thought they could
get away with simpler logic than pg_get_indexdef_worker has, but
they were wrong --- a counterexample is PARTITION BY LIST ((a[1])).
Ignoring the prettyprint flag for partition expressions isn't exactly
a nice solution anyway.
This has been broken all along, so back-patch to all supported branches.
Discussion: https://postgr.es/m/[email protected]
|
|
This merge includes all commits up to bc2d716ad09fceeb391c755f78c256ddac9d3b9f
of PG 10.
|
|
Further investigation shows that ruleutils isn't quite up to speed either
for cases where we have a domain-over-array: it needs to be prepared to
look past a CoerceToDomain at the top level of field and element
assignments, else it decompiles them incorrectly. Potentially this would
result in failure to dump/reload a rule, if it looked like the one in the
new test case. (I also added a test for EXPLAIN; that output isn't broken,
but clearly we need more test coverage here.)
Like commit b1cb32fb6, this bug is reachable in cases we already support,
so back-patch all the way.
|
|
If the number of tuples in the heap exceeds approximately INT_MAX/2,
this loop's calculation "2*i+1" could overflow, resulting in a crash.
Fix it by using unsigned int rather than int for the relevant local
variables; that shouldn't cost anything extra on any popular hardware.
Per bug #14722 from Sergey Koposov.
Original patch by Sergey Koposov, modified by me per a suggestion
from Heikki Linnakangas to use unsigned int not int64.
Back-patch to 9.4, where tuplesort.c grew the ability to sort as many
as INT_MAX tuples in-memory (commit 263865a48).
Discussion: https://postgr.es/m/[email protected]
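The essence of the fix, as a standalone demonstration:

    #include <limits.h>
    #include <stdio.h>

    /* Computing heap child indexes in unsigned int keeps 2*i+1
     * well-defined for every i an in-memory sort can produce. */
    static unsigned int
    left_child(unsigned int i)
    {
        return 2 * i + 1;
    }

    int
    main(void)
    {
        unsigned int i = (unsigned int) (INT_MAX / 2) + 1;

        /* With signed int, 2*i+1 here would overflow (undefined behavior). */
        printf("left_child(%u) = %u\n", i, left_child(i));
        return 0;
    }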
|
|
The storm_catalog schema is supposed to contain the same catalogs and
views as pg_catalog, but filtered to the current database. The use case
for this is multi-tenant systems, which was a StormDB feature.
But on XL this is mostly irrelevant, and the schema was not populated
since commit 8096e3edf17b260de15472eb04567d1beec1e3e6 which disabled
this part of initdb.
So instead of fixing the regression failures in misc_sanity caused by
this (initdb-time schema with no pinned objects), just rip all the
remaining bits out, including the pgxc_catalog_remap GUC etc.
This also removes the setup_storm() call disabled by 8096e3edf1, as the
function got removed since then.
|
|
The simple calculations done to estimate the size of the output buffers
for ucnv_fromUChars() and ucnv_toUChars() could overflow int32_t for
large strings. To avoid that, go the long way and run the function
first without an output buffer to get the correct output buffer size
requirement.
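A sketch of the preflight pattern with ICU's converter API (memory handling
simplified):

    #include <stdlib.h>
    #include <unicode/ucnv.h>

    /* Ask ICU for the required length with no output buffer, then
     * allocate exactly that (plus terminator) and convert for real. */
    static UChar *
    to_uchars(UConverter *conv, const char *src, int32_t srclen, int32_t *ulen)
    {
        UErrorCode  status = U_ZERO_ERROR;
        UChar      *buf;
        int32_t     need;

        need = ucnv_toUChars(conv, NULL, 0, src, srclen, &status);
        if (status != U_BUFFER_OVERFLOW_ERROR)
            return NULL;
        status = U_ZERO_ERROR;
        buf = malloc((need + 1) * sizeof(UChar));
        if (buf == NULL)
            return NULL;
        *ulen = ucnv_toUChars(conv, buf, need + 1, src, srclen, &status);
        return U_SUCCESS(status) ? buf : NULL;
    }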
|
|
ICU does not support "collate" and "ctype" being different, so the
collctype catalog column is ignored. But for catalog neatness, ensure
that they are the same.
|
|
Traditionally, "pg_ctl start -w" has waited for the server to become
ready to accept connections by attempting a connection once per second.
That has the major problem that connection issues (for instance, a
kernel packet filter blocking traffic) can't be reliably told apart
from server startup issues, and the minor problem that if server startup
isn't quick, we accumulate "the database system is starting up" spam
in the server log. We've hacked around many of the possible connection
issues, but it resulted in ugly and complicated code in pg_ctl.c.
In commit c61559ec3, I changed the probe rate to every tenth of a second.
That prompted Jeff Janes to complain that the log-spam problem had become
much worse. In the ensuing discussion, Andres Freund pointed out that
we could dispense with connection attempts altogether if the postmaster
were changed to report its status in postmaster.pid, which "pg_ctl start"
already relies on being able to read. This patch implements that, teaching
postmaster.c to report a status string into the pidfile at the same
state-change points already identified as being of interest for systemd
status reporting (cf commit 7d17e683f). pg_ctl no longer needs to link
with libpq at all; all its functions now depend on reading server files.
In support of this, teach AddToDataDirLockFile() to allow addition of
postmaster.pid lines in not-necessarily-sequential order. This is needed
on Windows where the SHMEM_KEY line will never be written at all. We still
have the restriction that we don't want to truncate the pidfile; document
the reasons for that a bit better.
Also, fix the pg_ctl TAP tests so they'll notice if "start -w" mode
is broken --- before, they'd just wait out the sixty seconds until
the loop gives up, and then report success anyway. (Yes, I found that
out the hard way.)
While at it, arrange for pg_ctl to not need to #include miscadmin.h;
as a rather low-level backend header, requiring that to be compilable
client-side is pretty dubious. This requires moving the #define's
associated with the pidfile into a new header file, and moving
PG_BACKEND_VERSIONSTR someplace else. For lack of a clearly better
"someplace else", I put it into port.h, beside the declaration of
find_other_exec(), since most users of that macro are passing the value to
find_other_exec(). (initdb still depends on miscadmin.h, but at least
pg_ctl and pg_upgrade no longer do.)
In passing, fix main.c so that PG_BACKEND_VERSIONSTR actually defines the
output of "postgres -V", which remarkably it had never done before.
Discussion: https://postgr.es/m/CAMkU=1xJW8e+CTotojOMBd-yzUvD0e_JZu2xHo=MnuZ4__m7Pg@mail.gmail.com
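A rough sketch of the new wait logic from pg_ctl's side (the status line's
position and the "ready" token are assumptions for illustration):

    #include <stdio.h>
    #include <string.h>

    /* Read postmaster.pid and report whether its status line (the last
     * line, in this sketch) says the server is ready. */
    static int
    postmaster_is_ready(const char *pidfile_path)
    {
        char        line[128];
        char        last[128] = "";
        FILE       *fp = fopen(pidfile_path, "r");

        if (fp == NULL)
            return 0;
        while (fgets(line, sizeof(line), fp) != NULL)
            strcpy(last, line);
        fclose(fp);
        return strncmp(last, "ready", 5) == 0;
    }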
|
|
This commit merges the PG10 branch up to commit
2710ccd782d0308a3fa1ab193531183148e9b626. Regression tests show no noteworthy
additional failures. This merge includes major pgindent work done with the
newer version of pgindent.
|
|
Fix its header comment, which described the old behavior of the <N>
phrase distance operator; we missed updating that in commit 028350f61.
Also, reset errno before strtol() call, to defend against the possibility
that it was already ERANGE at entry. (The lack of complaints says that
it generally isn't, but this is at least a latent bug.) Very minor
stylistic improvements as well.
Victor Drobny noted the obsolete comment, I noted the errno issue.
Back-patch to 9.6 where this code was added, just in case the errno
issue is a live bug in some cases.
Discussion: https://postgr.es/m/[email protected]
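The errno reset, as a standalone sketch:

    #include <errno.h>
    #include <stdlib.h>

    /* Clear errno before strtol() so a stale ERANGE from some earlier
     * call cannot be mistaken for an overflow in this one. */
    static long
    parse_long_checked(const char *str, char **endptr, int *overflow)
    {
        long        l;

        errno = 0;
        l = strtol(str, endptr, 10);
        *overflow = (errno == ERANGE);
        return l;
    }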
|
|
Author: Masahiko Sawada
|
|
Reported-by: Tom Lane
|
|
Callers of icu_to_uchar() neglected to pfree the result string when done
with it. This results in catastrophic memory leaks in varstr_cmp(),
because of our prevailing assumption that btree comparison functions don't
leak memory. For safety, make all the call sites clean up leaks, though
I suspect that we could get away without it in formatting.c. I audited
callers of icu_from_uchar() as well, but found no places that seemed to
have a comparable issue.
Add function API specifications for icu_to_uchar() and icu_from_uchar();
the lack of any thought-through specification is perhaps not unrelated
to the existence of this bug in the first place. Fix icu_to_uchar()
to guarantee a nul-terminated result; although no existing caller appears
to care, the fact that it would have been nul-terminated except in
extreme corner cases seems ideally designed to bite someone on the rear
someday. Fix ucnv_fromUChars() destCapacity argument --- in the worst
case, that could perhaps have led to a non-nul-terminated result, too.
Fix icu_from_uchar() to have a more reasonable definition of the function
result --- no callers are actually paying attention, so this isn't a live
bug, but it's certainly sloppily designed. Const-ify icu_from_uchar()'s
input string for consistency.
That is not the end of what needs to be done to these functions, but
it's as much as I have the patience for right now.
Discussion: https://postgr.es/m/[email protected]
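A sketch of the fixed comparison path (declarations follow the PG function
API named above):

    #include <stddef.h>
    #include <unicode/ucol.h>

    extern int32_t icu_to_uchar(UChar **buff_uchar, const char *buff,
                                size_t nbytes);
    extern void pfree(void *pointer);

    /* Convert both strings, compare, and free the buffers: btree
     * comparison functions are assumed not to leak memory. */
    static int
    icu_compare(UCollator *collator,
                const char *s1, size_t n1, const char *s2, size_t n2)
    {
        UChar      *u1;
        UChar      *u2;
        int32_t     ulen1 = icu_to_uchar(&u1, s1, n1);
        int32_t     ulen2 = icu_to_uchar(&u2, s2, n2);
        int         result = ucol_strcoll(collator, u1, ulen1, u2, ulen2);

        pfree(u1);
        pfree(u2);
        return result;
    }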
|
|
These will no longer get re-split by pgindent runs, so it's worth cleaning
them up now.
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
|
|
Don't move parenthesized lines to the left, even if that means they
flow past the right margin.
By default, BSD indent lines up statement continuation lines that are
within parentheses so that they start just to the right of the preceding
left parenthesis. However, traditionally, if that resulted in the
continuation line extending to the right of the desired right margin,
then indent would push it left just far enough to not overrun the margin,
if it could do so without making the continuation line start to the left of
the current statement indent. That makes for a weird mix of indentations
unless one has been completely rigid about never violating the 80-column
limit.
This behavior has been pretty universally panned by Postgres developers.
Hence, disable it with indent's new -lpl switch, so that parenthesized
lines are always lined up with the preceding left paren.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
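An illustration with hypothetical names (the same call, formatted without
and then with -lpl):

    extern long compute_something(long a, long b, long c);

    static void
    demo(long first_argument, long second_argument, long third_argument)
    {
        long        x;
        long        y;

        /* without -lpl: the continuation is pushed left of the paren to
         * respect the 80-column margin */
        x = compute_something(first_argument, second_argument,
                third_argument);

        /* with -lpl: the continuation lines up just right of the open
         * paren, even when that overruns the margin */
        y = compute_something(first_argument, second_argument,
                              third_argument);
        (void) x;
        (void) y;
    }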
|
|
Change pg_bsd_indent to follow upstream rules for placement of comments
to the right of code, and remove pgindent hack that caused comments
following #endif to not obey the general rule.
Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using
the published version of pg_bsd_indent, but a hacked-up version that
tried to minimize the amount of movement of comments to the right of
code. The situation of interest is where such a comment has to be
moved to the right of its default placement at column 33 because there's
code there. BSD indent has always moved right in units of tab stops
in such cases --- but in the previous incarnation, indent was working
in 8-space tab stops, while now it knows we use 4-space tabs. So the
net result is that in about half the cases, such comments are placed
one tab stop left of before. This is better all around: it leaves
more room on the line for comment text, and it means that in such
cases the comment uniformly starts at the next 4-space tab stop after
the code, rather than sometimes one and sometimes two tabs after.
Also, ensure that comments following #endif are indented the same
as comments following other preprocessor commands such as #else.
That inconsistency turns out to have been self-inflicted damage
from a poorly-thought-through post-indent "fixup" in pgindent.
This patch is much less interesting than the first round of indent
changes, but also bulkier, so I thought it best to separate the effects.
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
|
|
The new indent version includes numerous fixes thanks to Piotr Stefaniak.
The main changes visible in this commit are:
* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts,
sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as
well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed
with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less
than the expected column 33.
On the less good side, it now tends to put more whitespace around typedef
names that are not listed in typedefs.list. This might encourage us to
put more effort into typedef name collection; it's not really a bug in
indent itself.
There are more changes coming after this round, having to do with comment
indentation and alignment of lines appearing within parentheses. I wanted
to limit the size of the diffs to something that could be reviewed without
one's eyes completely glazing over, so it seemed better to split up the
changes as much as practical.
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
|
|
This is just to have a clean basis for comparison with the results of
the new version (which will indeed end up reverting some of these
changes...)
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
|
|
load_libraries(), which processes the various xxx_preload_libraries GUCs,
was parsing them using SplitIdentifierString() which isn't really
appropriate for values that could be path names: it downcases unquoted
text, and it doesn't allow embedded whitespace unless quoted.
Use SplitDirectoriesString() instead. That also allows us to simplify
load_libraries() a bit, since canonicalize_path() is now done for it.
While this definitely seems like a bug fix, it has the potential to
break configuration settings that accidentally worked before because
of the downcasing behavior. Also, there's an easy workaround for the
bug, namely to double-quote troublesome text. Hence, no back-patch.
QL Zhuo, tweaked a bit by me
Discussion: https://postgr.es/m/CAB-oJtxHVDc3H+Km3CjB9mY1VDzuyaVH_ZYSz7iXcRqCtb93Ew@mail.gmail.com
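A sketch of the fix (declarations are for the backend functions named
above):

    #include <stdbool.h>

    typedef struct List List;
    extern bool SplitDirectoriesString(char *rawstring, char separator,
                                       List **namelist);
    extern char *pstrdup(const char *in);

    /* Parse the GUC with the directory-aware splitter, which preserves
     * case and tolerates embedded spaces in unquoted path names. */
    static bool
    parse_preload_libraries(const char *gucvalue, List **libraries)
    {
        char       *rawstring = pstrdup(gucvalue); /* splitter scribbles on it */

        return SplitDirectoriesString(rawstring, ',', libraries);
    }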
|
|
Windows uses a separate code path for libc locales. The code previously
ended up there also if an ICU collation should be used, leading to a
crash.
Reported-by: Ashutosh Sharma <[email protected]>
|
|
read/out functions for this node type were missing, so implement those
functions. In addition, the FQS code path was not recognizing this new node
type correctly. Fix that too.
The ruleutils code was also missing the ability to deparse this expression.
For now we just
emit a DEFAULT clause while deparsing NextValueExpr and assume that the remote
node will do the necessary lookups to find the correct sequence and invoke
nextval() on the sequence.
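A sketch of the deparse case (the StringInfo type is reduced to a stand-in
declaration):

    typedef struct StringInfoData StringInfoData;
    extern void appendStringInfoString(StringInfoData *str, const char *s);

    /* Deparse NextValueExpr as a bare DEFAULT; the remote node resolves
     * the backing sequence and invokes nextval() itself. */
    static void
    deparse_nextvalueexpr(StringInfoData *buf)
    {
        appendStringInfoString(buf, "DEFAULT");
    }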
|