path: root/src/backend/tcop/postgres.c
Age  Commit message  Author
2018-05-18  Fix post-cherry-pick problems.  (Pavan Deolasee)
2018-05-18  Track clearly whether to run a remote transaction in autocommit or a block  (Pavan Deolasee)
Chi Gao and Hengbing Wang reported certain issues around transaction handling and demonstrated via xlogdump how certain transactions were getting marked committed/aborted repeatedly on a datanode. When an already committed transaction is attempted to be aborted again, it results in a PANIC. Upon investigation, this uncovered a very serious yet long-standing bug in transaction handling.

If the client is running in autocommit mode, we try to avoid starting a transaction block on the datanode side if only one datanode is going to be involved in the transaction. This is an optimisation to speed up short queries touching only a single node. But when the query rewriter transforms a single statement into multiple statements, we would still (and incorrectly) run each statement in autocommit mode on the datanode. This can cause inconsistencies when one statement commits but the next statement aborts. And it may also lead to the PANIC situations if we continue to use the same global transaction identifier for the statements.

This can also happen when the user invokes a user-defined function. If the function has multiple statements, each statement will run in autocommit mode, if it's FQSed, thus again creating inconsistency if a following statement in the function fails.

We now have a more elaborate mechanism to tackle autocommit and transaction block needs. The special casing for force_autocommit is now removed, thus making it more predictable. We also have specific conditions to check to ensure that we don't mix up autocommit and transaction-block modes for the same global xid. Finally, if the query rewriter transforms a single statement into multiple statements, we run those statements in a transaction block. Together these changes should help us fix the problems.
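A minimal sketch of the decision described above. This is hypothetical code, not the actual Postgres-XL implementation; the helper name and parameters are made up for illustration. The idea is simply that a statement may stay in autocommit mode on the datanode only if nothing else will run under the same global xid.

    /* Hypothetical sketch -- not the real Postgres-XL code. */
    static bool
    NeedRemoteTransactionBlock(List *rewritten, bool in_transaction_block)
    {
        if (in_transaction_block)
            return true;    /* explicit transaction block on the coordinator */
        if (list_length(rewritten) > 1)
            return true;    /* rewriter expanded one statement into several */
        return false;       /* single statement may run in autocommit mode */
    }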
2017-07-09  Remove storm_catalog schema  (Tomas Vondra)
The storm_catalog schema is supposed to contain the same catalogs and views as pg_catalog, but filtered to the current database. The use case for this is multi-tenant systems, which was a StormDB feature. But on XL this is mostly irrelevant, and the schema was not populated since commit 8096e3edf17b260de15472eb04567d1beec1e3e6 which disabled this part of initdb. So instead of fixing the regression failures in misc_sanity caused by this (initdb-time schema with no pinned objects), just rip all the remaining bits out, including the pgxc_catalog_remap GUC etc. This also removes the setup_storm() call disabled by 8096e3edf1, as the function got removed since then.
2017-06-27  Merge PG10 master branch into xl10devel  (Pavan Deolasee)
This commit merges the PG10 branch up to commit 2710ccd782d0308a3fa1ab193531183148e9b626. Regression tests show no noteworthy additional failures. This merge includes major pgindent work done with the newer version of pgindent.
2017-06-21  Phase 3 of pgindent updates.  (Tom Lane)
Don't move parenthesized lines to the left, even if that means they flow past the right margin.

By default, BSD indent lines up statement continuation lines that are within parentheses so that they start just to the right of the preceding left parenthesis. However, traditionally, if that resulted in the continuation line extending to the right of the desired right margin, then indent would push it left just far enough to not overrun the margin, if it could do so without making the continuation line start to the left of the current statement indent. That makes for a weird mix of indentations unless one has been completely rigid about never violating the 80-column limit.

This behavior has been pretty universally panned by Postgres developers. Hence, disable it with indent's new -lpl switch, so that parenthesized lines are always lined up with the preceding left paren.

This patch is much less interesting than the first round of indent changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
2017-06-21  Phase 2 of pgindent updates.  (Tom Lane)
Change pg_bsd_indent to follow upstream rules for placement of comments to the right of code, and remove pgindent hack that caused comments following #endif to not obey the general rule.

Commit e3860ffa4dd0dad0dd9eea4be9cc1412373a8c89 wasn't actually using the published version of pg_bsd_indent, but a hacked-up version that tried to minimize the amount of movement of comments to the right of code. The situation of interest is where such a comment has to be moved to the right of its default placement at column 33 because there's code there. BSD indent has always moved right in units of tab stops in such cases --- but in the previous incarnation, indent was working in 8-space tab stops, while now it knows we use 4-space tabs. So the net result is that in about half the cases, such comments are placed one tab stop left of before. This is better all around: it leaves more room on the line for comment text, and it means that in such cases the comment uniformly starts at the next 4-space tab stop after the code, rather than sometimes one and sometimes two tabs after.

Also, ensure that comments following #endif are indented the same as comments following other preprocessor commands such as #else. That inconsistency turns out to have been self-inflicted damage from a poorly-thought-through post-indent "fixup" in pgindent.

This patch is much less interesting than the first round of indent changes, but also bulkier, so I thought it best to separate the effects.

Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
2017-06-21  Restart logical replication launcher when killed  (Peter Eisentraut)
Author: Yugo Nagata <[email protected]>
2017-06-21  Initial pgindent run with pg_bsd_indent version 2.0.  (Tom Lane)
The new indent version includes numerous fixes thanks to Piotr Stefaniak. The main changes visible in this commit are:

* Nicer formatting of function-pointer declarations.
* No longer unexpectedly removes spaces in expressions using casts, sizeof, or offsetof.
* No longer wants to add a space in "struct structname *varname", as well as some similar cases for const- or volatile-qualified pointers.
* Declarations using PG_USED_FOR_ASSERTS_ONLY are formatted more nicely.
* Fixes bug where comments following declarations were sometimes placed with no space separating them from the code.
* Fixes some odd decisions for comments following case labels.
* Fixes some cases where comments following code were indented to less than the expected column 33.

On the less good side, it now tends to put more whitespace around typedef names that are not listed in typedefs.list. This might encourage us to put more effort into typedef name collection; it's not really a bug in indent itself.

There are more changes coming after this round, having to do with comment indentation and alignment of lines appearing within parentheses. I wanted to limit the size of the diffs to something that could be reviewed without one's eyes completely glazing over, so it seemed better to split up the changes as much as practical.

Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
2017-06-15  Merge 'remotes/PGSQL/master' into xl10devel  (Pavan Deolasee)
Merge upstream master branch up to e800656d9a9b40b2f55afabe76354ab6d93353b3. Code compiles and regression runs OK (with lots and lots of failures, though).
2017-06-15  Take into account the fact that pg_parse_query() returns a list of RawStmt  (Pavan Deolasee)
Starting with PG 10, pg_parse_query() returns a list of RawStmt nodes rather than a list of bare parse trees. The actual parse tree is now available as RawStmt->stmt. So we must look in the correct place to check whether the supplied query is one of the special statements, such as a VACUUM, CLUSTER or CREATE INDEX statement, that need special handling.
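A minimal sketch of the PG 10 pattern described above (variable names are illustrative): the parse tree now hangs off RawStmt->stmt, so the check for the special statements has to look one level down.

    ListCell   *lc;

    foreach(lc, parsetree_list)
    {
        RawStmt    *rawstmt = lfirst_node(RawStmt, lc);
        Node       *parsetree = rawstmt->stmt;  /* the actual statement node */

        if (IsA(parsetree, VacuumStmt) ||
            IsA(parsetree, ClusterStmt) ||
            IsA(parsetree, IndexStmt))
        {
            /* statements that need special handling */
        }
    }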
2017-06-14Revert "Collect and return query substrings corresponding to each SQL statement"Pavan Deolasee
This reverts commit 455ff923454e78d80b77639a381db9b05c776577. Core Postgres has now added support for extracting the query string for each command in a multi-command SQL string, so we can use that facility instead of cooking up something on our own.
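A hedged sketch of how the core facility mentioned above can be used (the helper function is illustrative, not an actual API): each RawStmt now carries stmt_location and stmt_len, so the text of a single statement can be recovered from the full query string.

    /* Illustrative helper, assuming PG 10's stmt_location/stmt_len fields. */
    static char *
    single_statement_text(const char *query_string, RawStmt *rawstmt)
    {
        if (rawstmt->stmt_location < 0)
            return pstrdup(query_string);       /* location unknown: whole string */
        if (rawstmt->stmt_len == 0)
            return pstrdup(query_string + rawstmt->stmt_location);  /* rest of string */
        return pnstrdup(query_string + rawstmt->stmt_location, rawstmt->stmt_len);
    }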
2017-06-14  Merge from PG master up to d5cb3bab564e0927ffac7c8729eacf181a12dd40  (Pavan Deolasee)
This is the result of the "git merge remotes/PGSQL/master" upto the said commit point. We have done some basic analysis, fixed compilation problems etc, but bulk of the logical problems in conflict resolution etc will be handled by subsequent commits.
2017-06-13  Re-run pgindent.  (Tom Lane)
This is just to have a clean base state for testing of Piotr Stefaniak's latest version of FreeBSD indent. I fixed up a couple of places where pgindent would have changed format not-nicely. perltidy not included. Discussion: https://postgr.es/m/VI1PR03MB119959F4B65F000CA7CD9F6BF2CC0@VI1PR03MB1199.eurprd03.prod.outlook.com
2017-06-08  Use standard interrupt handling in logical replication launcher.  (Andres Freund)
Previously the exit handling was only able to exit from within the main loop, and not from within the backend code it calls. Fix that by using the standard die() SIGTERM handler, and adding the necessary CHECK_FOR_INTERRUPTS() call. This requires adding yet another process-type-specific branch to ProcessInterrupts(), which hints that we probably should generalize that handling. But that's work for another day. Author: Petr Jelinek Reviewed-By: Andres Freund Discussion: https://postgr.es/m/[email protected]
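The standard pattern referred to above, in rough outline (the launcher-specific wiring is simplified here; only the signal handler and the interrupt check are the point):

    pqsignal(SIGTERM, die);             /* let ProcessInterrupts() drive the exit */
    BackgroundWorkerUnblockSignals();

    for (;;)
    {
        CHECK_FOR_INTERRUPTS();         /* honours a pending SIGTERM */
        /* ... perform one unit of work, then wait on the process latch ... */
    }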
2017-06-06  Unify SIGHUP handling between normal and walsender backends.  (Andres Freund)
Because walsender and normal backends share the same main loop it's problematic to have two different flag variables, set in signal handlers, indicating a pending configuration reload. Only certain walsender commands reach code paths checking for the variable (START_[LOGICAL_]REPLICATION, CREATE_REPLICATION_SLOT ... LOGICAL, notably not base backups). This is a bug present since the introduction of walsender, but has gotten worse in releases since then which allow walsender to do more. A later patch, not slated for v10, will similarly unify SIGHUP handling in other types of processes as well. Author: Petr Jelinek, Andres Freund Reviewed-By: Michael Paquier Discussion: https://postgr.es/m/[email protected] Backpatch: 9.2-, bug is present since 9.0
2017-06-02  Fix signal handling in logical replication workers  (Peter Eisentraut)
The logical replication worker processes now use the normal die() handler for SIGTERM and CHECK_FOR_INTERRUPTS() instead of custom code. One problem before was that the apply worker would not exit promptly when a subscription was dropped, which could lead to deadlocks. Author: Petr Jelinek <[email protected]> Reported-by: Masahiko Sawada <[email protected]>
2017-04-10  Improve castNode notation by introducing list-extraction-specific variants.  (Tom Lane)
This extends the castNode() notation introduced by commit 5bcab1114 to provide, in one step, extraction of a list cell's pointer and coercion to a concrete node type. For example, "lfirst_node(Foo, lc)" is the same as "castNode(Foo, lfirst(lc))". Almost half of the uses of castNode that have appeared so far include a list extraction call, so this is pretty widely useful, and it saves a few more keystrokes compared to the old way. As with the previous patch, back-patch the addition of these macros to pg_list.h, so that the notation will be available when back-patching. Patch by me, after an idea of Andrew Gierth's. Discussion: https://postgr.es/m/[email protected]
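The two notations from the message above, side by side ("Foo" is the commit's own placeholder type; "lc" is a ListCell iterating over a List of Foo nodes):

    Foo        *item;

    item = castNode(Foo, lfirst(lc));   /* extraction plus checked cast, two steps */
    item = lfirst_node(Foo, lc);        /* equivalent one-step form */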
2017-04-06  Remove dead code and fix comments in fast-path function handling.  (Heikki Linnakangas)
HandleFunctionRequest() is no longer responsible for reading the protocol message from the client, since commit 2b3a8b20c2. Fix the outdated comments. HandleFunctionRequest() now always returns 0, because the code that used to return EOF was moved in 2b3a8b20c2. Therefore, the caller no longer needs to check the return value. Reported by Andres Freund. Backpatch to all supported versions, even though this doesn't have any user-visible effect, to make backporting future patches in this area easier. Discussion: https://www.postgresql.org/message-id/[email protected]
2017-04-01  Fix two undocumented parameters to functions from ENR patch.  (Kevin Grittner)
On ProcessUtility, document the parameter to match the others. On CreateCachedPlan, drop the queryEnv parameter. It was not referenced within the function, and had been added on the assumption that with some unknown future usage of QueryEnvironment it might be useful to do something there. We have avoided other "just in case" implementations of unused parameters, so drop it here. Per gripe from Tom Lane
2017-04-01  Add infrastructure to support EphemeralNamedRelation references.  (Kevin Grittner)
A QueryEnvironment concept is added, which allows new types of objects to be passed into queries from parsing on through execution. At this point, the only thing implemented is a collection of EphemeralNamedRelation objects -- relations which can be referenced by name in queries, but do not exist in the catalogs. The only type of ENR implemented is NamedTuplestore, but provision is made to add more types fairly easily. An ENR can carry its own TupleDesc or reference a relation in the catalogs by relid. Although these features can be used without SPI, convenience functions are added to SPI so that ENRs can easily be used by code run through SPI. The initial use of all this is going to be transition tables in AFTER triggers, but that will be added to each PL as a separate commit. An incidental effect of this patch is to produce a more informative error message if an attempt is made to modify the contents of a CTE from a referencing DML statement. No tests previously covered that possibility, so one is added. Kevin Grittner and Thomas Munro Reviewed by Heikki Linnakangas, David Fetter, and Thomas Munro with valuable comments and suggestions from many others
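A hedged sketch of the SPI convenience path mentioned above (the relation name, tuplestore, and tuple descriptor are assumptions for illustration): a tuplestore is wrapped in an EphemeralNamedRelation and registered so that queries run through SPI can refer to it by name.

    EphemeralNamedRelation enr = palloc0(sizeof(EphemeralNamedRelationData));

    enr->md.name = pstrdup("transition_rows");  /* name used in queries; illustrative */
    enr->md.reliddesc = InvalidOid;             /* no catalog relation backs this */
    enr->md.tupdesc = tupdesc;                  /* TupleDesc describing the rows */
    enr->md.enrtype = ENR_NAMED_TUPLESTORE;
    enr->md.enrtuples = tuplestore_tuple_count(tupstore);
    enr->reldata = tupstore;                    /* Tuplestorestate holding the data */

    if (SPI_register_relation(enr) != SPI_OK_REL_REGISTER)
        elog(ERROR, "could not register ephemeral named relation");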
2017-03-29  Cast result of copyObject() to correct type  (Peter Eisentraut)
copyObject() is declared to return void *, which allows easily assigning the result independent of the input, but it loses all type checking. If the compiler supports typeof or something similar, cast the result to the input type. This creates a greater amount of type safety. In some cases, where the result is assigned to a generic type such as Node * or Expr *, new casts are now necessary, but in general casts are now unnecessary in the normal case and indicate that something unusual is happening. Reviewed-by: Mark Dilger <[email protected]>
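The approximate shape of the change (simplified sketch; the real macro lives in the nodes headers): when the compiler provides typeof(), the result is cast back to the argument's type, so assigning it to an unrelated pointer type now draws a warning.

    #ifdef HAVE_TYPEOF
    #define copyObject(obj) ((typeof(obj)) copyObjectImpl(obj))
    #else
    #define copyObject(obj) copyObjectImpl(obj)
    #endif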
2017-03-23  Allow for parallel execution whenever ExecutorRun() is done only once.  (Robert Haas)
Previously, it was unsafe to execute a plan in parallel if ExecutorRun() might be called with a non-zero row count. However, it's quite easy to fix things up so that we can support that case, provided that it is known that we will never call ExecutorRun() a second time for the same QueryDesc. Add infrastructure to signal this, and cross-checks to make sure that a caller who claims this is true doesn't later renege.

While that pattern never happens with queries received directly from a client -- there's no way to know whether multiple Execute messages will be sent unless the first one requests all the rows -- it's pretty common for queries originating from procedural languages, which often limit the result to a single tuple or to a user-specified number of tuples.

This commit doesn't actually enable parallelism in any additional cases, because currently none of the places that would be able to benefit from this infrastructure pass CURSOR_OPT_PARALLEL_OK in the first place, but it makes it much more palatable to pass CURSOR_OPT_PARALLEL_OK in places where we currently don't, because it eliminates some cases where we'd end up having to run the parallel plan serially.

Patch by me, based on some ideas from Rafia Sabih and corrected by Rafia Sabih based on feedback from Dilip Kumar and myself.

Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
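As a caller sees it (a sketch; the parameter values are illustrative), the promise is made through an extra ExecutorRun() argument: passing true for execute_once asserts that this is the only ExecutorRun() call for the QueryDesc, which is what makes a non-zero row count safe for a parallel plan.

    ExecutorRun(queryDesc,
                ForwardScanDirection,
                (uint64) 1,     /* fetch just one row ... */
                true);          /* ... and promise not to call ExecutorRun() again */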
2017-03-23  Logical replication support for initial data copy  (Peter Eisentraut)
Add functionality for a new subscription to copy the initial data in the tables and then sync with the ongoing apply process.

For the copying, add a new internal COPY option to have the COPY source data provided by a callback function. The initial data copy works on the subscriber by receiving COPY data from the publisher and then providing it locally into a COPY that writes to the destination table.

A WAL receiver can now execute full SQL commands. This is used here to obtain information about tables and publications.

Several new options were added to CREATE and ALTER SUBSCRIPTION to control whether and when initial table syncing happens.

Change pg_dump option --no-create-subscription-slots to --no-subscription-connect and use the new CREATE SUBSCRIPTION ... NOCONNECT option for that.

Author: Petr Jelinek <[email protected]>
Tested-by: Erik Rijkers <[email protected]>
2017-01-27  Use castNode() in a bunch of statement-list-related code.  (Tom Lane)
When I wrote commit ab1f0c822, I really missed the castNode() macro that Peter E. had proposed shortly before. This back-fills the uses I would have put it to. It's probably not all that significant, but there are more assertions here than there were before, and conceivably they will help catch any bugs associated with those representation changes. I left behind a number of usages like "(Query *) copyObject(query_var)". Those could have been converted as well, but Peter has proposed another notational improvement that would handle copyObject cases automatically, so I let that be for now.
2017-01-27  Add castNode(type, ptr) for safe casting between NodeTag based types.  (Andres Freund)
The new function allows casting from one NodeTag-based type to another, while asserting that the conversion is valid. This replaces the common pattern of doing a cast and an Assert(IsA(ptr, type)) close by.

As this seems likely to be used pervasively, we decided to backpatch the addition of this macro. Otherwise backpatched fixes are more likely not to work on back-branches. On branches before 9.6, where we do not yet rely on inline functions being available, the type assertion is only performed if PG_USE_INLINE support is detected. The cast obviously is performed regardless.

For the benefit of verifying the macro compiles in the back-branches, this commit contains a single use of the new macro. On master, a somewhat larger conversion will be committed separately.

Author: Peter Eisentraut and Andres Freund Reviewed-By: Tom Lane Discussion: https://postgr.es/m/[email protected] Backpatch: 9.2-
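The before/after pattern described above, sketched with an illustrative node type and pointer:

    /* "ptr" is some Node * expected to hold a FuncExpr (illustrative). */
    FuncExpr   *expr;

    /* Before: explicit cast with a nearby assertion. */
    Assert(IsA(ptr, FuncExpr));
    expr = (FuncExpr *) ptr;

    /* After: one step; the same check is asserted inside castNode(). */
    expr = castNode(FuncExpr, ptr);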
2017-01-20  Avoid core dump for empty prepared statement in an aborted transaction.  (Tom Lane)
Brown-paper-bag bug in commit ab1f0c822: the old code here coped with null CachedPlanSource.raw_parse_tree, the new code not so much. Per report from Dave Cramer. No regression test, because our core testing infrastructure doesn't provide any easy way to exercise this path. Fortunately, the JDBC crew test it regularly. Discussion: https://postgr.es/m/CADK3HH+Ug3xCysKqw_dZOnaNnytZ1Rh5yP05hjO-e4NoyRxVvA@mail.gmail.com
2017-01-14  Change representation of statement lists, and add statement location info.  (Tom Lane)
This patch makes several changes that improve the consistency of representation of lists of statements. It's always been the case that the output of parse analysis is a list of Query nodes, whatever the types of the individual statements in the list. This patch brings similar consistency to the outputs of raw parsing and planning steps:

* The output of raw parsing is now always a list of RawStmt nodes; the statement-type-dependent nodes are one level down from that.

* The output of pg_plan_queries() is now always a list of PlannedStmt nodes, even for utility statements. In the case of a utility statement, "planning" just consists of wrapping a CMD_UTILITY PlannedStmt around the utility node. This list representation is now used in Portal and CachedPlan plan lists, replacing the former convention of intermixing PlannedStmts with bare utility-statement nodes.

Now, every list of statements has a consistent head-node type depending on how far along it is in processing. This allows changing many places that formerly used generic "Node *" pointers to use a more specific pointer type, thus reducing the number of IsA() tests and casts needed, as well as improving code clarity.

Also, the post-parse-analysis representation of DECLARE CURSOR is changed so that it looks more like EXPLAIN, PREPARE, etc. That is, the contained SELECT remains a child of the DeclareCursorStmt rather than getting flipped around to be the other way. It's now true for both Query and PlannedStmt that utilityStmt is non-null if and only if commandType is CMD_UTILITY. That allows simplifying a lot of places that were testing both fields. (I think some of those were just defensive programming, but in many places, it was actually necessary to avoid confusing DECLARE CURSOR with SELECT.)

Because PlannedStmt carries a canSetTag field, we're also able to get rid of some ad-hoc rules about how to reconstruct canSetTag for a bare utility statement; specifically, the assumption that a utility is canSetTag if and only if it's the only one in its list. While I see no near-term need for relaxing that restriction, it's nice to get rid of the ad-hocery.

The API of ProcessUtility() is changed so that what it's passed is the wrapper PlannedStmt not just the bare utility statement. This will affect all users of ProcessUtility_hook, but the changes are pretty trivial; see the affected contrib modules for examples of the minimum change needed. (Most compilers should give pointer-type-mismatch warnings for uncorrected code.)

There's also a change in the API of ExplainOneQuery_hook, to pass through cursorOptions instead of expecting hook functions to know what to pick. This is needed because of the DECLARE CURSOR changes, but really should have been done in 9.6; it's unlikely that any extant hook functions know about using CURSOR_OPT_PARALLEL_OK.

Finally, teach gram.y to save statement boundary locations in RawStmt nodes, and pass those through to Query and PlannedStmt nodes. This allows more intelligent handling of cases where a source query string contains multiple statements. This patch doesn't actually do anything with the information, but a follow-on patch will. (Passing this information through cleanly is the true motivation for these changes; while I think this is all good cleanup, it's unlikely we'd have bothered without this end goal.)

catversion bump because addition of location fields to struct Query affects stored rules.

This patch is by me, but it owes a good deal to Fabien Coelho who did a lot of preliminary work on the problem, and also reviewed the patch.

Discussion: https://postgr.es/m/alpine.DEB.2.20.1612200926310.29821@lancre
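A minimal sketch of the hook-side adjustment mentioned above (other hook parameters are omitted; the statement type shown is illustrative): the hook now receives the wrapping PlannedStmt and fetches the utility node from it rather than being handed the bare statement.

    /* pstmt is the PlannedStmt argument now passed to ProcessUtility_hook */
    Node       *parsetree = pstmt->utilityStmt;

    Assert(pstmt->commandType == CMD_UTILITY);
    if (IsA(parsetree, CopyStmt))
    {
        /* statement-specific handling, as before */
    }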
2017-01-03  Update copyright via script for 2017  (Bruce Momjian)
2016-12-12  Add support for temporary replication slots  (Peter Eisentraut)
This allows creating temporary replication slots that are removed automatically at the end of the session or on error. From: Petr Jelinek <[email protected]>
2016-11-17  Remove or reduce verbosity of some debug messages.  (Robert Haas)
The debug messages that merely print StartTransactionCommand, CommitTransactionCommand, ProcessUtility, or ProcessQuery with no additional details seem to be useless. Get rid of them. The transaction status messages produced by ShowTransactionState are occasionally useful, but they are extremely verbose, producing multiple lines of log output every time they fire, which can happen multiple times per transaction. So, reduce the level to DEBUG5; avoid emitting an extra line just to explain which debug point is at issue; and tighten up the rest of the message so it doesn't use quite so much horizontal space. With these changes, it's possible to run a somewhat busy system with a log level even as high as DEBUG4, whereas previously anything above DEBUG2 would flood the log with output that probably wasn't really all that useful.
2016-11-15  Account for catalog snapshot in PGXACT->xmin updates.  (Tom Lane)
The CatalogSnapshot was not plugged into SnapshotResetXmin()'s accounting for whether MyPgXact->xmin could be cleared or advanced. In normal transactions this was masked by the fact that the transaction snapshot would be older, but during backend startup and certain utility commands it was possible to re-use the CatalogSnapshot after MyPgXact->xmin had been cleared, meaning that recently-deleted rows could be pruned even though this snapshot could still see them, causing unexpected catalog lookup failures. This effect appears to be the explanation for a recent failure on buildfarm member piculet.

To fix, add the CatalogSnapshot to the RegisteredSnapshots heap whenever it is valid.

In the previous logic, it was possible for the CatalogSnapshot to remain valid across waits for client input, but with this change that would mean it delays advance of global xmin in cases where it did not before. To avoid possibly causing new table-bloat problems with clients that sit idle for long intervals, add code to invalidate the CatalogSnapshot before waiting for client input. (When the backend is busy, it's unlikely that the CatalogSnapshot would be the oldest snap for very long, so we don't worry about forcing early invalidation of it otherwise.)

In passing, remove the CatalogSnapshotStale flag in favor of using "CatalogSnapshot != NULL" to represent validity, as we do for the other special snapshots in snapmgr.c. And improve some obsolete comments.

No regression test because I don't know a deterministic way to cause this failure. But the stress test shown in the original discussion provokes "cache lookup failed for relation 1255" within a few dozen seconds for me.

Back-patch to 9.4 where MVCC catalog scans were introduced. (Note: it's quite easy to produce similar failures with the same test case in branches before 9.4. But MVCC catalog scans were supposed to fix that.)

Discussion: <[email protected]>
2016-10-27  Merge commit 'b5bce6c1ec6061c8a4f730d927e162db7e2ce365'  (Pavan Deolasee)
2016-10-19  Make getrusage() output a little more readable  (Peter Eisentraut)
Reviewed-by: Robert Haas <[email protected]> Reviewed-by: Peter Geoghegan <[email protected]>
2016-10-18  Use OS timestamp for computing whether the current statement has taken more than log_min_duration_statement to execute  (Pavan Deolasee)
Since we compare the statement start time with the one obtained by GetCurrentTimestamp(), which returns OS time without any adjustment for clock skew, it seems fair to use a statement start time obtained from the same clock.
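A hedged sketch of the comparison in question, with both endpoints taken from the same OS clock (the GUC and function names are from core PostgreSQL; the surrounding logic is simplified):

    TimestampTz start = GetCurrentStatementStartTimestamp();

    if (log_min_duration_statement >= 0 &&
        TimestampDifferenceExceeds(start, GetCurrentTimestamp(),
                                   log_min_duration_statement))
    {
        /* statement ran longer than the threshold: log its duration */
    }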
2016-10-18  Show originating coordinator name:pid and remote node name:pid in the ps output while running REMOTE SUBPLAN  (Pavan Deolasee)
This should help administrators quickly find the originating session and the remote session, which can be immensely useful for debugging.
2016-10-18  Add a mechanism to selectively refresh pooler information when only connection options, such as hostname/port, change for a node  (Pavan Deolasee)
This allows us to retain connections to all other nodes in the cluster and recreate only the connections to the node whose connection information has changed. This will be especially handy while dealing with datanode/coordinator failover.
2016-10-18  Fix several compiler warnings  (Pavan Deolasee)
2016-10-18  Add support for process-level control for overriding log levels.  (Pavan Deolasee)
This patch changes the behaviour of the pg_msgmodule_set/change() functions. These functions now only change the log levels for various messages; the actual logging won't start until one of the following enable() functions is called. This patch adds a few more functions:

- pg_msgmodule_enable(pid) - the given pid will start logging as per the current settings for various messages.
- pg_msgmodule_disable(pid) - the given pid will stop logging and use the compile-time settings.
- pg_msgmodule_enable_all(persistent) - all current processes will start logging as per the current settings. If "persistent" is set to true then all new processes will also log as per the settings.
- pg_msgmodule_disable_all() - all current and future processes will stop logging and only use compile-time settings.
2016-10-18  Avoid repeated palloc for query strings while handling multi-statement SQLs  (Pavan Deolasee)
We now only pass pointers around until we have the complete query string. At that point, we allocate only the required bytes and copy the query string.
2016-10-18  Collect and return query substrings corresponding to each SQL statement while parsing a multi-statement query separated by ';'  (Pavan Deolasee)
raw_parser() returns a list of parsetrees after parsing a multi-statement SQL query, where each parsetree corresponds to one SQL statement. It does not have any mechanism to return the source text of each SQL statement. In Postgres-XL, we send out the query text as it is to remote datanodes and coordinators while dealing with utility statements. Not having access to the individual SQL statements is a problem because we end up sending the same text again and again, leading to various issues. This patch adds a rudimentary mechanism to return a list of query strings along with the list of parsetrees.
2016-10-18  There was a missing commit from when the repo was forked; apply it to the new repo.  (Mason Sharp)
Original commit from the SourceForge repo:
    commit e61639b864e83b6b45d11b737ec3c3d67aeb4b56
    Author: Mason Sharp <[email protected]>
    Date:   Sun Jul 26 17:54:08 2015 -0700
    Changed license from the Mozilla Public License to the PostgreSQL License
2016-10-18  Fix a few compiler warnings.  (Pavan Deolasee)
2016-10-18  Sprinkle SQueue and portal management code with DEBUG messages  (Pavan Deolasee)
2016-10-18  Send a single command-complete message from remote coordinator or datanode  (Pavan Deolasee)
When a client sends a multi-command string for execution, the Postgres server executes each command separately and sends a command-complete for each command. The coordinator is not well equipped to handle this, though, since it sends the whole query string as it is to the datanode and expects a single command-complete. What we need is a mechanism where the coordinator sends each command separately to the datanode. It already parses the multi-command string into multiple parse trees. Earlier we did not have a mechanism to deparse utility commands, but IIRC we now have that. So we should look at using that infrastructure.
2016-10-11  Drop server support for FE/BE protocol version 1.0.  (Tom Lane)
While this isn't a lot of code, it's been essentially untestable for a very long time, because libpq doesn't support anything older than protocol 2.0, and has not since release 6.3. There's no reason to believe any other client-side code still uses that protocol, either. Discussion: <[email protected]>
2016-08-27  Add macros to make AllocSetContextCreate() calls simpler and safer.  (Tom Lane)
I found that half a dozen (nearly 5%) of our AllocSetContextCreate calls had typos in the context-sizing parameters. While none of these led to especially significant problems, they did create minor inefficiencies, and it's now clear that expecting people to copy-and-paste those calls accurately is not a great idea. Let's reduce the risk of future errors by introducing single macros that encapsulate the common use-cases. Three such macros are enough to cover all but two special-purpose contexts; those two calls can be left as-is, I think.

While this patch doesn't in itself improve matters for third-party extensions, it doesn't break anything for them either, and they can gradually adopt the simplified notation over time.

In passing, change TopMemoryContext to use the default allocation parameters. Formerly it could only be extended 8K at a time. That was probably reasonable when this code was written; but nowadays we create many more contexts than we did then, so that it's not unusual to have a couple hundred K in TopMemoryContext, even without considering various dubious code that sticks other things there. There seems no good reason not to let it use growing blocks like most other contexts.

Back-patch to 9.6, mostly because that's still close enough to HEAD that it's easy to do so, and keeping the branches in sync can be expected to avoid some future back-patching pain. The bugs fixed by these changes don't seem to be significant enough to justify fixing them further back.

Discussion: <[email protected]>
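The simplified notation, as a caller sees it (the context name is illustrative); the size macro expands to the three context-sizing parameters that previously had to be spelled out by hand:

    MemoryContext cxt;

    cxt = AllocSetContextCreate(CurrentMemoryContext,
                                "ExampleWorkContext",
                                ALLOCSET_DEFAULT_SIZES);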
2016-07-10Revert "Add some temporary code to record stack usage at server process exit."Tom Lane
This reverts commit 88cf37d2a86d5b66380003d7c3384530e3f91e40 as well as follow-on commits ea9c4a16d5ad88a1d28d43ef458e3209b53eb106 and c57562725d219c4249b82f4a4fb5aaeee3ae0d53. We've learned about as much as we can from the buildfarm.
2016-07-08  Add more temporary code to record stack usage at server process exit.  (Tom Lane)
After a look at preliminary results from commit 88cf37d2a86d5b66, I realized it'd be a good idea to spew out the maximum depth measurement seen by check_stack_depth. So add some quick-n-dirty code to do that. Like the previous commit, this will be reverted once we've gathered a set of buildfarm runs with it.
2016-05-27  Be more predictable about reporting "lock timeout" vs "statement timeout".  (Tom Lane)
If both timeout indicators are set when we arrive at ProcessInterrupts, we've historically just reported "lock timeout". However, some buildfarm members have been observed to fail isolationtester's timeouts test by reporting "lock timeout" when the statement timeout was expected to fire first. The cause seems to be that the process is allowed to sleep longer than expected (probably due to heavy machine load) so that the lock timeout happens before we reach the point of reporting the error, and then this arbitrary tiebreak rule does the wrong thing. We can improve matters by comparing the scheduled timeout times to decide which error to report.

I had originally proposed greatly reducing the 1-second window between the two timeouts in the test cases. On reflection that is a bad idea, at least for the case where the lock timeout is expected to fire first, because that would assume that it takes negligible time to get from statement start to the beginning of the lock wait. Thus, this patch doesn't completely remove the risk of test failures on slow machines. Empirically, however, the case this handles is the one we are seeing in the buildfarm. The explanation may be that the other case requires the scheduler to take the CPU away from a busy process, whereas the case fixed here only requires the scheduler to not give the CPU back right away to a process that has been woken from a multi-second sleep (and, perhaps, has been swapped out meanwhile).

Back-patch to 9.3 where the isolationtester timeouts test was added.

Discussion: <[email protected]>
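A sketch of the tiebreak described above (local variable names are illustrative; get_timeout_finish_time() stands for the timeout.c accessor used to retrieve each timeout's scheduled firing time): when both indicators are set, report whichever timeout was scheduled to fire first.

    bool        report_lock_timeout;

    /* Both fired: attribute the error to whichever was scheduled to fire first. */
    report_lock_timeout =
        (get_timeout_finish_time(LOCK_TIMEOUT) <
         get_timeout_finish_time(STATEMENT_TIMEOUT));

    if (report_lock_timeout)
        ereport(ERROR,
                (errcode(ERRCODE_LOCK_NOT_AVAILABLE),
                 errmsg("canceling statement due to lock timeout")));
    else
        ereport(ERROR,
                (errcode(ERRCODE_QUERY_CANCELED),
                 errmsg("canceling statement due to statement timeout")));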
2016-03-16  Add idle_in_transaction_session_timeout.  (Robert Haas)
Vik Fearing, reviewed by Stéphane Schildknecht and me, and revised slightly by me.