|
Per reports from Andres Freund and Luke Campbell, a server failure during
set_pglocale_pgservice results in a segfault rather than a useful error
message, because the infrastructure needed to use ereport hasn't been
initialized; specifically, MemoryContextInit hasn't been called.
One known cause of this is starting the server in a directory it
doesn't have permission to read.
We could try to prevent set_pglocale_pgservice from using anything that
depends on palloc or elog, but that would be messy, and the odds of future
breakage seem high. Moreover there are other things being called in main.c
that look likely to use palloc or elog too --- perhaps those things
shouldn't be there, but they are there today. The best solution seems to
be to move the call of MemoryContextInit to very early in the backend's
real main() function. I've verified that an elog or ereport occurring
immediately after that is now capable of sending something useful to
stderr.
I also added code to elog.c to print something intelligible rather than
just crashing if MemoryContextInit hasn't created the ErrorContext.
This could happen if MemoryContextInit itself fails (due to malloc
failure), and provides some future-proofing against someone trying to
sneak in new code even earlier in server startup.
Back-patch to all supported branches. Since we've only heard reports of
this type of failure recently, it may be that some recent change has made
it more likely to see a crash of this kind; but it sure looks like it's
broken all the way back.
|
|
Update all files in head, and files COPYRIGHT and legal.sgml in all back
branches.
|
|
Prevent handle_sig_alarm from losing control partway through due to a query
cancel (either an asynchronous SIGINT, or a cancel triggered by one of the
timeout handler functions). That would at least result in failure to
schedule any required future interrupt, and might result in actual
corruption of timeout.c's data structures, if the interrupt happened while
we were updating those.
We could still lose control if an asynchronous SIGINT arrives just as the
function is entered. This wouldn't break any data structures, but it would
have the same effect as if the SIGALRM interrupt had been silently lost:
we'd not fire any currently-due handlers, nor schedule any new interrupt.
To forestall that scenario, forcibly reschedule any pending timer interrupt
during AbortTransaction and AbortSubTransaction. We can avoid any extra
kernel call in most cases by not doing that until we've allowed
LockErrorCleanup to kill the DEADLOCK_TIMEOUT and LOCK_TIMEOUT events.
Another hazard is that some platforms (at least Linux and *BSD) block a
signal before calling its handler and then unblock it on return. When we
longjmp out of the handler, the unblock doesn't happen, and the signal is
left blocked indefinitely. Again, we can fix that by forcibly unblocking
signals during AbortTransaction and AbortSubTransaction.
These latter two problems do not manifest when the longjmp reaches
postgres.c, because the error recovery code there kills all pending timeout
events anyway, and it uses sigsetjmp(..., 1) so that the appropriate signal
mask is restored. So errors thrown outside any transaction should be OK
already, and cleaning up in AbortTransaction and AbortSubTransaction should
be enough to fix these issues. (We're assuming that any code that catches
a query cancel error and doesn't re-throw it will do at least a
subtransaction abort to clean up; but that was pretty much required already
by other subsystems.)
Lastly, ProcSleep should not clear the LOCK_TIMEOUT indicator flag when
disabling that event: if a lock timeout interrupt happened after the lock
was granted, the ensuing query cancel is still going to happen at the next
CHECK_FOR_INTERRUPTS, and we want to report it as a lock timeout not a user
cancel.
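Schematically, the abort-time cleanup amounts to something like this (a hedged
sketch using plain POSIX calls; reschedule_pending_timeouts() is a stand-in
name for the timeout.c entry point):

    #include <signal.h>

    /* called from AbortTransaction()/AbortSubTransaction() cleanup */
    static void
    recover_from_interrupted_timeout(void)
    {
        sigset_t    unblock;

        /* re-arm the timer so any still-pending timeout events fire */
        reschedule_pending_timeouts();

        /* undo a SIGALRM block left over from longjmp'ing out of the handler */
        sigemptyset(&unblock);
        sigaddset(&unblock, SIGALRM);
        sigprocmask(SIG_UNBLOCK, &unblock, NULL);
    }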
Per reports from Dan Wood.
Back-patch to 9.3 where the new timeout handling infrastructure was
introduced. We may at some point decide to back-patch the signal
unblocking changes further, but I'll desist from that until we hear
actual field complaints about it.
|
|
These functions must be careful that they return the intended value of
errno to their callers. There were several scenarios where this might
not happen:
1. The recent SSL renegotiation patch added a hunk of code that would
execute after setting errno. In the first place, it's doubtful that we
should consider renegotiation to be successfully completed after a failure,
and in the second, there's no real guarantee that the called OpenSSL
routines wouldn't clobber errno. Fix by not executing that hunk except
during success exit.
2. errno was left in an unknown state in case of an unrecognized return
code from SSL_get_error(). While this is a "can't happen" case, it seems
like a good idea to be sure we know what would happen, so reset errno to
ECONNRESET in such cases. (The corresponding code in libpq's fe-secure.c
already did this.)
3. There was an (undocumented) assumption that client_read_ended() wouldn't
change errno. While true in the current state of the code, this seems less
than future-proof. Add explicit saving/restoring of errno to make sure
that changes in the called functions won't break things.
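Item 3, for instance, is the usual save/restore pattern (sketch):

    int         save_errno = errno;

    client_read_ended();        /* might clobber errno in the future */
    errno = save_errno;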
I see no need to back-patch, since #1 is new code and the other two issues
are mostly hypothetical.
Per discussion with Amit Kapila.
|
|
This shaves a few cycles, and generally seems like good programming
practice.
David Rowley
|
|
|
|
|
|
Once the administrator has called for an immediate shutdown or a backend
crash has triggered a reinitialization, no mere SIGINT or SIGTERM should
change that course. Such derailment remains possible when the signal
arrives before quickdie() blocks signals. That being a narrow race
affecting most PostgreSQL signal handlers in some way, leave it for
another patch. Back-patch this to all supported versions.
|
|
Doing so was helpful for some Valgrind usage and distracting for other
usage. One can achieve the same effect by changing log_statement and
pointing both PostgreSQL and Valgrind logging to stderr.
Per gripe from Andres Freund.
|
|
This is like shared_preload_libraries except that it takes effect at
backend start and can be changed without a full postmaster restart. It
is like local_preload_libraries except that it is still only settable by
a superuser. This can be a better way to load modules such as
auto_explain.
Since there are now three preload parameters, regroup the documentation
a bit. Put all parameters into one section, explain common
functionality only once, update the descriptions to reflect current and
future realities.
Reviewed-by: Dimitri Fontaine <[email protected]>
|
|
Set errcode to ERRCODE_LOCK_NOT_AVAILABLE
Zoltán Böszörményi
|
|
Valgrind "client requests" in aset.c and mcxt.c teach Valgrind and its
Memcheck tool about the PostgreSQL allocator. This makes Valgrind
roughly as sensitive to memory errors involving palloc chunks as it is
to memory errors involving malloc chunks. Further client requests in
PageAddItem() and printtup() verify that all bits being added to a
buffer page or furnished to an output function are predictably-defined.
Those tests catch failures of C-language functions to fully initialize
the bits of a Datum, which in turn stymie optimizations that rely on
_equalConst(). Define the USE_VALGRIND symbol in pg_config_manual.h to
enable these additions. An included "suppression file" silences nominal
errors we don't plan to fix.
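The client requests look roughly like this (a simplified sketch using the
standard valgrind/memcheck.h macros; chunk_ptr and friends are placeholders):

    #include <valgrind/memcheck.h>

    /* chunk returned to a freelist: any access is an error */
    VALGRIND_MAKE_MEM_NOACCESS(chunk_ptr, chunk_size);

    /* chunk handed out by palloc(): addressable but undefined */
    VALGRIND_MAKE_MEM_UNDEFINED(chunk_ptr, requested_size);

    /* in PageAddItem()/printtup(): bits we emit must be fully defined */
    VALGRIND_CHECK_MEM_IS_DEFINED(datum_ptr, datum_len);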
Reviewed in earlier versions by Peter Geoghegan and Korry Douglas.
|
|
This is the first run of the Perl-based pgindent script. Also update
pgindent instructions.
|
|
An oversight in commit e710b65c1c56ca7b91f662c63d37ff2e72862a94 allowed
database names beginning with "-" to be treated as though they were secure
command-line switches; and this switch processing occurs before client
authentication, so that even an unprivileged remote attacker could exploit
the bug, needing only connectivity to the postmaster's port. Assorted
exploits for this are possible, some requiring a valid database login,
some not. The worst known problem is that the "-r" switch can be invoked
to redirect the process's stderr output, so that subsequent error messages
will be appended to any file the server can write. This can for example be
used to corrupt the server's configuration files, so that it will fail when
next restarted. Complete destruction of database tables is also possible.
Fix by keeping the database name extracted from a startup packet fully
separate from command-line switches, as had already been done with the
user name field.
The Postgres project thanks Mitsumasa Kondo for discovering this bug,
Kyotaro Horiguchi for drafting the fix, and Noah Misch for recognizing
the full extent of the danger.
Security: CVE-2013-1899
|
|
This GUC allows limiting the time spent waiting to acquire any one
heavyweight lock.
In support of this, improve the recently-added timeout infrastructure
to permit efficiently enabling or disabling multiple timeouts at once.
That reduces the performance hit from turning on lock_timeout, though
it's still not zero.
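The batch interface looks roughly like this (a sketch of how a lock wait might
arm both timeouts with a single setitimer() call; treat the details as
illustrative):

    EnableTimeoutParams timeouts[2];

    timeouts[0].id = DEADLOCK_TIMEOUT;
    timeouts[0].type = TMPARAM_AFTER;
    timeouts[0].delay_ms = DeadlockTimeout;
    timeouts[1].id = LOCK_TIMEOUT;
    timeouts[1].type = TMPARAM_AFTER;
    timeouts[1].delay_ms = LockTimeout;

    enable_timeouts(timeouts, 2);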
Zoltán Böszörményi, reviewed by Tom Lane,
Stephen Frost, and Hari Babu
|
|
|
|
... not on auxiliary processes. I managed to overlook the fact that I
had disabled assertions on my HEAD checkout long ago.
Hopefully this will turn the buildfarm green again, and put an end to
today's silliness.
|
|
Fully update git head, and update back branches in ./COPYRIGHT and
legal.sgml files.
|
|
brought up within an existing cluster.
|
|
This reverts commit d573e239f03506920938bf0be56c868d9c3416da, "Take fewer
snapshots". While that seemed like a good idea at the time, it caused
execution to use a snapshot that had been acquired before locking any of
the tables mentioned in the query. This created user-visible anomalies
that were not present in any prior release of Postgres, as reported by
Tomas Vondra. While this whole area could do with a redesign (since there
are related cases that have anomalies anyway), it doesn't seem likely that
any future patch would be reasonably back-patchable; and we don't want 9.2
to exhibit a behavior that's subtly unlike either past or future releases.
Hence, revert to prior code while we rethink the problem.
|
|
|
|
The regular backend's main loop handles signal handling and error recovery
better than the current WAL sender command loop does. For example, if the
client hangs and a SIGTERM is received before starting streaming, the
walsender will now terminate immediately, rather than hang until the
connection times out.
|
|
This commit provides a fix for parameter handling in queries called
through SPI. Most of the issues were related to plpgsql functions with
DML and SELECT queries.
When sending a list of parameters to remote nodes, the PostgreSQL
protocol does not allow types with incorrect OIDs or names (in the case
of XC, incorrect names).
For queries called through SPI, some parameters have no type because they
are not used by all the queries involved in the call; the reason for this
behavior is that parameter management is centralized in the SPI itself.
Such parameters are handled the same way as dropped attributes of
relations: a parameter that has no type is treated as a NULL entry, so
the remote node does not complain about a nonexistent type.
This fix is done on top of the existing infrastructure. Honestly, this
should be rewritten with a set of APIs that makes remote parameter
management easier to handle in a cluster, but that is out of scope for
the time being.
Two regression tests are fixed as a result: plpgsql and rangefuncs.
|
|
Replace unix_socket_directory with unix_socket_directories, which is a list
of socket directories, and adjust postmaster's code to allow zero or more
Unix-domain sockets to be created.
This is mostly a straightforward change, but since the Unix sockets ought
to be created after the TCP/IP sockets for safety reasons (better chance
of detecting a port number conflict), AddToDataDirLockFile needs to be
fixed to support out-of-order updates of data directory lockfile lines.
That's a change that had been foreseen to be necessary someday anyway.
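Conceptually (a sketch, with create_unix_socket() as a hypothetical wrapper
around the socket-creation code):

    List       *dirs;
    ListCell   *lc;
    char       *rawstring = pstrdup(Unix_socket_directories);

    if (!SplitDirectoriesString(rawstring, ',', &dirs))
        ereport(FATAL,
                (errmsg("invalid list syntax in parameter \"%s\"",
                        "unix_socket_directories")));

    foreach(lc, dirs)
    {
        char       *sockdir = (char *) lfirst(lc);

        create_unix_socket(sockdir);    /* one Unix-domain socket per entry */
    }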
Honza Horak, reviewed and revised by Tom Lane
|
|
Using long options is seen as a long-term solution to avoid further
option conflicts. In Postgres 9.2, -C is now used to print the values
of runtime parameters.
|
|
Some errors with CreateCachedPlan and CTAS have been fixed.
|
|
This is the merge of Postgres-XC master branch with the intersection of
PostgreSQL master branch and 9.2 stable branch.
All the manual conflicts have been resolved; note that the code does not
compile yet. The remaining compilation issues will be fixed later.
Conflicts:
COPYRIGHT
GNUmakefile.in
configure
configure.in
contrib/pgbench/pgbench.c
contrib/sepgsql/hooks.c
src/backend/access/common/heaptuple.c
src/backend/access/heap/heapam.c
src/backend/access/transam/Makefile
src/backend/access/transam/rmgr.c
src/backend/access/transam/twophase.c
src/backend/access/transam/varsup.c
src/backend/access/transam/xact.c
src/backend/catalog/Makefile
src/backend/commands/comment.c
src/backend/commands/copy.c
src/backend/commands/explain.c
src/backend/commands/indexcmds.c
src/backend/commands/prepare.c
src/backend/commands/tablecmds.c
src/backend/commands/view.c
src/backend/executor/functions.c
src/backend/executor/spi.c
src/backend/nodes/copyfuncs.c
src/backend/nodes/makefuncs.c
src/backend/optimizer/path/allpaths.c
src/backend/optimizer/plan/createplan.c
src/backend/optimizer/plan/planner.c
src/backend/optimizer/plan/setrefs.c
src/backend/optimizer/util/var.c
src/backend/parser/analyze.c
src/backend/parser/gram.y
src/backend/parser/parse_agg.c
src/backend/postmaster/postmaster.c
src/backend/storage/ipc/procarray.c
src/backend/storage/lmgr/proc.c
src/backend/tcop/postgres.c
src/backend/tcop/utility.c
src/backend/utils/adt/dbsize.c
src/backend/utils/adt/lockfuncs.c
src/backend/utils/adt/misc.c
src/backend/utils/adt/ruleutils.c
src/backend/utils/cache/plancache.c
src/backend/utils/misc/guc.c
src/bin/initdb/initdb.c
src/bin/pg_ctl/pg_ctl.c
src/bin/pg_dump/pg_dump.c
src/bin/psql/startup.c
src/bin/psql/tab-complete.c
src/include/Makefile
src/include/access/rmgr.h
src/include/access/xact.h
src/include/catalog/catversion.h
src/include/catalog/pg_aggregate.h
src/include/catalog/pg_proc.h
src/include/commands/explain.h
src/include/commands/schemacmds.h
src/include/nodes/parsenodes.h
src/include/nodes/pg_list.h
src/include/nodes/primnodes.h
src/include/optimizer/pathnode.h
src/include/optimizer/var.h
src/include/pg_config.h.win32
src/include/storage/proc.h
src/include/utils/plancache.h
src/include/utils/snapshot.h
src/include/utils/timestamp.h
src/test/regress/expected/aggregates.out
src/test/regress/expected/create_index.out
src/test/regress/expected/inherit.out
src/test/regress/expected/rangefuncs.out
src/test/regress/expected/sanity_check.out
src/test/regress/expected/sequence.out
src/test/regress/expected/with.out
src/test/regress/output/constraints.source
src/test/regress/sql/inherit.sql
src/test/regress/sql/rules.sql
|
|
This commit adds a feature to allow Postgres-XC nodes to communicate
a Command Id to remote nodes. Remote nodes can also send back a Command
Id to the Coordinator driving the transaction.
As a consequence, this solves numerous issues with data visibility in
cursors, as well as an old XC issue with INSERT SELECT when the INSERT is
done on a child table after scanning the parent.
This also allows the command ID to be correctly incremented on the
Coordinator when triggers and/or constraints are fired on remote nodes.
In order to allow nodes to communicate the command ID, a new message type
'M' is added.
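For illustration only (XC-specific, with the details treated as hypothetical
rather than the committed wire format), sending such a message with the usual
backend libpq primitives could look like:

    StringInfoData buf;

    /* 'M' message: tell the remote node which command ID to use */
    pq_beginmessage(&buf, 'M');
    pq_sendint(&buf, (int) GetCurrentCommandId(false), 4);
    pq_endmessage(&buf);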
Note: this patch also fixes some whitespace in xact.c.
Patch from Abbas Butt; I just added some simplifications and comments,
and finalized the packing.
|
|
Management of timeouts was getting a little cumbersome; what we
originally had was more than enough back when we were only concerned
about deadlocks and query cancel; however, when we added timeouts for
standby processes, the code got considerably messier. Since there are
plans to add more complex timeouts, this seems a good time to introduce
a central timeout handling module.
External modules register their timeout handlers during process
initialization, and later enable and disable them as they see fit using
a simple API; timeout.c is in charge of keeping track of which timeouts
are in effect at any time, installing a common SIGALRM signal handler,
and calling setitimer() as appropriate to ensure timely firing of
external handlers.
timeout.c additionally supports pluggable modules to add their own
timeouts, though this capability isn't exercised anywhere yet.
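From a caller's point of view the API is roughly (a sketch; see timeout.h for
the real declarations):

    /* once, during process initialization */
    RegisterTimeout(DEADLOCK_TIMEOUT, CheckDeadLock);

    /* whenever a lock wait begins */
    enable_timeout_after(DEADLOCK_TIMEOUT, DeadlockTimeout);

    /* and when the wait ends */
    disable_timeout(DEADLOCK_TIMEOUT, false);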
Additionally, as of this commit, walsender processes are aware of
timeouts; we had a preexisting bug there that made those ignore SIGALRM,
thus being subject to unhandled deadlocks, particularly during the
authentication phase. This has already been fixed in back branches in
commit 0bf8eb2a, which see for more details.
Main author: Zoltán Böszörményi
Some review and cleanup by Álvaro Herrera
Extensive reworking by Tom Lane
|
|
The Solaris Studio compiler warns about these instances, unlike more
mainstream compilers such as gcc. But manual inspection showed that
the code is clearly not reachable, and we hope no worthy compiler will
complain about removing this code.
|
|
This merge with PostgreSQL master has been requested by Ashutosh.
Commit c1d9579 contains a modification of pull_var_clause that allows
aggregate expressions inside Var clauses to be filtered, which is useful
for Postgres-XC planner optimizations regarding remote query path
determination.
At this point, Postgres-XC master is half-way between the intersection
of postgres/master&postgres/REL9_1_STABLE and the intersection of
postgres/master&postgres/REL9_2_STABLE. The rest of the Postgres-XC merge,
up to the merge-base of postgres/master&postgres/REL9_2_STABLE, is
planned to be done in a couple of weeks. Until then there are no plans
to release an XC version based on a stable branch of Postgres, so it is
OK to leave XC master at this point of postgres master.
Conflicts:
configure
configure.in
src/Makefile.shlib
src/backend/access/heap/heapam.c
src/backend/access/index/indexam.c
src/backend/access/transam/transam.c
src/backend/access/transam/xact.c
src/backend/access/transam/xlog.c
src/backend/commands/view.c
src/backend/executor/execAmi.c
src/backend/storage/lmgr/proc.c
src/include/catalog/catversion.h
src/include/catalog/pg_proc.h
src/include/pg_config.h.win32
src/pl/plpgsql/src/plpgsql--1.0.sql
src/test/regress/expected/prepared_xacts.out
src/test/regress/expected/subselect.out
src/test/regress/pg_regress.c
src/test/regress/sql/prepared_xacts.sql
|
|
There was a wild mix of calling conventions: Some were declared to
return void and didn't return, some returned an int exit code, some
claimed to return an exit code, which the callers checked, but
actually never returned, and so on.
Now all of these functions are declared to return void and decorated
with attribute noreturn and don't return. That's easiest, and most
code already worked that way.
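The convention is the usual one (a hypothetical declaration, for illustration):

    /* declared void, marked noreturn, and really never returns */
    extern void StartSomeProcess(int argc, char *argv[]) __attribute__((noreturn));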
|
|
commit-fest.
|
|
This unifies the naming convention for Coordinator and Datanode in
Postgres-XC with the documentation.
|
|
More information about "Postgres-XC Development Group" is available at
http://sourceforge.net/apps/mediawiki/postgres-xc/index.php?title=Charter
|
|
"Unexpected EOF on client connection" without an open transaction
is mostly noise, so turn it into DEBUG1. With an open transaction it's
still indicating a problem, so keep those as ERROR, and change the message
to indicate that it happened in a transaction.
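In effect (a sketch, not the literal code; in_xact stands for an
is-there-an-open-transaction test):

    int         elevel = in_xact ? ERROR : DEBUG1;

    ereport(elevel,
            (errcode(ERRCODE_CONNECTION_FAILURE),
             errmsg(in_xact
                    ? "unexpected EOF on client connection with an open transaction"
                    : "unexpected EOF on client connection")));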
|
|
Found these with grep -r "for for ".
|
|
The previous code could cause a backend crash after BEGIN; SAVEPOINT a;
LOCK TABLE foo (interrupted by ^C or statement timeout); ROLLBACK TO
SAVEPOINT a; LOCK TABLE foo, and might have leaked strong-lock counts
in other situations.
Report by Zoltán Böszörményi; patch review by Jeff Davis.
|
|
This was a thinko in the previous commit. Now that the stack base pointer
is set in PostmasterMain and SubPostmasterMain, it doesn't need to be set
in PostgresMain anymore.
|
|
We used to initialize the stack base pointer only when starting up a regular
backend, not in other processes. In particular, autovacuum workers can run
arbitrary user code, and without stack-depth checking, infinite recursion
in e.g. an index expression will bring down the whole cluster.
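A hedged sketch of the fix: every process type records its stack base on
startup so that check_stack_depth() has a reference point, e.g.:

    /* near the top of each process's entry point, e.g. AutoVacWorkerMain() */
    (void) set_stack_base();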
The comment about PL/Java using set_stack_base() is not yet true. As the
code stands, PL/Java still modifies the stack_base_ptr variable directly.
However, it's been discussed on the PL/Java mailing list that it should be
changed to use the function, because PL/Java is currently oblivious to the
register stack used on Itanium. There's another issue with PL/Java, namely
that the stack base pointer it sets is not really the base of the stack; it
could be something close to the bottom of the stack. That's a separate issue
that might need some further changes to this code, but that's a different
story.
Backpatch to all supported releases.
|
|
DATESTYLE, TZ: if a psql client sets these as environment variables, they do
not get propagated to Datanodes, as a result of which the date formats
returned do not match PGDATESTYLE, for example. So, in general, these session
default values should be passed on to the pooler's connections to Datanodes
and to the remote Coordinator.
These values help maintain consistent parameter values between the Datanodes
and the Coordinator for a given client session. The parameters are passed
from the Coordinator backend to the pooler session using pgoptions (e.g.
'-c datestyle=Postgres,MDY'). The database pools are now searched for the
unique combination of database, username and pgoptions; earlier the search
was based on database and username only.
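Illustrative only (datestyle_str/timezone_str stand in for the session's
current values, and find_database_pool() is a hypothetical lookup):

    char        pgoptions[256];

    /* fold session GUCs into the pooler connection key */
    snprintf(pgoptions, sizeof(pgoptions),
             "-c datestyle=%s -c timezone=%s",
             datestyle_str, timezone_str);

    /* pools are now looked up by (database, username, pgoptions) */
    pool = find_database_pool(database, username, pgoptions);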
|
|
Add a queryId field to Query and PlannedStmt. This is not used by the
core backend, except for being copied around at appropriate times.
It's meant to allow plug-ins to track a particular query forward from
parse analysis to execution.
The queryId is intentionally not dumped into stored rules (and hence this
commit doesn't bump catversion). You could argue that choice either way,
but it seems better that stored rule strings not have any dependency
on plug-ins that might or might not be present.
Also, add a post_parse_analyze_hook that gets invoked at the end of
parse analysis (but only for top-level analysis of complete queries,
not cases such as analyzing a domain's default-value expression).
This is mainly meant to be used to compute and assign a queryId,
but it could have other applications.
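A plug-in might use the hook along these lines (a sketch; myext_hash_query()
is a made-up hashing function):

    #include "parser/analyze.h"

    static post_parse_analyze_hook_type prev_post_parse_analyze_hook = NULL;

    static void
    myext_post_parse_analyze(ParseState *pstate, Query *query)
    {
        if (prev_post_parse_analyze_hook)
            prev_post_parse_analyze_hook(pstate, query);

        /* compute and assign a fingerprint for later stages to see */
        query->queryId = myext_hash_query(pstate->p_sourcetext);
    }

    void
    _PG_init(void)
    {
        prev_post_parse_analyze_hook = post_parse_analyze_hook;
        post_parse_analyze_hook = myext_post_parse_analyze;
    }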
Peter Geoghegan
|
|
Making this operation look like a utility statement seems generally a good
idea, and particularly so in light of the desire to provide command
triggers for utility statements. The original choice of representing it as
SELECT with an IntoClause appendage had metastasized into rather a lot of
places, unfortunately, so that this patch is a great deal more complicated
than one might at first expect.
In particular, keeping EXPLAIN working for SELECT INTO and CREATE TABLE AS
subcommands required restructuring some EXPLAIN-related APIs. Add-on code
that calls ExplainOnePlan or ExplainOneUtility, or uses
ExplainOneQuery_hook, will need adjustment.
Also, the cases PREPARE ... SELECT INTO and CREATE RULE ... SELECT INTO,
which formerly were accepted though undocumented, are no longer accepted.
The PREPARE case can be replaced with use of CREATE TABLE AS EXECUTE.
The CREATE RULE case doesn't seem to have much real-world use (since the
rule would work only once before failing with "table already exists"),
so we'll not bother with that one.
Both SELECT INTO and CREATE TABLE AS still return a command tag of
"SELECT nnnn". There was some discussion of returning "CREATE TABLE nnnn",
but for the moment backwards compatibility wins the day.
Andres Freund and Tom Lane
|
|
It now prints the argument that was at fault.
Also fix a small misbehavior where the error message issued by
getopt() would complain about a program named "--single", because
that's what argv[0] is in the server process.
|
|
This removes the pooler process's dependency on the catalog table cache.
Pooler memory is now organized as follows (sketched below):
- PoolerMemoryContext (an existing context), allocated in TopMemoryContext
and used by the pooler process
- PoolerCoreContext, allocated in PoolerMemoryContext, used by database pool
contexts
- PoolerAgentContext, the pooler agent context, used by pooler agents
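In memory-context terms (a sketch; allocation sizes are the defaults, and
PoolerAgentContext's parent is assumed to be PoolerMemoryContext):

    PoolerMemoryContext = AllocSetContextCreate(TopMemoryContext,
                                                "PoolerMemoryContext",
                                                ALLOCSET_DEFAULT_MINSIZE,
                                                ALLOCSET_DEFAULT_INITSIZE,
                                                ALLOCSET_DEFAULT_MAXSIZE);
    PoolerCoreContext = AllocSetContextCreate(PoolerMemoryContext,
                                              "PoolerCoreContext",
                                              ALLOCSET_DEFAULT_MINSIZE,
                                              ALLOCSET_DEFAULT_INITSIZE,
                                              ALLOCSET_DEFAULT_MAXSIZE);
    PoolerAgentContext = AllocSetContextCreate(PoolerMemoryContext,
                                               "PoolerAgentContext",
                                               ALLOCSET_DEFAULT_MINSIZE,
                                               ALLOCSET_DEFAULT_INITSIZE,
                                               ALLOCSET_DEFAULT_MAXSIZE);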
Pooler agents now use node OIDs instead of node indexes. This protects
pooler agents against node reordering when catalog tables are modified by
node DDL. The warning/error message that used to be thrown back to the
client connection when connection information was inconsistent is also
removed thanks to that.
Two new GUC parameters, called max_coordinators and max_datanodes
respectively, define the maximum number of Coordinators and Datanodes on a
Coordinator. They represent the maximum number of nodes that can be defined
in the cluster, and do not influence the dynamic behavior of the cluster.
A node definition slot in shared memory takes approximately 140 bytes.
Patch by Andrei Martsinchyk.
Review, some issue fixes (preferred/primary support ...) and some workarounds
by me. Performance has been checked by Sutou Takayuki.
|
|
We track the nodes involved in a global transaction as and when we send
BEGIN commands to them. The nodes are tracked as either read-only or
read-write participants. This is important in order to later decide
whether to perform a 2PC or a simple commit on the remote nodes. Right
now we rely on planner information (and the patch does not change that
much) to know whether a transaction is doing any write activity. This can
be problematic in the presence of volatile functions that can change the
database state.
The decision whether to run a statement in auto-commit mode on the data
node or within a transaction block is also hard. We can run a statement
in auto-commit mode if and only if exactly one data node is involved in
the transaction AND the statement will be sent to the data node only
once. Before this work, there were places where we were not encapsulating
statements in a transaction block, possibly leading to wrong results.
After fixing this, there is a chance that we might see some performance
impact.
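A hypothetical sketch of that rule (names are made up for illustration):

    /* true only when running the statement in auto-commit mode is safe */
    static bool
    can_use_autocommit(int num_datanodes_involved, bool stmt_sent_once)
    {
        return num_datanodes_involved == 1 && stmt_sent_once;
    }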
There are many cleanups. Here is a list, probably not complete:
1. We now have only one member (transactionId) to track the local or
global transaction identifier. Since we should always be using the global
identifier in XC, this reduces the risk of wrong IDs being assigned
2. I removed the additional transaction state - TBLOCK_END_NOT_GTM. It
wasn't clear why we needed it
3. 2PC is now hooked into PrepareTransaction/CommitTransaction.
Similarly, rollbacks and part commits are handled in AbortTransaction
4. The GTM transaction termination is handled in
Commit/AbortTransaction. That gives us a single entry point for
transaction management
5. Instead of various flags, we now just track whether the local node is
involved in the transaction or not. That helps us decide whether to
perform 2PC on the local and remote nodes. This can be further enhanced
by looking at some other existing globals that PostgreSQL maintains
6. I have ripped out many other functions which are either not needed
now or probably needed for explicit 2PC. I will take another look at
them when I add back support for explicit 2PC
|
|
The basics of DEFERRED constraints were not working, for two reasons.
1) There was a bug in the management of transaction-related parameters
(parameters that can be used only inside a transaction block, like SET
CONSTRAINTS and SET LOCAL). When a transaction parameter was set in a
transaction block, it was correctly sent down to remote nodes whose
connections were already held, but if new connections were obtained,
the held transaction parameter was sent to the remote node before any
BEGIN query, and so had no effect.
2) If an implicit prepare failed, remote nodes were not correctly
rolled back, so remote connections kept holding locks on the prepared
relations. This could result in a stale state if, for example, a DDL
was involved in the transaction.
Patch by Abbas Butt, review and some improvements by me.
|
|
This separates the state (running/idle/idle in transaction, etc.) into
its own field ("state"), and leaves the query field containing just the
query text.
The query text will now mean "current query" when a query is running
and "last query" in other states. Accordingly, the field has been
renamed from current_query to query.
Since backwards compatibility was broken anyway to make that, the procpid
field has also been renamed to pid - along with the same field in
pg_stat_replication for consistency.
Scott Mead and Magnus Hagander, review work from Greg Smith
|
|
|
|
|