Age | Commit message | Author |
|
Pay attention to the attisdropped field and skip over TupleDesc fields
that have it set. Not a real problem until we get table returning
functions, but it's the right thing to do anyway.
Jan Urbański
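A minimal sketch of the skip described above, using that era's TupleDesc layout (the counting helper itself is illustrative):

    #include "postgres.h"
    #include "access/tupdesc.h"

    static int
    count_live_columns(TupleDesc tupdesc)
    {
        int i;
        int live = 0;

        for (i = 0; i < tupdesc->natts; i++)
        {
            if (tupdesc->attrs[i]->attisdropped)
                continue;           /* ignore dropped columns entirely */
            live++;
        }
        return live;
    }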
|
|
As discussed, even when the PL needs a permanent memory location, it
should use palloc, not malloc; doing so also makes error handling easier.
Jan Urbański
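A hedged sketch of the point, with invented names rather than plpython's: a permanent allocation goes through the memory-context machinery (here TopMemoryContext) instead of malloc, so it stays visible to pfree() and to error cleanup.

    #include "postgres.h"
    #include "utils/memutils.h"

    typedef struct SavedProcSource
    {
        Oid   fn_oid;
        char *src;
    } SavedProcSource;

    static SavedProcSource *
    save_proc_source(Oid fn_oid, const char *src)
    {
        SavedProcSource *p;

        /* was: p = malloc(sizeof(SavedProcSource)); */
        p = (SavedProcSource *) MemoryContextAlloc(TopMemoryContext,
                                                   sizeof(SavedProcSource));
        p->fn_oid = fn_oid;
        p->src = MemoryContextStrdup(TopMemoryContext, src);
        return p;
    }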
|
|
If a function that uses yield to return rows fails halfway through, the
iterator stays open and subsequent calls to the function resume reading
from it. The fix is to unref the iterator and set it to NULL if an
error has occurred.
Jan Urbański
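A minimal sketch of the fix's shape, assuming the iterator is kept in a pointer field such as plpython's proc->setof; the helper below is illustrative:

    #include <Python.h>

    static PyObject *
    next_row_or_reset(PyObject **iterator)
    {
        PyObject *row = PyIter_Next(*iterator);

        if (row == NULL && PyErr_Occurred())
        {
            /* the generator failed halfway: drop it so the next call
             * starts over instead of resuming a broken iterator */
            Py_DECREF(*iterator);
            *iterator = NULL;
        }
        return row;
    }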
|
|
Two separate hash tables are used for regular procedures and for
trigger procedures, since the way trigger procedures work is quite
different from normal stored procedures. Change the signatures of
PLy_procedure_{get,create} to accept the function OID and a Boolean
flag indicating whether it's a trigger. This should make implementing
a PL/Python validator easier.
Using HTABs instead of Python dictionaries makes error recovery
easier, and allows for procedures to be cached based on their OIDs,
not their names. It also allows getting rid of the PyCObject field
that used to hold a pointer to PLyProcedure, since PyCObjects are
deprecated in Python 2.7 and replaced by Capsules in Python 3.
Jan Urbański
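A sketch of the caching scheme described above, using a backend HTAB keyed by function OID; the struct and table names are illustrative, not the actual plpython ones:

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct CachedProcEntry
    {
        Oid   fn_oid;       /* hash key */
        void *proc;         /* cached procedure representation */
    } CachedProcEntry;

    static HTAB *proc_cache;
    static HTAB *trigger_proc_cache;

    static void
    init_proc_caches(void)
    {
        HASHCTL ctl;

        MemSet(&ctl, 0, sizeof(ctl));
        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(CachedProcEntry);
        ctl.hash = oid_hash;

        proc_cache = hash_create("PL/Python procedures", 32,
                                 &ctl, HASH_ELEM | HASH_FUNCTION);
        trigger_proc_cache = hash_create("PL/Python trigger procedures", 32,
                                         &ctl, HASH_ELEM | HASH_FUNCTION);
    }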
|
|
Per bug #5835 by Julien Demoor
Author: Alex Hunsaker
|
|
We must stay in the function's SPI context until done calling the iterator
that returns the set result. Otherwise, any attempt to invoke SPI features
in the Python code called by the iterator will malfunction. Diagnosis and
patch by Jan Urbanski, per bug report from Jean-Baptiste Quenot.
Back-patch to 8.2; there was no support for SRFs in previous versions of
plpython.
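The constraint in sketch form, with assumed names rather than the real SRF code path: SPI_finish() has to wait until the iterator is exhausted, because the Python code behind it may call back into SPI.

    #include <Python.h>
    #include "postgres.h"
    #include "executor/spi.h"

    static void
    drain_set_result(PyObject *iter)
    {
        PyObject *row;

        SPI_connect();

        while ((row = PyIter_Next(iter)) != NULL)
        {
            /* ... convert the row; the Python code producing it may use
             * plpy.execute() and therefore needs the SPI connection ... */
            Py_DECREF(row);
        }

        SPI_finish();       /* only after the last row has been fetched */
    }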
|
|
This was broken in 9.0 while improving plpython's conversion behavior for
bytea and boolean. Per bug report from maizi.
|
|
This patch adds the SQL-standard concept of an INSTEAD OF trigger, which
is fired instead of performing a physical insert/update/delete. The
trigger function is passed the entire old and/or new rows of the view,
and must figure out what to do to the underlying tables to implement
the update. So this feature can be used to implement updatable views
using trigger programming style rather than rule hacking.
In passing, this patch corrects the names of some columns in the
information_schema.triggers view. It seems the SQL committee renamed
them somewhere between SQL:99 and SQL:2003.
Dean Rasheed, reviewed by Bernd Helmle; some additional hacking by me.
|
|
Various places were testing TRIGGER_FIRED_BEFORE() where what they really
meant was !TRIGGER_FIRED_AFTER(), or vice versa. This needs to be cleaned
up because there are about to be more than two possible states.
We might want to note this in the 9.1 release notes as something for
trigger authors to double-check.
For consistency's sake I also changed some places that assumed that
TRIGGER_FIRED_FOR_ROW and TRIGGER_FIRED_FOR_STATEMENT are necessarily
mutually exclusive; that's not in immediate danger of breaking, but
it's still sloppier than it should be.
Extracted from Dean Rasheed's patch for triggers on views. I'm committing
this separately since it's an identifiable separate issue, and is the
only reason for the patch to touch most of these particular files.
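A sketch of the distinction, using the TRIGGER_FIRED_* macros from commands/trigger.h: once a third timing state exists, "not AFTER" no longer implies "BEFORE", so each test has to say which case it means.

    #include "postgres.h"
    #include "commands/trigger.h"

    static bool
    row_already_stored(TriggerData *tdata)
    {
        /* sloppy once INSTEAD OF exists: !TRIGGER_FIRED_AFTER(...) */

        if (TRIGGER_FIRED_BEFORE(tdata->tg_event))
            return false;       /* row not written yet */
        if (TRIGGER_FIRED_INSTEAD(tdata->tg_event))
            return false;       /* no physical row at all */
        return true;            /* AFTER: row is in the table */
    }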
|
|
This is reproducibly possible in Python 2.7 if the user turned
PendingDeprecationWarning into an error, but it's theoretically also possible
in earlier versions in case of exceptional conditions.
Back-patched to 8.0.
|
|
(_PG_init should be called only once anyway, but as long as it's got an
internal guard against repeat calls, that should be in front of the
version check.)
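The ordering in sketch form; the version check is represented by a hypothetical helper:

    #include "postgres.h"
    #include "fmgr.h"

    PG_MODULE_MAGIC;

    void _PG_init(void);

    void
    _PG_init(void)
    {
        static bool inited = false;

        if (inited)             /* the repeat-call guard comes first ... */
            return;

        /* ... and only then the one-time checks */
        /* check_interpreter_version();   hypothetical */

        inited = true;
    }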
|
|
pg_pltemplate
This should have a catversion bump, but it's still being debated whether
it's worth it during beta.
|
|
conversion. Per bug #5497 from David Gardner.
|
|
Per report from Andres Freund.
|
|
memory if the result had zero rows, and also if there was any sort of error
while converting the result tuples into Python data. Reported and partially
fixed by Andres Freund.
Back-patch to all supported versions. Note: I haven't tested the 7.4 fix.
7.4's configure check for python is so obsolete it doesn't work on my
current machines :-(. The logic change is pretty straightforward though.
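A hedged sketch of the leak-avoidance shape implied above: release the SPI tuple table on the normal path and on the error path, so neither an empty result nor a conversion failure leaves it behind.

    #include "postgres.h"
    #include "executor/spi.h"

    static void
    consume_spi_result(void)
    {
        /* filled in by a preceding SPI_execute() */
        SPITupleTable *volatile tuptable = SPI_tuptable;

        PG_TRY();
        {
            /* ... build Python objects from tuptable->vals ... */
            SPI_freetuptable(tuptable);
        }
        PG_CATCH();
        {
            SPI_freetuptable(tuptable);
            PG_RE_THROW();
        }
        PG_END_TRY();
    }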
|
|
with a few strategically placed pg_verifymbstr calls.
|
|
In PLy_spi_execute_plan, use the data-type specific Python-to-PostgreSQL
conversion function instead of passing everything through InputFunctionCall
as a string. The equivalent fix was already done months ago for function
parameters and return values, but this other gateway between Python and
PostgreSQL was apparently forgotten. As a result, data types that need
special treatment, such as bytea, would misbehave when used with
plpy.execute.
|
|
old memory context in plpython. Before, only one of them was marked
volatile, but per report from Zdenek Kotala, some compilers do the
wrong thing here.
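The pattern involved, as a sketch: local variables referenced across PG_CATCH() are marked volatile so a longjmp cannot clobber them.

    #include "postgres.h"

    static void
    guarded_call(void)
    {
        MemoryContext volatile oldcontext = CurrentMemoryContext;
        ErrorData    *volatile edata = NULL;

        PG_TRY();
        {
            /* ... work that may elog(ERROR) ... */
        }
        PG_CATCH();
        {
            MemoryContextSwitchTo(oldcontext);
            edata = CopyErrorData();
            FlushErrorState();
        }
        PG_END_TRY();

        if (edata != NULL)
        {
            /* ... report or re-throw the saved error ... */
        }
    }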
|
|
The purpose of this change is to eliminate the need for every caller
of SearchSysCache, SearchSysCacheCopy, SearchSysCacheExists,
GetSysCacheOid, and SearchSysCacheList to know the maximum number
of allowable keys for a syscache entry (currently 4). This will
make it far easier to increase the maximum number of keys in a
future release should we choose to do so, and it makes the code
shorter, too.
Design and review by Tom Lane.
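In sketch form, the calling convention this change leads to: callers pass only the keys they actually use (SearchSysCache1 and friends) instead of padding the argument list out to the maximum.

    #include "postgres.h"
    #include "utils/syscache.h"

    static bool
    function_exists(Oid fn_oid)
    {
        HeapTuple tup = SearchSysCache1(PROCOID, ObjectIdGetDatum(fn_oid));

        if (!HeapTupleIsValid(tup))
            return false;
        ReleaseSysCache(tup);
        return true;
    }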
|
|
Also cleaned up some redundancies between the primary error messages and the
error context in PL/Python.
Hannu Valtonen
|
|
Mimic the Python interpreter's own logic for printing exceptions instead
of just using the straight str() call, so that
you get
plpy.SPIError
instead of
<class 'plpy.SPIError'>
and for built-in exceptions merely
UnicodeEncodeError
Besides looking better, this cuts down on the endless version differences
in the regression test expected files.
|
|
Behaves more or less unchanged compared to Python 2, but the new language
variant is called plpython3u. Documentation describing the naming scheme
is included.
|
|
Support arrays as parameters and return values of PL/Python functions.
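A hedged sketch of the input direction, where datum_to_python is a hypothetical per-element converter: a one-dimensional array is unpacked with deconstruct_array() and rebuilt as a Python list.

    #include <Python.h>
    #include "postgres.h"
    #include "utils/array.h"
    #include "utils/lsyscache.h"

    extern PyObject *datum_to_python(Datum d);      /* hypothetical */

    static PyObject *
    array_to_list(Datum d, Oid elemtype)
    {
        ArrayType  *array = DatumGetArrayTypeP(d);
        Datum      *elems;
        bool       *nulls;
        int         nelems;
        int16       elmlen;
        bool        elmbyval;
        char        elmalign;
        PyObject   *list;
        int         i;

        get_typlenbyvalalign(elemtype, &elmlen, &elmbyval, &elmalign);
        deconstruct_array(array, elemtype, elmlen, elmbyval, elmalign,
                          &elems, &nulls, &nelems);

        list = PyList_New(nelems);
        for (i = 0; i < nelems; i++)
        {
            if (nulls[i])
            {
                Py_INCREF(Py_None);
                PyList_SET_ITEM(list, i, Py_None);
            }
            else
                PyList_SET_ITEM(list, i, datum_to_python(elems[i]));
        }
        return list;
    }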
|
|
When the elog functions (plpy.info etc.) get a single argument, just print
that argument instead of printing the single-member tuple like ('foo',).
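Sketched in terms of the Python C API, with args being the usual argument tuple and the function name invented:

    #include <Python.h>

    static PyObject *
    message_to_print(PyObject *args)
    {
        if (PyTuple_Size(args) == 1)
            return PyObject_Str(PyTuple_GetItem(args, 0));   /* foo */
        return PyObject_Str(args);                           /* ('foo', 'bar') */
    }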
|
|
In PLy_output(), when the elog() call in the TRY branch throws an exception
(this can happen when a statement timeout kicks in, for example), the
PyErr_SetString() call in the CATCH branch can cause a segfault: the
Py_XDECREF(so) call before it releases memory that is still used by the sv
variable passed to PyErr_SetString(), because sv points into memory owned
by so.
Backpatched back to 8.0, where this code was introduced.
I also threw in a couple of volatile declarations for variables that are used
before and after the TRY. I don't think they caused the crash that I
observed, but they could become issues.
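The bug's shape in miniature (illustrative function; so is a Python string object whose buffer sv borrows):

    #include <Python.h>

    static void
    set_error_from(PyObject *so)
    {
        char *sv = PyString_AsString(so);   /* sv points into so's storage */

        /* wrong: Py_XDECREF(so) here could free the buffer sv still uses */
        PyErr_SetString(PyExc_RuntimeError, sv);
        Py_XDECREF(so);                     /* drop so only after the last use of sv */
    }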
|
|
Check calls of PyUnicode_AsEncodedString() for a NULL return, which
typically means the encoding name is not known. Add special treatment for
SQL_ASCII, which Python definitely does not know.
Since using SQL_ASCII produces errors in the regression tests when
non-ASCII characters are involved, we have to put back various regression
test result variants.
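A sketch of both points; the SQL_ASCII-to-ascii mapping shown is an assumption about the approach, not a quote of the patch:

    #include <Python.h>
    #include <string.h>

    static PyObject *
    encode_for_server(PyObject *unicode_obj, const char *server_encoding)
    {
        const char *codec = server_encoding;
        PyObject   *bytes;

        if (strcmp(server_encoding, "SQL_ASCII") == 0)
            codec = "ascii";                /* Python has no SQL_ASCII codec */

        bytes = PyUnicode_AsEncodedString(unicode_obj, codec, "strict");
        if (bytes == NULL)
        {
            /* unknown encoding or unencodable data: report the error
             * instead of dereferencing the NULL result */
        }
        return bytes;
    }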
|
|
PL/Python now accepts Unicode objects where it previously only accepted string
objects (for example, as return value). Unicode objects are converted to the
PostgreSQL server encoding as necessary.
This change is also necessary for future Python 3 support, which treats all
strings as Unicode objects.
Since this removes the error conditions that the plpython_unicode test file
tested for, the alternative result files are no longer necessary.
|
|
Before, PL/Python converted data between SQL and Python by going
through a C string representation. This broke for bytea in two ways:
- On input (function parameters), you would get a Python string that
contains bytea's particular external representation with backslashes
etc., instead of a sequence of bytes, which is what you would expect
in a Python environment. This problem is exacerbated by the new
bytea output format.
- On output (function return value), null bytes in the Python string
would cause truncation before the data gets stored into a bytea
datum.
This is now fixed by converting directly between the PostgreSQL datum
and the Python representation.
The required generalized infrastructure also allows for other
improvements in passing:
- When returning a boolean value, the SQL datum is now true if and
only if Python considers the value that was passed out of the
PL/Python function to be true. Previously, this determination was
left to the boolean data type input function. So, now returning
'foo' results in true, because Python considers it true, rather than
false because PostgreSQL considers it false.
- On input, we can convert the integer and float types directly to
their Python equivalents without having to go through an
intermediate string representation.
original patch by Caleb Welton, with updates by myself
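A hedged sketch of the output direction for bytea: copy straight from the Python buffer into a bytea datum, so embedded null bytes survive (a C-string round trip would stop at the first zero byte).

    #include <Python.h>
    #include "postgres.h"

    static Datum
    python_string_to_bytea(PyObject *obj)
    {
        char       *buf;
        Py_ssize_t  len;
        bytea      *result;

        if (PyString_AsStringAndSize(obj, &buf, &len) < 0)
            elog(ERROR, "could not extract bytes from Python object");

        result = (bytea *) palloc(VARHDRSZ + len);
        SET_VARSIZE(result, VARHDRSZ + len);
        memcpy(VARDATA(result), buf, len);
        return PointerGetDatum(result);
    }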
|
|
Extract the "while creating return value" and "while modifying trigger
row" parts of some error messages into another layer of error context.
This will simplify the upcoming patch to improve data type support, but
it can stand on its own.
|
|
Switch the implementation of the plan and result types to generic attribute
management, as described at <http://docs.python.org/extending/newtypes.html>.
This modernizes and simplifies the code a bit and prepares for Python 3.1,
where the old way doesn't work anymore.
|
|
When examining what Python type to convert a PostgreSQL type to on input,
look at the base type of the input type; otherwise all domains end up
defaulting to string.
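In sketch form, the lookup this describes; getBaseType() is the existing helper that follows a domain down to its base type:

    #include "postgres.h"
    #include "utils/lsyscache.h"

    static Oid
    type_for_conversion(Oid typoid)
    {
        /* domains resolve to their base type; plain types come back unchanged */
        return getBaseType(typoid);
    }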
|
|
Error messages from PL/Python now always mention the function name in the
CONTEXT: field. This also obsoletes the few places that tried to do the
same manually.
Regression test files are updated to work with Python 2.4-2.6. I don't have
access to older versions right now.
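How a CONTEXT: line is produced, as a sketch; the callback and its argument handling are illustrative, not plpython's actual code:

    #include "postgres.h"

    static void
    sketch_error_callback(void *arg)
    {
        errcontext("PL/Python function \"%s\"", (const char *) arg);
    }

    static void
    call_with_context(const char *funcname)
    {
        ErrorContextCallback ecb;

        ecb.callback = sketch_error_callback;
        ecb.arg = (void *) funcname;
        ecb.previous = error_context_stack;
        error_context_stack = &ecb;

        /* ... execute the function; any error reported in here now
         * carries the CONTEXT: line above ... */

        error_context_stack = ecb.previous;
    }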
|
|
provided by Andrew.
|
|
by extending the ereport() API to cater for pluralization directly. This
is better than the original method of calling ngettext outside the elog.c
code because (1) it avoids double translation, which wastes cycles and in
the worst case could give a wrong result; and (2) it avoids having to use
a different coding method in PL code than in the core backend. The
client-side uses of ngettext are not touched since neither of these concerns
is very pressing in the client environment. Per my proposal of yesterday.
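The resulting idiom, sketched here with errmsg_plural(), the in-ereport plural variant (message text invented):

    #include "postgres.h"

    static void
    report_processed(int nrows)
    {
        ereport(NOTICE,
                (errmsg_plural("%d row processed",
                               "%d rows processed",
                               nrows,
                               nrows)));
    }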
|
|
for its arguments. Also add a regression test, since someone apparently
changed every single plpython test case to use only named parameters; else
we'd have noticed this sooner.
Euler Taveira de Oliveira, per a report from Alvaro
|
|
In the backend, I changed only a handful of exemplary or important-looking
instances to make use of the plural support; there is probably more work
there. For the rest of the source, this should cover all relevant cases.
|
|
PLy_exception_set, and clarify some error messages.
|
|
to the gettext domain name, to simplify parallel installations.
Also, rename set_text_domain() to pg_bindtextdomain(), because that is what
it does.
|
|
the proc->argnames array has to be initialized to zero immediately on creation,
since the error recovery path will try to free its elements.
|
|
and heap_deformtuple in favor of the newer functions heap_form_tuple et al
(which do the same things but use bool control flags instead of arbitrary
char values). Eliminate the former duplicate coding of these functions,
reducing the deprecated functions to mere wrappers around the newer ones.
We can't get rid of them entirely because add-on modules probably still
contain many instances of the old coding style.
Kris Jurka
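The calling-convention difference, sketched with a one-column tuple; the descriptor comes from the caller:

    #include "postgres.h"
    #include "access/htup.h"

    static HeapTuple
    build_one_column_tuple(TupleDesc tupdesc, Datum val)
    {
        Datum values[1];
        bool  isnull[1];

        values[0] = val;
        isnull[0] = false;      /* heap_formtuple wanted the char ' ' here */

        return heap_form_tuple(tupdesc, values, isnull);
    }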
|
|
the ereport macro. Included in this commit are enough files for starting
plpgsql, plpython, plperl and pltcl translations.
|
|
(Unlike the original submission, this patch treats TABLE output parameters
as being entirely equivalent to OUT parameters -- tgl)
Pavel Stehule
|
|
so long as all the trailing arguments are of the same (non-array) type.
The function receives them as a single array argument (which is why they
have to all be the same type).
It might be useful to extend this facility to aggregates, but this patch
doesn't do that.
This patch imposes a noticeable slowdown on function lookup --- a follow-on
patch will fix that by adding a redundant column to pg_proc.
Pavel Stehule
|
|
unnecessary #include lines in it. Also, move some tuple routine prototypes and
macros to htup.h, which allows removal of heapam.h inclusion from some .c
files.
For this to work, a new header file access/sysattr.h needed to be created,
initially containing attribute numbers of system columns, for pg_dump usage.
While at it, make contrib ltree, intarray and hstore header files more
consistent with our header style.
|