- json_get_type/json_type
- discussion.
- The SearchSysCacheList1 macro was introduced in PostgreSQL 9.
  Changed a couple of tests to cope with a formatting discrepancy
  introduced by PostgreSQL 9 (namely, + characters between newlines).
- json.html is currently a hand-edited version of the file produced by
  running `make html` in the PostgreSQL source tree with json.sgml
  added to doc/src/sgml/ and a corresponding entry added to
  doc/src/sgml/filelist.sgml . Specifically, I removed links and section
  numbers, and added "♫" by hand because openjade wasn't being
  cooperative with respect to Unicode.
- "\uD840\uDC00".
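For context on the escape above: "\uD840\uDC00" is a UTF-16 surrogate pair encoding U+20000, the first codepoint outside the Basic Multilingual Plane. A standalone sketch of the standard decoding formula (an illustration, not the extension's actual code):

```c
#include <stdint.h>

/* Combine a UTF-16 surrogate pair, as found in JSON \uXXXX\uXXXX
 * escapes, into a single Unicode codepoint.  hi must be in
 * 0xD800..0xDBFF and lo in 0xDC00..0xDFFF; the caller validates that.
 * "\uD840\uDC00" decodes to U+20000. */
uint32_t decode_surrogate_pair(uint32_t hi, uint32_t lo)
{
    return 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00);
}
```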
- Also made json_delete a little less confusing (but it still uses gotos).
- enumLabelToOid merely looked up a single enum entry, while
  getEnumLabelOids looks up all of them in bulk.
- Also touched up documentation for FN_EXTRA a bit.
- Requiring decoded escapes to not be 0xFFFE or 0xFFFF is overzealous,
  I think. In any case, this isn't even a comprehensive list of the
  codepoints considered "invalid".
  Also removed the utf8_encode_char helper function, as it was extremely
  trivial and used in only one place.
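On the "not even a comprehensive list" point: Unicode defines 66 noncharacters, namely U+FDD0..U+FDEF plus the last two codepoints of every plane. A standalone sketch of the full check (illustrative only, not the extension's code):

```c
#include <stdint.h>
#include <stdbool.h>

/* True if cp is one of Unicode's 66 noncharacters:
 * U+FDD0..U+FDEF, plus U+FFFE/U+FFFF and their counterparts at the
 * end of every other plane (U+1FFFE, U+1FFFF, ..., U+10FFFF). */
bool is_noncharacter(uint32_t cp)
{
    if (cp >= 0xFDD0 && cp <= 0xFDEF)
        return true;
    /* The last two codepoints of each plane all share the low bits
     * 0xFFFE/0xFFFF, so masking picks them out in one test. */
    return (cp & 0xFFFE) == 0xFFFE;
}
```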
- Note that this is currently untested with server encodings other than
  UTF-8.
  The encoding policy used is: JSON nodes and most of the JSON functions
  still operate in UTF-8. Strings are converted between the server
  encoding and UTF-8 when they go in and out of varlena (text*), and a
  set of helper functions is implemented to make these conversions
  simple to apply.
  It is done this way because converting individual codepoints to/from
  whatever the server encoding may be is nontrivial (it possibly
  requires a loaded module), and the JSON code needs to encode/decode
  codepoints when it deals with escapes.
  Although a more clever and efficient solution might be to defer
  charset conversions until they're necessary (e.g. round up all the
  escapes and encode them all at once), that is not simple, and it's
  probably not much more efficient, either. Conversions between the
  server encoding and UTF-8 are no-ops when the server encoding is
  UTF-8, anyway.
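The codepoint encoding that the escape handling needs can be illustrated with a plain UTF-8 encoder. This is a self-contained sketch of the general technique, not the extension's helper functions:

```c
#include <stdint.h>

/* Encode a Unicode codepoint as UTF-8.  Writes 1..4 bytes to out and
 * returns the byte count, or 0 if cp is beyond U+10FFFF.  This is the
 * kind of per-codepoint work a \uXXXX escape decoder has to do. */
int utf8_encode(uint32_t cp, unsigned char *out)
{
    if (cp < 0x80) {                      /* 1 byte: ASCII */
        out[0] = (unsigned char) cp;
        return 1;
    } else if (cp < 0x800) {              /* 2 bytes */
        out[0] = 0xC0 | (cp >> 6);
        out[1] = 0x80 | (cp & 0x3F);
        return 2;
    } else if (cp < 0x10000) {            /* 3 bytes */
        out[0] = 0xE0 | (cp >> 12);
        out[1] = 0x80 | ((cp >> 6) & 0x3F);
        out[2] = 0x80 | (cp & 0x3F);
        return 3;
    } else if (cp <= 0x10FFFF) {          /* 4 bytes: supplementary planes */
        out[0] = 0xF0 | (cp >> 18);
        out[1] = 0x80 | ((cp >> 12) & 0x3F);
        out[2] = 0x80 | ((cp >> 6) & 0x3F);
        out[3] = 0x80 | (cp & 0x3F);
        return 4;
    }
    return 0;
}
```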
- PostgreSQL's pg_wchar.h routines.
  * Touched up various functions' documentation.
  json_nodes are currently encoded in UTF-8, and the JSON module is not
  100% compatible with arbitrary server encodings yet. I plan to switch
  from UTF-8 to the server encoding pretty soon, after which JSON should
  be a well-behaved datatype as far as charsets go.
- * Removed json_cleanup and json_validate_liberal.
    json_cleanup was badly in need of a rewrite.
- * A few assorted cleanups.
- pg_indent again).
- * char() method added to JSONPath for extracting chars from strings.
    Although index subscripts (those using an integer) extract characters
    from strings in Stefan Goessner's JSONPath, and although it works
    that way in JavaScript, I believe it is rather illogical and
    unexpected in the context of JSONPath, and a poor use of the []
    real estate.
    Because extracting characters from strings can still be useful, I
    have added a char() method for this. I implemented it now to prevent
    the supporting code for character extraction from wasting away.
- The tests no longer take up 2.4 megabytes, and they'll be easier to
  update whenever semantics are changed (e.g. switching json_path '$..*'
  from matching breadth-first to depth-first).
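To make the breadth-first/depth-first distinction concrete, here is a standalone sketch (hypothetical node layout, not the extension's data structures) showing how the two orders differ on a small tree:

```c
#include <stddef.h>

/* A small fixed tree to illustrate the '$..*' ordering question:
 *        0
 *       / \
 *      1   2
 *     / \
 *    3   4
 * Each node has up to two children; -1 means no child. */
enum { NNODES = 5 };
static const int children[NNODES][2] = {
    {1, 2}, {3, 4}, {-1, -1}, {-1, -1}, {-1, -1}
};

/* Depth-first (preorder): a node, then its whole subtree. */
static void dfs_rec(int node, int *out, int *n)
{
    out[(*n)++] = node;
    for (int i = 0; i < 2; i++)
        if (children[node][i] >= 0)
            dfs_rec(children[node][i], out, n);
}

int dfs_order(int *out)
{
    int n = 0;
    dfs_rec(0, out, &n);
    return n;               /* visits 0, 1, 3, 4, 2 */
}

/* Breadth-first: whole levels at a time, via a simple array queue. */
int bfs_order(int *out)
{
    int queue[NNODES], head = 0, tail = 0, n = 0;
    queue[tail++] = 0;
    while (head < tail) {
        int node = queue[head++];
        out[n++] = node;
        for (int i = 0; i < 2; i++)
            if (children[node][i] >= 0)
                queue[tail++] = children[node][i];
    }
    return n;               /* visits 0, 1, 2, 3, 4 */
}
```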
- I wrote an ad-hoc script in Haskell to generate SGML for the function
  table, and it is now in the repository (doc-to-sgml.hs) along with the
  corresponding input file (documentation).
- * Reworked json_node::orig so it can handle replacing the key or value
    without altering surrounding formatting.
  * json_encode (the C function) is now recursive (its stack usage
    depends on the depth of the input). This was done because
    json_encode cannot rely on nodes' ->parent fields being trustworthy,
    as json_replace_value assigns JSON nodes by reference.
- Although automatic from_json may be convenient in some cases, it would
  require introducing an extra pair of functions (e.g. json_get_json and
  json_set_json) that work with JSON rather than converted JSON. Unless
  from_json(json_get(...)) turns out to be by far the most common usage
  pattern, I think knocking out the conversion wrapping here makes a lot
  more sense.
- functions that have one.
- JPRef is a structure representing an item matched by jp_match.
  Before, plain old json_node was used, which couldn't distinguish
  between mutable references (actual JSON nodes from the original input)
  and immutable references (e.g. characters in a string).
  Yes, characters in a string are regarded as immutable because:
  * That's how it is in JavaScript.
  * It's easier to implement.
- parse_json_path .
- * Affirmed (with a trivial test) that json_path returns its results
    breadth-first.
- * Removed the global variable json_escape_unicode and added a
    parameter for it to the json_encode and json_encode_string C
    functions.
- * Migrated test strings from json-0.0.2
  * Added struct {...} orig; field to struct json_node, but it's not
    used yet. It will be used to hold pointers to original text.
- Currently, it re-encodes JSON nodes rather than preserving original
  text. This is subject to change.
- Added PGXS support to Makefile
- This repository should probably be an honest-to-goodness branch of
  mainline PostgreSQL. I'm looking into that :-)
- its type name and value string.
- rather ugly, but appears to work (and has been testcased).
  * Reordered the json_type enum in json.h to match json_type_t in
    json.sql.in .
- The code could use another pair of eyes.