Clarify the role of utf-8 #181

Open
akhmerov opened this issue May 29, 2020 · 8 comments

Comments

@akhmerov
Member

In several places nbformat seems to choose utf-8 as the default encoding (in particular when read or write get filenames as input).

Does that mean that notebook files must be utf-8 encoded? If so, it would be worth stating that explicitly.

See jupyter/jupyter-sphinx#125 for an example context where this matters.
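A minimal sketch of one way to keep the encoding explicit when going through nbformat's `read`/`write`, by passing already-opened file objects rather than filenames (the filename `example.ipynb` is just a placeholder):

```python
import nbformat

# Open with an explicit encoding instead of relying on whatever
# default nbformat (or the platform) would otherwise pick.
with open("example.ipynb", encoding="utf-8") as f:
    nb = nbformat.read(f, as_version=4)

# Write it back out, again forcing UTF-8 explicitly.
with open("example.ipynb", "w", encoding="utf-8") as f:
    nbformat.write(nb, f)
```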

@MSeal
Contributor

MSeal commented May 29, 2020

Yes, most of the toolchains around ipynb files assume utf-8 encoding. That's a good point that it isn't documented.

@takluyver
Member

The spec for JSON essentially says it's stored in UTF-8 unless a specific 'closed ecosystem' needs something different:

JSON text exchanged between systems that are not part of a closed ecosystem MUST be encoded using UTF-8

But it doesn't hurt to make it explicit.
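For what it's worth, the stdlib `json` deserializer already follows that part of the spec: it auto-detects UTF-8/16/32 when handed bytes, so UTF-8 round-trips without extra ceremony. A tiny sketch with a made-up payload:

```python
import json

# RFC 8259: JSON exchanged between systems should be UTF-8.
# json.loads() accepts UTF-8-encoded bytes directly and detects the encoding.
payload = '{"cell_type": "markdown", "source": "café"}'.encode("utf-8")
assert json.loads(payload)["source"] == "café"
```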

@mathograham

mathograham commented Sep 21, 2021

Hi, I'd like to work on stating explicitly that UTF-8 encoding must be used. Where should this information go, in the nbformat documentation? This would be my first code contribution, so please let me know if there is anything else I should consider. Thanks!

@akhmerov
Member Author

Since it's a statement about the overall format, I'd say it belongs close to the statement that the file is JSON, so perhaps around here: https://nbformat.readthedocs.io/en/latest/format_description.html

That's my take; the maintainers might have a different opinion, though.

@westurner
Contributor

To quote in full from the stdlib json module docs: https://docs.python.org/3/library/json.html#character-encodings 👍

Standard Compliance and Interoperability
----------------------------------------

The JSON format is specified by :rfc:`7159` and by
`ECMA-404 <http://www.ecma-international.org/publications/standards/Ecma-404.htm>`_.
This section details this module's level of compliance with the RFC.
For simplicity, :class:`JSONEncoder` and :class:`JSONDecoder` subclasses, and
parameters other than those explicitly mentioned, are not considered.

This module does not comply with the RFC in a strict fashion, implementing some
extensions that are valid JavaScript but not valid JSON.  In particular:

- Infinite and NaN number values are accepted and output;
- Repeated names within an object are accepted, and only the value of the last
  name-value pair is used.

Since the RFC permits RFC-compliant parsers to accept input texts that are not
RFC-compliant, this module's deserializer is technically RFC-compliant under
default settings.

Character Encodings
^^^^^^^^^^^^^^^^^^^

The RFC requires that JSON be represented using either UTF-8, UTF-16, or
UTF-32, with UTF-8 being the recommended default for maximum interoperability.

As permitted, though not required, by the RFC, this module's serializer sets
*ensure_ascii=True* by default, thus escaping the output so that the resulting
strings only contain ASCII characters.

Other than the *ensure_ascii* parameter, this module is defined strictly in
terms of conversion between Python objects and
:class:`Unicode strings <str>`, and thus does not otherwise directly address
the issue of character encodings.

The RFC prohibits adding a byte order mark (BOM) to the start of a JSON text,
and this module's serializer does not add a BOM to its output.
The RFC permits, but does not require, JSON deserializers to ignore an initial
BOM in their input.  This module's deserializer raises a :exc:`ValueError`
when an initial BOM is present.

The RFC does not explicitly forbid JSON strings which contain byte sequences
that don't correspond to valid Unicode characters (e.g. unpaired UTF-16
surrogates), but it does note that they may cause interoperability problems.
By default, this module accepts and outputs (when present in the original
:class:`str`) code points for such sequences.
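A short sketch of the two behaviours quoted above that matter most here, `ensure_ascii` and the BOM check, using a throwaway payload:

```python
import json

data = {"source": "héllo"}

# Default: non-ASCII characters are escaped, so the serialized text is
# pure ASCII regardless of the file encoding used afterwards.
print(json.dumps(data))                      # {"source": "h\u00e9llo"}

# With ensure_ascii=False the characters are kept and must survive
# whatever encoding the file is written with (hence UTF-8).
print(json.dumps(data, ensure_ascii=False))  # {"source": "héllo"}

# The deserializer rejects an initial BOM, as the docs quoted above state.
try:
    json.loads("\ufeff" + json.dumps(data))
except json.JSONDecodeError:
    print("initial BOM rejected")
```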

Are you suggesting that the nbformat spec needs to say:

  • UTF-8 only
  • No BOM (byte order mark)
  • ensure_ascii=yes|no

What use cases would this unnecessarily impede?

Is this solving for an actual current problem?

@sls1005

sls1005 commented Nov 12, 2023

I suggest stating that some particular encoding (like UTF-8) is preferred or recommended, or is the default.

@westurner
Contributor

westurner commented Nov 13, 2023 via email

@sls1005

sls1005 commented Mar 9, 2024

So that people know better how to generate or decode this kind of file.

It won't affect the files themselves, only the people who use or will use them.
