np.array([0.0, 1.0, 2.0], ndmin=2) fails on big endian system #14767

@mwhudson

Description

Reproducing code example:

import numpy as np
np.array([0.0, 1.0, 2.0], ndmin=2) 

on a big-endian system, e.g. Ubuntu on s390x (which I realise is not the most accessible of platforms).

Error message:

ValueError: ndmin bigger than allowable number of dimensions NPY_MAXDIMS (=32)

Numpy/Python version information:

1.17.3

What's going on? Well, e256762 changed the ndmin local variable from an int to an npy_intp. On 64-bit Linux, int is 32 bits while npy_intp (aka intptr_t) is 64 bits. &ndmin is passed to PyArg_ParseTupleAndKeywords against an "i" format spec ... and you can see where this is going. Writing through an npy_intp* as though it were an int* happens to work on little endian, but very much does not on big endian: the 32-bit write lands in the high-order bytes, so the stored value comes out shifted left by 32 bits. There is no PyArg_ParseTupleAndKeywords format code for intptr_t, so the best fix is probably to change ndmin back to an int and be a little careful in the fast path that ndmin = PyLong_AsLong(ndmin_obj) does not overflow.
