Inconsistent casting behavior in array-scalar vs array-array multiplication #5297

@idfah

Description

Say we have an array of single precision floats.

>>> a = np.array((1,2,3), dtype=np.float32)
>>> a.dtype
dtype('float32')

If we multiply this array by a 64-bit int scalar, the result still consists of single precision floats.

>>> (a*1L).dtype
dtype('float32')

This is also true if we multiply by a 0-d numpy array containing a 64-bit int.

>>> (a*np.array(1L)).dtype
dtype('float32')

However, if we multiply by a 1-d (or n-d) array of 64-bit ints, we now get double precision floats.

>>> (a*np.array((1L,))).dtype
dtype('float64')
>>> (a*np.ones((1,))).dtype
dtype('float64')
>>> (a*np.ones(a.shape)).dtype
dtype('float64')

I can see an argument for either behavior (since the int has 64 bits, perhaps the result should too), but it seems like it should be consistent either way.
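The asymmetry can be observed directly with `np.result_type`, which reports the dtype that NumPy's promotion rules would produce. A minimal sketch (the exact rules for *mixed* int scalars vs. int arrays have varied across NumPy versions, but the scalar/array split shown here holds in both the legacy value-based rules and the newer NEP 50 rules):

```python
import numpy as np

a = np.ones(3, dtype=np.float32)

# A plain Python int scalar does not force an upcast:
# the float32 dtype of the array wins.
print(np.result_type(a, 1))                            # float32

# A 1-d int64 array triggers full dtype-based promotion,
# so float32 is upcast to float64.
print(np.result_type(a, np.ones(1, dtype=np.int64)))   # float64
```

This makes it easy to check what a given NumPy version will do before performing the actual multiplication.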

I stumbled across this when I discovered that np.meshgrid changes the dtype of single precision float arrays (perhaps a separate issue?), because it multiplies by an array of np.ones unless copy=False is passed.

>>> ax, ay = np.meshgrid(a,a)
>>> ax.dtype
dtype('float64')

>>> ax, ay = np.meshgrid(a,a, copy=False)
>>> ax.dtype
dtype('float32')
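As a workaround for the meshgrid case, passing copy=False avoids the multiplication by a float64 ones array, so the input dtype survives; an explicit copy can then be taken afterwards if writable arrays are needed. A minimal sketch:

```python
import numpy as np

a = np.array((1, 2, 3), dtype=np.float32)

# copy=False returns broadcast views of the inputs, so the
# float32 dtype is preserved.
ax, ay = np.meshgrid(a, a, copy=False)
print(ax.dtype)  # float32

# The views are read-only; take an explicit copy if the grids
# need to be modified in place.
ax = ax.copy()
ay = ay.copy()
```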
