Pandas version checks
- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pandas.
- I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
In [1]: df = pd.DataFrame({'ts': [pd.Timestamp('20230101 00:00:00+00:00')]})
[PYFLYBY] import pandas as pd
In [2]: df.to_parquet("~/tmp/test.parquet")
In [3]: with pd.option_context("mode.dtype_backend", "pyarrow"):
...: df2 = pd.read_parquet("~/tmp/test.parquet")
...:
In [4]: df2
Out[4]:
ts
0 2023-01-01 00:00:00+00:00
In [5]: df2.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1 entries, 0 to 0
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ts 1 non-null timestamp[us, tz=UTC][pyarrow]
dtypes: timestamp[us, tz=UTC][pyarrow](1)
memory usage: 137.0 bytes
In [6]: df2.ts.dt.tz_localize(None)
Out[6]:
0 2023-01-01 00:00:00
Name: ts, dtype: timestamp[us][pyarrow]
In [7]: df2.ts.dt.tz_convert('America/Chicago')
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In [7], line 1
----> 1 df2.ts.dt.tz_convert('America/Chicago')
AttributeError: 'ArrowTemporalProperties' object has no attribute 'tz_convert'
Issue Description
I think #50954 might have missed adding an implementation for tz_convert to ArrowTemporalProperties (maybe this was by design?). With mode.dtype_backend='pyarrow', our existing dataframes are being loaded as timestamp[us, tz=UTC][pyarrow], but any calls to tz_convert fail.
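For context, Arrow stores tz-aware timestamps as UTC instants, so a native tz_convert could amount to a metadata-only cast. A minimal sketch of that idea using pyarrow directly (illustrative only, not the actual ArrowTemporalProperties code):

import pyarrow as pa
from datetime import datetime, timezone

# Casting to the same unit with a different tz changes only the timezone
# metadata; the stored UTC instants are unchanged, which matches the
# semantics of tz_convert.
arr = pa.array(
    [datetime(2023, 1, 1, tzinfo=timezone.utc)],
    type=pa.timestamp("us", tz="UTC"),
)
converted = arr.cast(pa.timestamp("us", tz="America/Chicago"))
print(converted.type)  # timestamp[us, tz=America/Chicago]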
Expected Behavior
Ideally we can implement tz_convert natively so that it's fast.
If not, I think I'd prefer that tz_convert be implemented as a fallback: convert to a NumPy-backed datetime dtype, call tz_convert there, then convert back to the PyArrow backend (assuming the conversion is lossless), so that existing APIs don't break when the dtype_backend changes. A warning might be appropriate there, so users know why the call might be slow, though I think it should still be pretty quick.
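For illustration, a user-level round trip along those lines might look like the sketch below, using df2 from the example above; it assumes the astype conversions between the Arrow-backed timestamp dtype and the NumPy-backed tz-aware dtype are supported and lossless here (both are microsecond resolution):

import pandas as pd
import pyarrow as pa

# Sketch of the proposed fallback: Arrow-backed -> NumPy-backed tz-aware
# dtype, convert the timezone there, then cast back to an ArrowDtype.
ts = df2["ts"]
np_backed = ts.astype(pd.DatetimeTZDtype(unit="us", tz="UTC"))
converted = np_backed.dt.tz_convert("America/Chicago")
arrow_backed = converted.astype(pd.ArrowDtype(pa.timestamp("us", tz="America/Chicago")))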
Installed Versions
INSTALLED VERSIONS
commit : 6bb8f73
python : 3.10.4.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.90.1-microsoft-standard-WSL2
Version : #1 SMP Fri Jan 27 02:56:13 UTC 2023
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.1.0.dev0+93.g6bb8f73e75
numpy : 1.23.3
pytz : 2022.2.1
dateutil : 2.8.2
setuptools : 58.1.0
pip : 22.3
Cython : 0.29.32
pytest : 7.1.3
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.9.1
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : 8.5.0
pandas_datareader: 0.10.0
bs4 : 4.11.1
bottleneck : 1.3.6
brotli : None
fastparquet : 0.8.3
fsspec : 2022.8.2
gcsfs : None
matplotlib : 3.5.3
numba : 0.56.4
numexpr : 2.8.4
odfpy : None
openpyxl : 3.0.10
pandas_gbq : None
pyarrow : 9.0.0
pyreadstat : None
pyxlsb : None
s3fs : 2022.8.2
scipy : 1.9.1
snappy : None
sqlalchemy : 1.4.41
tables : 3.8.0
tabulate : None
xarray : 2023.2.0
xlrd : 2.0.1
zstandard : 0.18.0
tzdata : None
qtpy : 2.3.0
pyqt5 : None