Week 5
When the entire file is numeric, the data will be stored as a homogeneous array using one of the numeric
data types, typically float64. In this example, the first column contains Excel dates as numbers, which are the
number of days past January 1, 1900.
>>> csv_data = read_csv('FTSE_1984_2012_numeric.csv')
>>> csv_data = csv_data.to_numpy()
>>> csv_data[:4,:2]
array([[40954. ,  5899.9],
       [40953. ,  5905.7],
       [40952. ,  5852.4],
       [40949. ,  5895.5]])
• Use another program, such as Microsoft Excel, to manipulate data before importing.
• Dates should be converted to YYYYMMDD, a numeric format, before importing. This can be done in
Excel using the formula:
=10000*YEAR(A1)+100*MONTH(A1)+DAY(A1)+(A1-FLOOR(A1,1))
• Store times separately from dates using a numeric format such as seconds past midnight or HHmmSS.sss.
loadtxt
loadtxt is a simple, fast text importer. The basic use is loadtxt(filename), which attempts to load the
data in filename as floats. Other useful keyword arguments include delimiter, which allows the file delimiter
to be specified, and skiprows, which allows one or more rows at the start of the file to be skipped.
loadtxt requires the data to be numeric and so is only useful for the simplest files.
>>> data = loadtxt('FTSE_1984_2012.csv',delimiter=',') # Error
ValueError: could not convert string to float: Date
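When the file contains only numbers, loadtxt works directly. A minimal sketch, assuming the all-numeric file
used earlier has a single header row of column names that skiprows removes:
>>> data = loadtxt('FTSE_1984_2012_numeric.csv',delimiter=',',skiprows=1)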
genfromtxt
genfromtxt is a slightly slower, more robust importer. genfromtxt is called using the same syntax as loadtxt,
but will not fail if a non-numeric type is encountered. Instead, genfromtxt will return a NaN (not-a-number)
for fields in the file it cannot read.
>>> data = genfromtxt('FTSE_1984_2012.csv',delimiter=',')
>>> data[0]
array([ nan, nan, nan, nan, nan, nan, nan])
>>> data[1]
array([nan, 5.89990000e+03, 5.92380000e+03, 5.88060000e+03,
       5.89220000e+03, 8.01550000e+08, 5.89220000e+03])
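The all-NaN first row is the header. genfromtxt's skip_header argument drops it, although the date column
remains NaN since it is non-numeric; a minimal sketch:
>>> data = genfromtxt('FTSE_1984_2012.csv',delimiter=',',skip_header=1)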
csv2rec
csv2rec has been removed from matplotlib; pandas is now the preferred method for importing CSV data.
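A minimal pandas sketch for the same file; read_csv parses the header row automatically, and to_numpy converts
the result when a plain array is required:
import pandas as pd

csv_data = pd.read_csv('FTSE_1984_2012.csv')
data = csv_data.to_numpy()  # drop the DataFrame wrapper if an ndarray is needed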
xlrd
import xlrd

wb = xlrd.open_workbook('FTSE_1984_2012.xls')
# To read xlsx change the filename
# wb = xlrd.open_workbook('FTSE_1984_2012.xlsx')
sheetNames = wb.sheet_names()
# Assumes 1 sheet name
sheet = wb.sheet_by_name(sheetNames[0])
excelData = []  # List to hold data
for i in range(sheet.nrows):
    excelData.append(sheet.row_values(i))
The listing does a few things. First, it opens the workbook for reading (open_workbook), then it gets the
sheet names (wb.sheet_names()) and opens a sheet (wb.sheet_by_name, using the first sheet name in the file,
sheetNames[0]). From the sheet, it gets the number of rows (sheet.nrows) and fills a list with the cell values,
row-by-row. Once the data has been read in, a final block fills an array with the opening prices. This is
substantially more complicated than importing from a CSV file, although reading Excel files directly is useful
for automated work (e.g. when the data is produced by other software and there is no choice but to import from
an Excel file).
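The final block referenced above is not shown in the listing; a minimal sketch, assuming a header row and that
the open price sits in the second column (index 1 is used purely for illustration):
from numpy import array

# Hypothetical column position; adjust to match the actual layout of the sheet
open_prices = array([row[1] for row in excelData[1:]], dtype=float)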
openpyxl
openpyxl reads and writes the modern Excel file format (.xlsx) that is the default in Office 2007 or later.
openpyxl also provides a reader and a writer optimized for large files, a feature not available in xlrd.
Unfortunately, openpyxl uses a different syntax from xlrd, and so some modifications are required when using
openpyxl.
import openpyxl
wb = openpyxl.load_workbook('FTSE_1984_2012.xlsx')
sheetNames = wb.sheetnames
# Assumes 1 sheet name
sheet = wb[sheetNames[0]]
rows = sheet.rows
The strategy with 2007/2010/2013 xlsx files is essentially the same as with 97/2003 files. The main difference is
that sheet.rows returns all of the rows in the selected sheet. Each row is itself a tuple containing Cells (a
type created by openpyxl), and each cell has a value (Cells also have other useful attributes such as data_type
and is_date).
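A small sketch of reading from the Cell objects produced by sheet.rows:
# Take the first row from the rows iterator and inspect its first cell
first_row = next(iter(rows))
print(first_row[0].value)      # the stored value
print(first_row[0].data_type)  # 'n' for numeric, 's' for string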
Using the optimized reader is similar. The primary differences are:
• The number of rows is not known, and so it isn’t possible to pre-allocate the storage variable with the
correct number of rows.
import openpyxl

# read_only=True enables the optimized (streaming) reader
wb = openpyxl.load_workbook('FTSE_1984_2012.xlsx', read_only=True)
sheetNames = wb.sheetnames
# Assumes 1 sheet name
sheet = wb[sheetNames[0]]
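Because the rows are produced lazily in this mode, the natural pattern is to append values as they arrive and to
close the workbook when finished; a short sketch:
excelData = []  # cannot be pre-allocated since the number of rows is unknown
for row in sheet.rows:
    excelData.append([cell.value for cell in row])
wb.close()  # read-only workbooks keep the underlying file open until closed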
MATLAB stores data in a standard format known as Hierarchical Data Format, or HDF. HDF is a generic storage
technology that provides fast access to stored data as well as on-the-fly compression. Starting in MATLAB
7.3, mat files are stored using HDF version 5 (HDF5), and so can be read using the PyTables package.
>>> import tables
>>> matfile = tables.open_file('FTSE_1984_2012_v73.mat')
>>> matfile.root
/ (RootGroup) ''
  children := ['volume' (CArray), 'high' (CArray), 'adjclose' (CArray), 'low' (CArray),
               'close' (CArray), 'open' (CArray)]
>>> matfile.root.open
/open (CArray(1, 7042), zlib(3)) ''
  atom := Float64Atom(shape=(), dflt=0.0)
  maindim := 0
  flavor := 'numpy'
  byteorder := 'little'
  chunkshape := (1, 7042)
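The stored arrays behave like on-disk NumPy arrays; a minimal sketch of pulling the open prices into memory
(the 1-by-7042 shape follows the MATLAB layout shown above):
>>> open_prices = matfile.root.open.read().ravel()  # read() returns a NumPy array
>>> matfile.close()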
SciPy enables legacy MATLAB data files (mat files) to be read. The most recent file format, V7.3, is not
supported but can be read using PyTables or h5py. Data from compatible mat files can be loaded using loadmat.
The data is loaded into a dictionary, and individual variables are accessed using the keys of the dictionary.
>>> import scipy.io as sio
>>> mat_data = sio.loadmat('FTSE_1984_2012.mat')
>>> type(mat_data)
dict
>>> mat_data.keys()
['volume',
'__header__',
'__globals__',
'high',
'adjclose',
'low',
'close',
'__version__',
'open']
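Individual series are then NumPy arrays keyed by their MATLAB variable names, for example:
>>> open_prices = mat_data['open']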
f = open('IBM_TAQ.txt', 'r')
line = f.readline()
# Burn the first line as a header
line = f.readline()
date = []
time = []
price = []
volume = []
while line:
    data = line.split(',')
    date.append(int(data[1]))
    price.append(float(data[3]))
    volume.append(int(data[4]))
    t = data[2]
    time.append(int(t.replace(':','')))
    line = f.readline()
f.close()
• Reads the remaining lines, parsing each by the location of the commas, using split(',') to split the line at
each comma into a list
• Uses replace(':','') to remove colons from the times
• Uses int() and float() to convert strings to numbers
• Closes the file directly using close()
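Once the loop finishes, the Python lists can be converted to NumPy arrays for further work; a short sketch:
from numpy import array

date = array(date)
time = array(time)
price = array(price)
volume = array(volume)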
savez_compressed('test',x=x,otherData=y)
data = load('test.npz')
# x=x provides the name x for the data in x
x = data['x']
# otherData=y saves the data in y as otherData
y = data['otherData']
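If it is unclear which arrays an .npz archive contains, the object returned by load lists the stored names; a
small check along these lines:
print(data.files)  # names saved in the archive, e.g. ['x', 'otherData']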
A version which does not compress data but is otherwise identical is savez. Compression is usually a good
idea and is particularly helpful for large arrays that contain many repeated values.
from numpy import array, zeros
import scipy.io as sio

x = array([1.0,2.0,3.0])
y = zeros((10,10))
# Set up the dictionary
saveData = {'x':x, 'y':y}
sio.savemat('test',saveData,do_compression=True)
# Read the data back in
mat_data = sio.loadmat('test.mat')
savemat uses the optional argument do_compression = True, which compresses the data, and is generally a
good idea on modern computers and/or for large datasets.
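One detail worth noting is that loadmat follows the MATLAB convention of returning 2-dimensional arrays, so the
1-dimensional x saved above comes back with shape (1, 3); a quick sketch of a round-trip check:
print(mat_data['x'].shape)         # (1, 3) rather than the original (3,)
print((mat_data['x'] == x).all())  # broadcasting makes the comparison work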
8.5 Exercises
Note: There are no exercises using pandas in this chapter. For exercises using pandas to read or write data,
see Chapter 16.
1. The file exercise3.xls contains three columns of data, the date, the return on the S&P 500, and the return
on XOM (ExxonMobil). Using Excel, convert the date to YYYYMMDD format and save the file.
2. Save the file as both CSV and tab delimited. Use the three text readers to read the file, and compare the
arrays returned.
3. Parse the loaded data into three variables: dates, SP500 and XOM.
4. Save NumPy, compressed NumPy and MATLAB data files containing all three variables. Which file is the
smallest?
5. Construct a new variable, sumreturns, as the sum of SP500 and XOM. Create another new variable,
outputdata, as the horizontal concatenation of dates and sumreturns.