Linear Algebra in Python
import numpy as np
import numpy.linalg as la
In [2]:
x = .5
print(x)
0.5
The numpy library (we will reference it by np) is the workhorse library for linear algebra in
Python. To create a vector, simply surround a Python list ([1,2,3]) with the np.array
function:
In [3]:
x_vector = np.array([1,2,3])
print(x_vector)
[1 2 3]
We could have done this by defining a python list and converting it to an array:
In [4]:
c_list = [1,2]
print("The list:",c_list)
print("Has length:", len(c_list))
c_vector = np.array(c_list)
print("The vector:", c_vector)
print("Has shape:",c_vector.shape)
The list: [1, 2]
Has length: 2
The vector: [1 2]
Has shape: (2,)
In [5]:
z = [5,6]
print("This is a list, not an array:",z)
print(type(z))
This is a list, not an array: [5, 6]
<class 'list'>
In [6]:
zarray = np.array(z)
print("This is an array, not a list",zarray)
print(type(zarray))
This is an array, not a list [5 6]
<class 'numpy.ndarray'>
NumPy Arrays
We can think of a 1D NumPy array as a list of numbers, a 2D NumPy array as a matrix,
and a 3D array as a cube of numbers. When we select a row or column
from a 2D NumPy array, the result is a 1D NumPy array (called a slice).
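For example (a quick sketch; the array name here is just illustrative):
grid = np.array([[1,2],[3,4]])   # a small 2D array (matrix)
print(grid[0,:])   # select the first row -> 1D array: [1 2]
print(grid[:,0])   # select the first column -> 1D array: [1 3]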
Array Attributes
Create a 1D (one-dimensional) NumPy array and verify its dimensions, shape and size.
a = np.array([1,3,-2,1])
print(a)
Output: [ 1 3 -2 1]
a.ndim
Output: 1
a.shape
Output: (4,)
The shape of an array is returned as a Python tuple. The output in the cell above is a
tuple of length 1. And we verify the size of the array (i.e. the total number of entries in
the array):
a.size
Output: 4
M = np.array([[1,2],[3,7],[-1,5]])
print(M)
Output: [[ 1 2]
[ 3 7]
[-1 5]]
When we select a row or column from a 2D NumPy array, the result is a 1D NumPy
array. However, we may want to select a column as a 2D column vector. This requires
us to use the reshape method.
For example, select the second column of the matrix M above as a 1D slice:
col = M[:,1]
print(col)
Output: [2 7 5]
Then create a 2D column vector from the same slice using reshape:
column = M[:,1].reshape(3,1)
print(column)
Output: [[2]
[7]
[5]]
print('Dimensions:', column.ndim)
print('Shape:', column.shape)
print('Size:', column.size)
Output:
Dimensions: 2
Shape: (3, 1)
Size: 3
The variables col and column are different types of objects even though they have
the “same” data.
print(col)
Output: [2 7 5]
print('Dimensions:',col.ndim)
print('Shape:',col.shape)
print('Size:',col.size)
Output:
Dimensions: 1
Shape: (3,)
Size: 3
Matrices
In [7]:
b = list(zip(z,c_vector))
print(b)
print("Note that the length of our zipped list is 2 not (2 by 2):",len(b))
[(5, 1), (6, 2)]
Note that the length of our zipped list is 2 not (2 by 2): 2
In [8]:
print( "But we can convert the list to a matrix like this:")
A = np.array(b)
print( A)
print( type(A))
print( "A has shape:",A.shape)
But we can convert the list to a matrix like this:
[[5 1]
[6 2]]
<class 'numpy.ndarray'>
A has shape: (2, 2)
Matrix Addition and Subtraction
Adding or subtracting a scalar value and a matrix
Notice that the scalar value is added to (or subtracted from) each element in the matrix A. A can be of
any dimension.
In [9]:
result = A + 3
#or
result = 3 + A
print( result)
[[8 4]
[9 5]]
Adding or subtracting two matrices
Notice that the result of a matrix addition or subtraction operation is always of the same
dimension as the two operands.
B = np.random.randn(2,2)
print( B)
[[-0.9959588 1.11897568]
[ 0.96218881 -1.10783668]]
In [11]:
result = A + B
result
Out[11]:
array([[4.0040412 , 2.11897568],
[6.96218881, 0.89216332]])
Arithmetic Operations
The usual arithmetic operators +, -, *, and / act elementwise on NumPy arrays. In
particular, * performs elementwise multiplication, not matrix multiplication:
M = np.array([[3,4],[-1,5]])
print(M)
Output:
[[ 3 4]
[-1 5]]
M*M
Output:
array([[ 9, 16],
[ 1, 25]])
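The other elementwise operators behave the same way; a quick check using M above
(expected values noted in the comments):
print(M + M)   # elementwise addition: [[ 6  8] [-2 10]]
print(M ** 2)  # elementwise power: same values as M*M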
Matrix Multiplication
We use the @ operator to do matrix multiplication with NumPy arrays:
M@M
array([[ 5, 32],
[-8, 21]])
A = np.array([[1,3],[-1,7]])
print(A)
Output:
[[ 1 3]
[-1 7]]
B = np.array([[5,2],[1,2]])
print(B)
Output:
[[5 2]
[1 2]]
I = np.eye(2)
print(I)
Output:
[[1. 0.]
[0. 1.]]
We can combine these operations in a single expression; for example, compute 2I + 3A - AB:
2*I + 3*A - A@B
Output:
array([[-3., 1.],
[-5., 11.]])
We can also use the numpy dot function to perform matrix multiplication; you can use it two
ways to yield the same result. Here A is a 3×2 matrix and C is a random 2×2 matrix:
In [13]:
A = np.arange(6).reshape((3,2))
C = np.random.randn(2,2)
In [14]:
print( A.dot(C))
print( np.dot(A,C))
[[-1.19691566 1.08128294]
[-2.47040472 1.00586034]
[-3.74389379 0.93043773]]
[[-1.19691566 1.08128294]
[-2.47040472 1.00586034]
[-3.74389379 0.93043773]]
Matrix Powers
There is no operator for matrix powers, but we can import the function matrix_power
from numpy.linalg (aliased here as mpow):
from numpy.linalg import matrix_power as mpow
M = np.array([[3,4],[-1,5]])
print(M)
Output:
[[ 3 4]
[-1 5]]
mpow(M,2)
Output:
array([[ 5, 32],
[-8, 21]])
mpow(M,5)
Output:
array([[-1525, 3236],
[ -809, 93]])
M@M@M@M@M
Output:
array([[-1525, 3236],
[ -809, 93]])
mpow(M,3)
Output:
array([[-17, 180],
[-45, 73]])
M@M@M
Output:
array([[-17, 180],
[-45, 73]])
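As a quick sanity check (using the mpow alias imported above), matrix_power agrees
with repeated matrix multiplication:
print(np.array_equal(mpow(M,5), M@M@M@M@M))  # True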
Transpose
We can take the transpose of a matrix with the .T attribute. Recall M:
print(M)
Output:
[[ 3 4]
[-1 5]]
print(M.T)
Output:
[[ 3 -1]
[ 4 5]]
M @ M.T
Output:
array([[25, 17],
[17, 26]])
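Notice that the product of a matrix with its own transpose is always symmetric; a quick
check using M above (the variable name MMt is just illustrative):
MMt = M @ M.T
print(np.array_equal(MMt, MMt.T))  # True: the product equals its own transpose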
A = np.arange(6).reshape((3,2))
B = np.arange(8).reshape((2,4))
print( "A is")
print( A)
print( "The Transpose of A is")
print( A.T)
A is
[[0 1]
[2 3]
[4 5]]
The Transpose of A is
[[0 2 4]
[1 3 5]]
Inverse
We can find the inverse using the function numpy.linalg.inv (imported above as la):
A = np.array([[1,2],[3,4]])
print(A)
Output:
[[1 2]
[3 4]]
la.inv(A)
Output:
array([[-2. , 1. ],
[ 1.5, -0.5]])
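A quick check (allowing for floating point rounding) that la.inv really returns the inverse:
print(np.allclose(A @ la.inv(A), np.eye(2)))  # True: A times its inverse is the identity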
Trace
We can find the trace of a matrix using the function numpy.trace:
np.trace(A)
Output:
5
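The trace is just the sum of the diagonal entries, which we can confirm with a quick
check using A above:
print(np.trace(A) == np.sum(np.diag(A)))  # True: 1 + 4 = 5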
Determinant
We can compute the determinant of a matrix using the function numpy.linalg.det:
A = np.array([[1,2],[3,4]])
print(A)
Output:
[[1 2]
[3 4]]
la.det(A)
Output:
-2.0
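A determinant of zero signals a singular (non-invertible) matrix; a minimal sketch with
linearly dependent rows (the matrix here is just an illustration):
singular = np.array([[1,2],[2,4]])  # second row is twice the first
print(la.det(singular))  # 0.0 (up to rounding): the matrix has no inverse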
Projections
The formula to project a vector v onto a vector w is
proj_w(v) = ((v · w) / (w · w)) w
def proj(v,w):
    '''Project vector v onto w.'''
    v = np.array(v)
    w = np.array(w)
    return np.sum(v * w)/np.sum(w * w) * w  # or (v @ w)/(w @ w) * w
proj([1,2,3],[1,1,1])
Output:
array([2., 2., 2.])
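The residual v - proj(v,w) should be orthogonal to w; a quick check using the same
vectors as above:
v = np.array([1,2,3])
w = np.array([1,1,1])
print((v - proj(v,w)) @ w)  # 0.0: the residual is orthogonal to w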
Definition
Let A be a square matrix. A non-zero vector v is
an eigenvector for A with eigenvalue λ if
Av=λv
numpy.linalg.eig
The function numpy.linalg.eig computes the eigenvalues and eigenvectors of a square matrix:
A = np.array([[1,0],[0,-2]])
print(A)
Output:
[[ 1 0]
[ 0 -2]]
results = la.eig(A)
print(results[0])
Output:
[ 1.+0.j -2.+0.j]
The corresponding eigenvectors are:
print(results[1])
Output:
[[1. 0.]
[0. 1.]]
It is more convenient to unpack the eigenvalues and eigenvectors into separate variables:
eigvals, eigvecs = la.eig(A)
print(eigvals)
Output:
[ 1.+0.j -2.+0.j]
print(eigvecs)
Output:
[[1. 0.]
[0. 1.]]
If we know that the eigenvalues are real numbers (i.e. if A is symmetric), then we can
use the NumPy array method .real to convert the array of eigenvalues to real numbers:
eigvals = eigvals.real
print(eigvals)
Output:
[ 1. -2.]
Notice that the position of an eigenvalue in the array eigvals corresponds to the column
in eigvecs with its eigenvector:
lambda1 = eigvals[1]
print(lambda1)
Output:
-2.0
v1 = eigvecs[:,1].reshape(2,1)
print(v1)
Output:
[[0.]
[1.]]
A @ v1
Output:
array([[ 0.],
[-2.]])
lambda1 * v1
Output:
array([[-0.],
[-2.]])
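We can confirm the eigenvector equation Av1 = λ1v1 numerically (allowing for rounding):
print(np.allclose(A @ v1, lambda1 * v1))  # True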
Symmetric Matrices
The eigenvalues of a symmetric matrix are always real and the eigenvectors are
always orthogonal! Let's verify these facts with some random matrices:
n=4
P = np.random.randint(0,10,(n,n))
print(P)
Output:
[[7 0 6 2]
[9 5 1 3]
[0 2 2 5]
[6 8 8 6]]
Create a symmetric matrix by multiplying P with its transpose (the product of any matrix
with its transpose is symmetric):
S = P @ P.T
print(S)
Output:
[[ 89 75 22 102]
[ 75 116 27 120]
[ 22 27 33 62]
[102 120 62 200]]
Compute the eigenvalues and eigenvectors of S:
evals, evecs = la.eig(S)
print(evals)
The eigenvalues all have zero imaginary part and so they are indeed real numbers:
evals = evals.real
print(evals)
The corresponding eigenvectors are the columns of evecs:
print(evecs)
We can check that two eigenvectors are orthogonal by computing their dot product:
v1 = evecs[:,0]
v2 = evecs[:,1]
v1 @ v2
The dot product of eigenvectors v1 and v2 is zero (the computed value is very close to
zero, due only to rounding errors in the computations) and so they are orthogonal!
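In fact, since la.eig returns normalized eigenvectors, for a symmetric matrix with distinct
eigenvalues the whole matrix of eigenvectors should be orthonormal (an assumption
worth checking for this random P):
print(np.allclose(evecs.T @ evecs, np.eye(n)))  # True up to rounding: columns are orthonormal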
Diagonalization
A square matrix M is diagonalizable if it can be written as M = P D P^-1, where the
columns of P are eigenvectors of M and D is the diagonal matrix of its eigenvalues.
Let's use this to construct a matrix with given eigenvalues λ1 = 3, λ2 = 1, and
eigenvectors v1 = [1,1]^T, v2 = [1,-1]^T.
P = np.array([[1,1],[1,-1]])
print(P)
Output:
[[ 1 1]
[ 1 -1]]
D = np.diag((3,1))
print(D)
Output:
[[3 0]
[0 1]]
M = P @ D @ la.inv(P)
print(M)
Output:
[[2. 1.]
[1. 2.]]
Let's verify that the eigenvalues and eigenvectors of M are as intended:
evals, evecs = la.eig(M)
print(evals)
Output:
[3.+0.j 1.+0.j]
print(evecs)
Output:
[[ 0.70710678 -0.70710678]
[ 0.70710678 0.70710678]]
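One payoff of diagonalization is that matrix powers become cheap: M^k = P D^k P^-1,
and D^k just raises each diagonal entry to the k-th power. A minimal sketch using M, P,
and D from above (expected values noted in the comments):
Dk = np.diag((3**3, 1**3))   # D^3: cube the diagonal entries
print(P @ Dk @ la.inv(P))    # [[14. 13.] [13. 14.]]
print(M @ M @ M)             # same values: [[14 13] [13 14]]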