Python Basics with Numpy
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning
we mostly use matrices and vectors. This is why numpy is more useful.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent
these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
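For instance, a real number, a vector, and a matrix can all be represented as numpy arrays (a minimal illustration):

```python
import numpy as np

x_scalar = np.array(3.0)             # a real number, shape ()
x_vector = np.array([1.0, 2.0])      # a vector, shape (2,)
x_matrix = np.array([[1.0, 2.0],
                     [3.0, 4.0]])    # a matrix, shape (2, 2)
```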
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your
first gradient function.
Exercise 4 - sigmoid_derivative
Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is:
$$\text{sigmoid\_derivative}(x) = \sigma'(x) = \sigma(x)\big(1 - \sigma(x)\big)$$
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1 - s)$
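Following those two steps, here is a minimal sketch (it assumes the numpy-based sigmoid from the earlier exercise; treat it as one possible implementation, not the graded solution):

```python
import numpy as np

def sigmoid(x):
    # Numpy-based sigmoid from the earlier exercise.
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Step 1: set s to be the sigmoid of x.
    s = sigmoid(x)
    # Step 2: compute sigma'(x) = s * (1 - s).
    return s * (1 - s)
```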
Exercise 5 - image2vector
Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1).
For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
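```python
# v.shape[0] = a, v.shape[1] = b, v.shape[2] = c
v = v.reshape((v.shape[0] * v.shape[1], v.shape[2]))
```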
Please don't hardcode the dimensions of image as constants. Instead, look up the quantities you need with image.shape[0], etc.
You can use v = v.reshape(-1, 1). Just make sure you understand why it works.
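Putting these hints together, a minimal sketch of image2vector might be (it reads every dimension from image.shape instead of hardcoding them; treat it as one possible implementation):

```python
import numpy as np

def image2vector(image):
    # image has shape (length, height, 3); flatten it into a
    # (length*height*3, 1) column vector without hardcoding dimensions.
    return image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
```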
The next exercise uses row-wise norms. For example, if
$$x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \end{bmatrix}$$
then
$$\|x\| = \text{np.linalg.norm}(x, \text{axis}=1, \text{keepdims}=\text{True}) = \begin{bmatrix} 5 \\ \sqrt{56} \end{bmatrix}$$
and, dividing by the norm,
$$x\_normalized = \frac{x}{\|x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \end{bmatrix}$$
Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to
learn about it in part 5.
With keepdims=True the result will broadcast correctly against the original x.
axis=1 means you are going to get the norm in a row-wise manner. If you need the norm in a column-wise way, you
would need to set axis=0.
numpy.linalg.norm has another parameter, ord, which specifies the type of norm to compute (in the exercise below you'll use the 2-norm). To get familiar with the available norm types, you can visit the numpy.linalg.norm documentation.
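As a quick illustration of axis and keepdims (the matrix here is just the example from above, not graded code):

```python
import numpy as np

x = np.array([[0., 3., 4.],
              [2., 6., 4.]])

# Row-wise 2-norms; keepdims=True keeps a (2, 1) column shape
# so the result broadcasts correctly against x.
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
print(x_norm)  # [[5.], [7.4833...]] (the second entry is sqrt(56))
```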
Exercise 6 - normalize_rows
Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x
should be a vector of unit length (meaning length 1).
Note: Don't try to use x /= x_norm. For the matrix division, numpy must broadcast x_norm, which is not supported by the in-place operator /=.
Note: In normalize_rows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that
they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number
of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it
now!
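Putting the notes above together, a minimal sketch of normalizeRows (one possible implementation, relying on the broadcasting just described):

```python
import numpy as np

def normalizeRows(x):
    # Row-wise 2-norms; keepdims=True gives x_norm shape (m, 1),
    # while x has shape (m, n), so the division below broadcasts.
    x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
    return x / x_norm
```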
Exercise 7 - softmax
Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs
to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- For $x \in \mathbb{R}^{1 \times n}$:
$$\text{softmax}(x) = \text{softmax}([x_1, x_2, \ldots, x_n]) = \left[\frac{e^{x_1}}{\sum_j e^{x_j}}, \frac{e^{x_2}}{\sum_j e^{x_j}}, \ldots, \frac{e^{x_n}}{\sum_j e^{x_j}}\right]$$
- For a matrix $x \in \mathbb{R}^{m \times n}$, apply the same operation to each row, so that every row of softmax(x) sums to 1.
Notes: Later in the course, you'll see "m" used to represent the "number of training examples", with each training example in its own column of the matrix and each feature in its own row (each row holds data for the same feature). Softmax should be performed across all features of each training example, so it would be applied to the columns (once we switch to that representation later in this course).
However, in this coding practice, we're just focusing on getting familiar with Python, so we're using the common math notation $m \times n$, where m is the number of rows and n is the number of columns.
Notes
If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of
shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
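Using the same variable names as those notes (x_exp, x_sum, s), a minimal row-wise softmax sketch might look like this; treat it as one possible implementation, not the graded solution:

```python
import numpy as np

def softmax(x):
    # Element-wise exponential; x_exp keeps the shape of x, e.g. (2, 5).
    x_exp = np.exp(x)
    # Row-wise sums with keepdims=True, so x_sum has shape (2, 1).
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    # Broadcasting divides each row of x_exp by its own row sum.
    s = x_exp / x_sum
    return s
```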
Congratulations! You now have a pretty good understanding of Python and numpy, and you've implemented a few useful functions that you'll be using in deep learning.
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used; in the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs
- numpy has efficient built-in functions
- broadcasting is extremely useful
2 - Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge
bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally
efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the
dot/outer/elementwise product.
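The notebook's timing cells aren't reproduced here, but a small sketch of the kind of comparison they make (a for-loop dot product versus np.dot; the vector size is arbitrary):

```python
import time
import numpy as np

x1 = np.random.rand(100000)
x2 = np.random.rand(100000)

# Classic (non-vectorized) dot product with an explicit loop.
tic = time.process_time()
dot = 0.0
for i in range(len(x1)):
    dot += x1[i] * x2[i]
toc = time.process_time()
print(f"loop dot = {dot:.4f}, time = {1000 * (toc - tic):.3f} ms")

# Vectorized dot product.
tic = time.process_time()
dot = np.dot(x1, x2)
toc = time.process_time()
print(f"np.dot   = {dot:.4f}, time = {1000 * (toc - tic):.3f} ms")
```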
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the
differences in running time become even bigger.
Exercise 8 - L1
Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($\hat{y}$) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
L1 loss is defined as:
$$L_1(\hat{y}, y) = \sum_{i=0}^{m-1} \left|y^{(i)} - \hat{y}^{(i)}\right|$$
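A minimal vectorized sketch, assuming yhat and y are numpy arrays of the same shape (one possible implementation):

```python
import numpy as np

def L1(yhat, y):
    # Sum of absolute differences between predictions and true values.
    return np.sum(np.abs(y - yhat))
```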
Exercise 9 - L2
Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, \ldots, x_n]$, then np.dot(x,x) = $\sum_{j=1}^{n} x_j^2$.
L2 loss is defined as:
$$L_2(\hat{y}, y) = \sum_{i=0}^{m-1} \left(y^{(i)} - \hat{y}^{(i)}\right)^2$$
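A minimal vectorized sketch along the np.dot hint, assuming yhat and y are 1-D numpy arrays (one possible implementation):

```python
import numpy as np

def L2(yhat, y):
    # np.dot of the difference with itself gives the sum of squared errors.
    diff = y - yhat
    return np.dot(diff, diff)
```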