MLP 2021/22: Coursework 1
Due: 29 October 2021
1 Introduction
The aim of this coursework is to explore the classification of images of handwritten digits using neural networks.
The first part of this coursework will concern the identification and discussion of a fundamental problem in
machine learning, as shown in Figure 1. Following this preliminary discussion, you will further investigate this
problem in wider and deeper neural networks, studying it in terms of network width and depth. The second part
involves implementing different methods to combat the problem identified in Task 1 and then comparing these
methods empirically and theoretically. In the final part, you will briefly discuss the main strengths and weaknesses
of any one work related to the methods examined in Task 2.
The coursework will use an extended version of the MNIST database, the EMNIST Balanced dataset,
described in Section 2. Section 3 describes the additional code provided for the coursework (in branch
mlp2021-22/coursework1 of the MLP GitHub repository), and Section 4 describes how the coursework is structured into
three tasks. The main deliverable of this coursework is a report, discussed in Section 8, using a template that is
available in the GitHub repository. Section 9 discusses the details of carrying out and submitting the coursework, and the
marking scheme is discussed in Section 10.
You will need to submit your completed report as a PDF file and your local version of the mlp code including
any changes you made to the provided .py files. The detailed submission instructions are given in Section 9.2 –
please follow these instructions carefully.
2 EMNIST dataset
In this coursework we shall use the EMNIST (Extended MNIST) Balanced dataset [Cohen et al., 2017],
https://www.nist.gov/itl/iad/image-group/emnist-dataset. EMNIST extends MNIST by including images of
handwritten letters (upper and lower case) as well as handwritten digits. Both EMNIST and MNIST are extracted
from the same underlying dataset, referred to as NIST Special Database 19. Both use the same conversion
process resulting in centred images of dimension 28×28.
There are 62 potential classes for EMNIST (10 digits, 26 lower case letters, and 26 upper case letters). However,
we shall use a reduced label set of 47 different labels. This is because (following the data conversion process)
there are 15 letters for which it is confusing to discriminate between upper-case and lower-case versions. In the
47 label set, upper- and lower-case labels are merged for the following letters:
C, I, J, K, L, M, O, P, S, U, V, W, X, Y, Z.
The training set for Balanced EMNIST has about twice the number of examples as the MNIST training set, thus
you should expect the run-time of your experiments to be about twice as long. The expected accuracy rates
are lower for EMNIST than for MNIST (as EMNIST has more classes, and more confusable examples), and
differences in accuracy between different systems should be larger. Cohen et al. [2017] present some baseline
results for EMNIST.
You do not need to directly download the EMNIST database from the nist.gov website, as it is part of the
coursework1 branch in the mlpractical Github repository, discussed in Section 3 below.
3 GitHub branch mlp2021-22/coursework1
You should run all of the experiments for the coursework inside the Conda environment you set up
for the labs. The code for the coursework is available on the course Github repository on a branch
mlp2021-22/coursework1. To create a local working copy of this branch in your local repository you need to
do the following.
1. Make sure all modified files on the branch you are currently on have been committed (see
notes/getting-started-in-a-lab.md if you are unsure how to do this).
2. Fetch changes to the upstream origin repository by running
git fetch origin
3. Checkout a new local branch from the fetched branch using
git checkout -b coursework1 origin/mlp2021-22/coursework1
You will now have a new branch in your local repository with all the code necessary for the coursework in it.
This branch includes the following additions to your setup:
• A new EMNISTDataProvider class in the mlp.data_providers module. This class makes some
changes to the MNISTDataProvider class, linking to the EMNIST Balanced data and setting the number
of classes to 47 (a short usage sketch is given after this list).
• Training, validation, and test sets for the EMNIST Balanced dataset that you will use in this coursework.
• In order to further improve performance and mitigate the problem identified in Task 1, you will
also need to implement a new class in the mlp.layers module:
DropoutLayer
and also two weight penalty techniques in the mlp.penalties module:
L1Penalty and L2Penalty.
• A directory called report which contains the LaTeX template and style files for your report. You should
copy all these files into the directory which will contain your report.
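As a quick check that the data is in place, the following sketch loads a batch through the new provider. It assumes the same interface as the lab data providers (a which_set argument, a batch_size argument, and targets returned in one-of-k encoding); adjust it if your working copy differs.

from mlp.data_providers import EMNISTDataProvider

# Load the training and validation splits; targets are assumed to be
# returned in one-of-k (one-hot) encoding over the 47 classes.
train_data = EMNISTDataProvider('train', batch_size=100)
valid_data = EMNISTDataProvider('valid', batch_size=100)

# Each batch should be a (batch_size, 784) array of flattened 28x28
# images and a matching (batch_size, 47) array of target vectors.
inputs_batch, targets_batch = next(iter(train_data))
print(inputs_batch.shape, targets_batch.shape)  # e.g. (100, 784) (100, 47)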
[Figure 1: Error and accuracy curves for a baseline model on the EMNIST dataset. (a) Error curves on the training and validation sets; (b) accuracy curves on the training and validation sets.]
4 Tasks
The coursework is structured into three tasks; the first two are supported by experiments on the EMNIST dataset.
1. Identification of a fundamental problem in machine learning, as shown in Figure 1, and setting up a baseline
system on EMNIST via a valid hyper-parameter search.
2. A research investigation and analysis into whether using Dropout and/or Weight Penalty (L1Penalty and
L2Penalty) addresses the problem found in training machine learning models (Fig 1). How do these two
approaches improve/degrade the model’s performance?
3. A brief literature review of a specific paper, as discussed in Section 7, and a summary of the whole report.
Figure 1 shows the training and validation error curves (Figure 1a) and the training and validation accuracies
(Figure 1b) for a model with 1 hidden layer¹ and a ReLU activation function, trained on the EMNIST dataset
using the cross-entropy error function. These curves can be reproduced by running the model settings defined in the
Coursework1.ipynb notebook in the GitHub repository. We first identify and discuss the problem shown by
the curves in Figure 1, namely overfitting, and briefly discuss in this section potential solutions for overcoming this
problem.
¹A hidden layer is an AffineLayer and a ReLULayer together.
5 Task 1: Problem identification
Varying number of hidden units. Initially you will train several 1-hidden-layer networks with 32, 64, or
128 ReLU hidden units per layer on EMNIST, using stochastic gradient descent (SGD) without any
regularization. Make sure you use an appropriate learning rate, and train each network for 100 epochs. Visualise
and discuss how increasing the number of hidden units affects the validation performance, and whether it worsens or
mitigates the overfitting problem.
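By way of illustration, a minimal sketch of one such experiment is given below. It assumes the model, error, and optimiser interfaces from the lab notebooks (MultipleLayerModel, CrossEntropySoftmaxError, GradientDescentLearningRule, Optimiser, and so on); check the exact signatures against your working copy, and note that the learning rate shown is only a placeholder.

import numpy as np
from mlp.data_providers import EMNISTDataProvider
from mlp.models import MultipleLayerModel
from mlp.layers import AffineLayer, ReluLayer
from mlp.errors import CrossEntropySoftmaxError
from mlp.initialisers import GlorotUniformInit, ConstantInit
from mlp.learning_rules import GradientDescentLearningRule
from mlp.optimisers import Optimiser

rng = np.random.RandomState(seed=20211029)
train_data = EMNISTDataProvider('train', batch_size=100, rng=rng)
valid_data = EMNISTDataProvider('valid', batch_size=100, rng=rng)

input_dim, output_dim = 784, 47  # flattened 28x28 inputs, 47 classes

for num_hidden in [32, 64, 128]:  # the three widths to compare
    model = MultipleLayerModel([
        AffineLayer(input_dim, num_hidden,
                    GlorotUniformInit(rng=rng), ConstantInit(0.)),
        ReluLayer(),
        AffineLayer(num_hidden, output_dim,
                    GlorotUniformInit(rng=rng), ConstantInit(0.)),
    ])
    error = CrossEntropySoftmaxError()
    learning_rule = GradientDescentLearningRule(learning_rate=0.1)
    optimiser = Optimiser(
        model, error, learning_rule, train_data, valid_data,
        data_monitors={'acc': lambda y, t: (y.argmax(-1) == t.argmax(-1)).mean()})
    # stats holds per-epoch training/validation error and accuracy,
    # from which curves like those in Figure 1 can be plotted.
    stats, keys, run_time = optimiser.train(num_epochs=100, stats_interval=1)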
Varying number of layers. Here you will train neural networks with 1, 2, or 3 hidden layers, using 128 ReLU
hidden units per layer, on EMNIST with stochastic gradient descent (SGD) and without any regularization.
Make sure you use an appropriate learning rate, and train each network for 100 epochs. Visualise and discuss
how increasing the number of layers affects the validation performance, and whether it worsens or mitigates the
overfitting problem.
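The construction generalises straightforwardly to deeper networks; below is a sketch under the same assumptions as the previous one, stacking a variable number of hidden layers.

import numpy as np
from mlp.models import MultipleLayerModel
from mlp.layers import AffineLayer, ReluLayer
from mlp.initialisers import GlorotUniformInit, ConstantInit

rng = np.random.RandomState(seed=20211029)

def make_relu_mlp(num_hidden_layers, rng,
                  hidden_dim=128, input_dim=784, output_dim=47):
    """Stack num_hidden_layers (AffineLayer + ReluLayer) pairs,
    followed by a final affine output layer."""
    layers = []
    in_dim = input_dim
    for _ in range(num_hidden_layers):
        layers.append(AffineLayer(in_dim, hidden_dim,
                                  GlorotUniformInit(rng=rng), ConstantInit(0.)))
        layers.append(ReluLayer())
        in_dim = hidden_dim
    layers.append(AffineLayer(in_dim, output_dim,
                              GlorotUniformInit(rng=rng), ConstantInit(0.)))
    return MultipleLayerModel(layers)

# Train 1-, 2-, and 3-hidden-layer networks with the same training
# loop as in the previous sketch.
for depth in [1, 2, 3]:
    model = make_relu_mlp(depth, rng)
    # ... create the error, learning rule and optimiser as before ...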
The questions in (mlp-cw1-questions.tex) that you must answer and that count towards this task are:
• Question 1;
• Question 2;
• Question 5;
• Question 6;
• Question 7;
• Question Table 1;
• Question Figure 2;
• Question 8;
• Question 9;
• Question Table 2;
• Question Figure 3;
• Question 10;
• Question 12.
(20 Marks)
6 Task 2: Mitigating the problem with Dropout and Weight Penalty
Definition and Motivation. We provide the analysis for Dropout; you will have to explain
• L1Penalty and L2Penalty, including their formulations and implementation details (do not copy/paste
your code here),
• how/why/to what extent each one can alleviate the overfitting problem (for reference, the standard formulations are sketched below).
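With penalty coefficient λ and data error E_data, one standard convention for the two penalties and their gradients is the following (whether the L2 term carries the 1/2 factor is a convention, so check against the supplied unit tests):

% L1 (lasso) and L2 (weight-decay) penalties on the weights w, with
% coefficient \lambda, added to the data term; gradients follow directly.
\begin{align*}
E_{L1}(\mathbf{w}) &= E_{\text{data}}(\mathbf{w}) + \lambda \sum_i |w_i|, &
\frac{\partial E_{L1}}{\partial w_i} &= \frac{\partial E_{\text{data}}}{\partial w_i} + \lambda \operatorname{sgn}(w_i),\\
E_{L2}(\mathbf{w}) &= E_{\text{data}}(\mathbf{w}) + \frac{\lambda}{2} \sum_i w_i^2, &
\frac{\partial E_{L2}}{\partial w_i} &= \frac{\partial E_{\text{data}}}{\partial w_i} + \lambda w_i.
\end{align*}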
The questions in (mlp-cw1-questions.tex) that you must answer and that count towards this task are:
• Question 14.
(10 Marks)
Implementing Dropout and Weight Penalty. Here you will implement DropoutLayer, L1Penalty and
L2Penalty, and test their correctness. Here are the steps to follow (a non-authoritative sketch of these classes is given after the list):
1. Implement the DropoutLayer class in the mlp.layers module. You need to implement the
fprop and bprop methods for this class. Please note that the solution uses the original dropout formulation
(i.e. scale the hidden unit activations by p in the final network to compensate for the missing units). The
sampling distribution to be used in the Dropout implementation is the uniform distribution U(0,1); this is
required to pass the unit tests.
2. Implement the L1Penalty and L2Penalty classes in the mlp.penalties module. You need to implement
the __call__ and grad methods for these classes. Once defined, instances can be passed as the
weights_penalty and biases_penalty parameters of the AffineLayer class when creating the
multi-layer neural network.
3. Verify the correctness of your implementation using the supplied unit tests in
DropoutandPenalty_tests.ipynb.
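For orientation only, here is a minimal sketch of the shapes these classes might take. The class and method names come from the steps above, but everything else (base classes, extra constructor arguments, exact scaling conventions) is an assumption; the supplied unit tests, not this sketch, define the required behaviour.

import numpy as np

class DropoutLayer:
    """Randomly zeroes units during training; scales activations by the
    keep-probability p on the deterministic pass (original formulation)."""

    def __init__(self, rng=None, incl_prob=0.5):
        self.rng = rng if rng is not None else np.random.RandomState()
        self.incl_prob = incl_prob  # probability p of *keeping* a unit

    def fprop(self, inputs, stochastic=True):
        if stochastic:
            # Sample the mask from U(0,1), as the unit tests require.
            self._mask = (self.rng.uniform(size=inputs.shape)
                          < self.incl_prob).astype(inputs.dtype)
            return inputs * self._mask
        # Deterministic pass: compensate for the units dropped in training.
        return inputs * self.incl_prob

    def bprop(self, inputs, outputs, grads_wrt_outputs):
        # Gradients flow only through the units that were kept.
        return grads_wrt_outputs * self._mask


class L1Penalty:
    def __init__(self, coefficient):
        self.coefficient = coefficient

    def __call__(self, parameter):
        return self.coefficient * np.abs(parameter).sum()

    def grad(self, parameter):
        return self.coefficient * np.sign(parameter)


class L2Penalty:
    def __init__(self, coefficient):
        self.coefficient = coefficient

    def __call__(self, parameter):
        # The 1/2 factor is one common convention; check the unit tests.
        return 0.5 * self.coefficient * (parameter ** 2).sum()

    def grad(self, parameter):
        return self.coefficient * parameter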
(20 Marks)
EMNIST Experiments. In this section you should modify your baseline network into one that uses
DropoutLayer, either L1Penalty or L2Penalty, or a combination of these, and train a model. For the experiments,
your baseline network should contain 3 hidden layers with 128 hidden units each and the ReLU activation function. Your
main aims are to (i) investigate whether/how each of these functions addresses the above-mentioned problem, (ii)
study the generalization performance of your network when used with one of these functions or a combination
of them, and (iii) discover the best possible network configuration when the only available options to choose from
are the Dropout and Weight Penalty functions and the hyper-parameters (learning rate, dropout probability, and
penalty coefficient for the Weight Penalty functions).
The dropout probability is a float value in the range (0, 1), e.g. 0.5, chosen manually. The penalty coefficient is also
a manually selected float value, e.g. 0.001, usually in the range 0.00001 to 0.1. For model selection, you
should use the validation performance to pick the best model and finally report the test performance of that best
model.
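For instance, a 3-hidden-layer baseline regularised with dropout and an L2 weight penalty might be assembled roughly as follows. This is a sketch assuming the lab interfaces and the classes implemented above; where you place the dropout layers, which parameters you penalise, and the hyper-parameter values shown (incl_prob, the penalty coefficient) are illustrative choices you should justify for yourself.

import numpy as np
from mlp.models import MultipleLayerModel
from mlp.layers import AffineLayer, ReluLayer, DropoutLayer
from mlp.penalties import L2Penalty
from mlp.initialisers import GlorotUniformInit, ConstantInit

rng = np.random.RandomState(20211029)
weights_init, biases_init = GlorotUniformInit(rng=rng), ConstantInit(0.)

incl_prob = 0.7            # dropout keep-probability, a manual choice in (0, 1)
penalty = L2Penalty(1e-3)  # penalty coefficient, e.g. in [0.00001, 0.1]

hidden_dim, layers, in_dim = 128, [], 784
for _ in range(3):  # 3 hidden layers, as required for the baseline
    layers += [
        DropoutLayer(rng=rng, incl_prob=incl_prob),
        AffineLayer(in_dim, hidden_dim, weights_init, biases_init,
                    weights_penalty=penalty, biases_penalty=penalty),
        ReluLayer(),
    ]
    in_dim = hidden_dim
layers.append(AffineLayer(in_dim, 47, weights_init, biases_init))
model = MultipleLayerModel(layers)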
Ensure that you thoroughly describe in your report how these functions affect performance when used separately
and together, with different hyperparameters, ideally at both the theoretical and the empirical level. Note that
the expected amount of work in this part is not a brute-force exploration of all possible variations of
network configurations and hyperparameters but a carefully designed set of experiments that provides
meaningful analysis and insights. We have prespecified, in Table 3 of the template, the hyperparameter values for which you should run each
individual L1/L2 regularisation and Dropout experiment. You will have to identify
and argue for a set of 8 different hyperparameter combinations for which to run the combined Dropout and
L1/L2 experiments. (The number 8 was not picked because there are, say, 8 obvious combinations to
choose, or because one could not arguably run more; rather, it limits the amount of experimentation and stops
you putting too much time into this part. There are many valid combinations of experiments to try, but you should
motivate your specific selection.)
The questions in (mlp-cw1-questions.tex) that you must answer and that count towards this task are:
• Question Table 3;
• Question Figure 4;
• Question 3.
(35 Marks)
7 Task 3: Literature review
In this section, you will explore one related work, Maxout Networks [Goodfellow et al., 2013], summarising the
paper and discussing the strengths and limitations of the research. Note that this review must be in your own
words.
The questions in (mlp-cw1-questions.tex) that you must answer and that count towards this task are:
• Question 16.
(15 Marks)
8 Report
While inputting your answers in (mlp-cw1-questions.tex), the first thing you should do is add your UUN in
place of (sXXXXXXX) at the start of the file. Then, answer each question, being careful to only edit the text that
appears in the brackets of the commands (\youranswer).
The questions ask you to replace the text in red, fill in the tables provided in the template, and replace the figures
specified with ones you created from your experiments. There is no specific word-count limit for any question,
and you are responsible for identifying the correct level of detail based on the question itself (e.g. "discussion"
implies an extensive analysis) and context (document section and surrounding text).
There are 19 TEXT QUESTIONS (a few of the short first ones have their answers added to both the Introduction
and the Abstract). Replace the text inside the brackets of the command (\youranswer) with your answer to the
question.
There are also 3 "questions" to replace some placeholder FIGURES with your own, and 3 "questions" asking
you to fill in the missing entries in the TABLES provided.
Note that questions are ordered by the order of appearance of their answers in the text, and not by the order you
should tackle them. Specifically, you cannot answer Questions 2, 3, and 4 before concluding all of the relevant
experiments and analysis. Similarly, you should fill in the TABLES and FIGURES before discussing the results
presented there.
Also note that, if for some reason you do not manage to produce results for some FIGURES and TABLES, then
you can get partial marks by discussing your expectations of the results in the relevant TEXT QUESTIONS (for
example Question 8 makes use of Table 1 and Figure 2).
Ideally, all figures should be included in your report file as vector graphics files rather than raster files as this
will make sure all detail in the plot is visible. Matplotlib supports saving high quality figures in a wide range
of common image formats using the savefig function. You should use savefig rather than copying the
screen-resolution raster images output in the notebook. An example of using savefig to save a figure
as a PDF file (which can be included as a graphic in LaTeX compiled with pdflatex) is given below.
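A minimal example (the plotted data here is purely illustrative):

import matplotlib.pyplot as plt
import numpy as np

# Build an example figure (here, an illustrative error curve).
fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(111)
epochs = np.arange(1, 101)
ax.plot(epochs, 1.0 / epochs, label='error(train)')
ax.set_xlabel('Epoch number')
ax.set_ylabel('Error')
ax.legend()

# Save as a vector-graphics PDF, suitable for \includegraphics in pdflatex.
fig.savefig('training-curve.pdf')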
If you make use of any books, articles, web pages or other resources you should appropriately cite these in
your report. You do not need to cite material from the course lecture slides or lab notebooks.
To create a pdf file mlp-cw1-template.pdf from a LaTeX source file (mlp-cw1-template.tex), you can
run the following in a terminal:
pdflatex mlp-cw1-template
bibtex mlp-cw1-template
pdflatex mlp-cw1-template
pdflatex mlp-cw1-template
(Yes, you have to run pdflatex multiple times, in order for LaTeX to construct the internal document references.)
An alternative, simpler approach uses the latexmk program:
latexmk -pdf mlp-cw1-template
Another alternative is to use an online LaTeX authoring environment such as https://overleaf.com – note that all
staff and students have free access to Overleaf Pro; see
https://www.ed.ac.uk/information-services/computing/desktop-personal/software/main-software-deals/other-software/overleaf.
It is worth learning how to use LaTeX effectively, as it is particularly powerful for mathematical and academic
writing. There are many tutorials on the web.
9 Mechanics
Marks: This assignment will be assessed out of 100 marks and forms 10% of your final grade for the course.
Academic conduct: Assessed work is subject to University regulations on academic conduct:
http://web.inf.ed.ac.uk/infweb/admin/policies/academic-misconduct
Submission: You can submit more than once up until the submission deadline. All submissions are timestamped
automatically. We will mark the latest submission that comes in before the deadline.
If you submit anything before the deadline, you may not resubmit after the deadline. (This policy allows us to
begin marking submissions immediately after the deadline, without having to worry that some may need to be
re-marked).
If you do not submit anything before the deadline, you may submit exactly once after the deadline, and a late
penalty will be applied to this submission unless you have received an approved extension. Please be aware
that late submissions may receive lower priority for marking, and marks may not be returned within the same
timeframe as for on-time submissions.
Warning: Unfortunately the submission system on Learn will technically allow you to submit late even if you
submitted before the deadline (i.e. it does not enforce the above policy). Don’t do this! We will mark the version
that we retrieve just after the deadline.
Extension requests: For additional information about late penalties and extension requests, see the School web
page below. Do not email any course staff directly about extension requests; you must follow the instructions
on the web page.
http://web.inf.ed.ac.uk/infweb/student-services/ito/admin/coursework-projects/late-coursework-extension-requests
Late submission penalty: Following the University guidelines, late coursework submitted without an authorised
extension will be recorded as late and the following penalties will apply: 5 percentage points will be deducted
for every calendar day or part thereof it is late, up to a maximum of 7 calendar days. After this time a mark of
zero will be recorded.
9.1 Backing up your work
It is strongly recommended you use some method for backing up your work. Those working in their AFS
homespace on DICE will have their work automatically backed up as part of the routine backup of all user
homespaces. If you are working on a personal computer you should have your own backup method in place
(e.g. saving additional copies to an external drive, syncing to a cloud service, or pushing commits from your local
Git repository to a private repository on GitHub). Loss of work through failure to back up does not constitute
a good reason for late submission.
You may additionally wish to keep your coursework under version control in your local Git repository on the
coursework1 branch.
If you make regular commits of your work on the coursework this will allow you to better keep track of the
changes you have made and if necessary revert to previous versions of files and/or restore accidentally deleted
work. This is not however required and you should note that keeping your work under version control is a
distinct issue from backing up to guard against hard drive failure. If you are working on a personal computer
you should still keep an additional back up of your work as described above.
9.2 Submission
Your coursework submission should be done online on the Learn course webpage.
Your submission should include one zip file sxxxxxxx.zip that should contain your report as a PDF file and your
local version of the mlp code, including any changes you made to the provided .py files (see Section 1). To submit:
• Navigate to the Assessment section in the left-hand column of the course page.
• Click on Coursework 1.
• A page will appear where you will need to browse for and upload the .zip file that you created previously, in
Attach Files, and then click Submit.
You can amend an existing submission by attaching a different .zip file using the Attach Files option and then
Submit again.
Note that we will only mark the last uploaded coursework in case you amend your files. Thus it is your
responsibility to make sure that correct files are uploaded.
10 Marking Guidelines
This document (Section 4 in particular) and the template report (mlp-cw1-template.pdf) provide a description
of what you are expected to do in this assignment, and how the report should be written and structured.
Assignments will be marked using the scale defined by the University Common Marking Scheme.
And finally... this assignment is worth 10% of the total marks for the course, and the next assignment is worth
40%. This is not because the second assignment is four times bigger or harder than this one (although it will be
more challenging). The reason that this assignment is worth 10% is so that people get an opportunity to learn
from their errors in doing the assignment, without it having a very big impact on their overall grade for the
module.
References
Gregory Cohen, Saeed Afshar, Jonathan Tapson, and André van Schaik. EMNIST: an extension of MNIST to
handwritten letters. arXiv preprint arXiv:1702.05373, 2017. URL https://arxiv.org/abs/1702.05373.
Ian Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In
International Conference on Machine Learning, pages 1319–1327. PMLR, 2013.