
DEEP LEARNING PAPER REVIEW

This paper is based on deep learning, a constituent of machine learning; to understand deep learning, one must first understand how machine learning works and how the two are connected. Technology has changed over the years, automation has become the norm, and machine learning, powered by deep learning, has been incorporated into web searches and e-commerce websites. Machine learning does not depend solely on the engineer's hand-designed rules; instead, the system learns from the raw data it is given. Deep learning supplies multiple layers that detect the key aspects used to classify the features present in the data. There are three major types of layers: input, output, and hidden layers, whose number depends on the kind of data being processed, such as speech or images.
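As a rough illustration of that layered structure, the following minimal sketch (not taken from the paper; the layer sizes, activation functions, and random input are assumptions made only for illustration) passes data through an input layer, one hidden layer, and an output layer using NumPy.

    # Minimal sketch (assumed, not the paper's code): a network with an input
    # layer, one hidden layer, and an output layer, as described above.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    # Hypothetical sizes: 784 input features (e.g. a 28x28 image),
    # 128 hidden units, 10 output classes.
    W1 = rng.normal(scale=0.01, size=(784, 128))
    b1 = np.zeros(128)
    W2 = rng.normal(scale=0.01, size=(128, 10))
    b2 = np.zeros(10)

    def forward(x):
        h = relu(x @ W1 + b1)        # hidden layer detects intermediate features
        return softmax(h @ W2 + b2)  # output layer gives class probabilities

    x = rng.normal(size=(1, 784))    # one fake input example
    print(forward(x).shape)          # (1, 10)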
Machine learning comes in different forms; in this paper, supervised learning is the main focus. As the name suggests, the model is exposed to data labeled with what it needs to learn and uses that information to make predictions when it encounters similar data. One of the methods used is stochastic gradient descent (SGD), which computes the outputs and errors on small batches of examples, averages the gradient, and adjusts the weights accordingly; the method tends to find a good minimum in relatively little time. A linear classifier may also be used, but it requires a good feature extractor to solve the selectivity-invariance dilemma, something that SGD on its own does not provide.
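The short sketch below illustrates the idea of stochastic gradient descent applied to a linear classifier. It is an assumed example rather than the paper's code; the synthetic data, squared-error loss, batch size, and learning rate are all arbitrary choices made for illustration.

    # Minimal sketch (assumed): SGD for a linear classifier. Each step computes
    # the outputs and errors on a small batch, averages the gradient, and
    # adjusts the weights in the opposite direction.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))        # fake data: 1000 examples, 20 features
    true_w = rng.normal(size=20)
    y = (X @ true_w > 0).astype(float)     # fake binary labels

    w = np.zeros(20)
    lr = 0.1                               # learning rate (assumed value)

    for step in range(200):
        idx = rng.integers(0, len(X), size=32)  # random minibatch
        xb, yb = X[idx], y[idx]
        pred = xb @ w                           # compute outputs
        err = pred - yb                         # compute errors
        grad = xb.T @ err / len(idx)            # average gradient over the batch
        w -= lr * grad                          # adjust weights accordingly

    print(float(np.mean((X @ w - y) ** 2)))     # final mean squared error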
Training multilayer networks also posed a challenge in the field of deep learning. This was met head-on with methods such as backpropagation, which drives the network toward a local minimum of the error, and with convolutional neural networks, an architecture inspired by visual neuroscience that stacks convolutional layers with pooling layers that merge similar features. Convolutional neural networks work well for image and motion processing, including applications in robotics. Recurrent neural networks, by contrast, are used for speech and language because they process input sequences one element at a time, although they are difficult to train. Long short-term memory (LSTM) networks usually solve this issue by providing the memory needed to remember earlier inputs.
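To make the two architectures concrete, here is a minimal sketch assuming the PyTorch library (which the paper itself does not use): a small convolutional network that alternates convolution and pooling layers for images, and an LSTM that processes a sequence one step at a time while keeping a memory cell. All sizes and inputs are invented for illustration.

    # Minimal sketch (assumes PyTorch; not the paper's code).
    import torch
    import torch.nn as nn

    # Convolutional net: convolution layers detect local features,
    # pooling layers merge similar features into a coarser summary.
    cnn = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 7 * 7, 10),
    )
    images = torch.randn(4, 1, 28, 28)   # fake batch of 28x28 images
    print(cnn(images).shape)             # torch.Size([4, 10])

    # LSTM: processes the input sequence one element at a time and keeps a
    # memory cell that can remember earlier inputs.
    lstm = nn.LSTM(input_size=13, hidden_size=32, batch_first=True)
    seq = torch.randn(4, 50, 13)         # fake batch of 50-step sequences
    out, (h, c) = lstm(seq)
    print(out.shape)                     # torch.Size([4, 50, 32])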
The paper paints a clear picture and elaborates well on deep learning, but it leans heavily on the supervised approach, which does not cover all areas of machine learning. The reach of machine learning will only broaden with time, given the ongoing automation of cars, industrial applications, educational tools, and, on the darker side, instruments of war. All in all, deep learning will keep improving, and as this paper has shown, the ground is set for future generations to build on the field and take it to greater heights.
