
VIETNAM NATIONAL UNIVERSITY

HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY

FACULTY OF COMPUTER SCIENCE AND ENGINEERING

PROJECT 7

1. Phạm Huy Thiên Phúc, Student ID: 2053346

2. Student ID:

3. Student ID:

4. Student ID:

Contents of the project

1. What is Deep Learning?
2. The history and the development of Deep Learning

1. What is Deep Learning?


Deep learning is a subset of machine learning [Figure 1] that is essentially
a neural network with three or more layers. These neural networks attempt to
simulate the behavior of the human brain (albeit far from matching its ability),
allowing them to "learn" from large amounts of unstructured or unlabeled data.
While a neural network with a single layer can still make approximate predictions,
additional hidden layers help optimize and refine the output for accuracy, as the
sketch after Figure 1 shows.

Figure 1: AI, ML and DL
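
To make the layered structure concrete, here is a minimal sketch (in Python
with NumPy; the layer sizes, random weights, and function names are
illustrative assumptions, not part of this project) of a forward pass through
a network with two hidden layers:

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linearity between layers; without it, stacked layers would
    # collapse into a single linear transformation.
    return np.maximum(0.0, x)

# Two hidden layers plus an output layer: "three or more layers".
layer_sizes = [4, 8, 8, 1]          # input dim 4, output dim 1 (assumed)
weights = [rng.normal(0.0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each hidden layer refines the representation produced by the
    # previous one before the output layer makes the final prediction.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]

print(forward(rng.normal(size=4)))  # one (untrained) prediction

Adding more entries to layer_sizes is what makes the network "deeper";
training would then adjust the weights from data rather than leaving them
random.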


Deep learning drives many artificial intelligence (AI) applications and services
that improve automation, performing analytical and physical tasks without human
intervention. Deep learning technology lies behind everyday products and services
(such as digital assistants, voice-enabled TV remotes, and credit card fraud
detection) as well as emerging technologies (such as self-driving cars).

2. The history and the development of Deep Learning


ANNs started with work by McCulloch and Pitts, who showed that sets of
simple units (artificial neurons) could perform all possible logic operations
and were thus capable of universal computation. This work was contemporaneous
with that of von Neumann and Turing, who first dealt with statistical aspects
of the information processing of the brain and with how to build a machine
capable of reproducing them. Frank Rosenblatt invented the perceptron machine
to perform simple pattern classification. However, this new learning machine
was incapable of solving some simple problems, such as the logical XOR, which
is not linearly separable. In 1969 Minsky and Papert showed that perceptrons
had intrinsic limitations that could not be transcended, leading to a fading
enthusiasm for ANNs. In 1982 John Hopfield proposed a special type of ANN
(the Hopfield network) and proved that it had powerful pattern-completion and
memory properties. The backpropagation algorithm was first described by
Linnainmaa (1970) as the representation of the cumulative rounding error of
an algorithm (as a Taylor expansion of the local rounding errors), without
reference to neural networks. In 1986, Rumelhart, Hinton, and Williams
rediscovered this powerful learning rule, which allowed them to train ANNs
with several hidden units, thus overcoming Minsky's criticism, as the sketch
below illustrates.
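
As a concrete illustration of the two points above, the following minimal
sketch (Python with NumPy; the hidden-layer size, learning rate, and
iteration count are assumptions chosen for the example) trains a
one-hidden-layer network on XOR with backpropagation, a task a single-layer
perceptron cannot solve:

import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a single-layer perceptron fails on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units (an assumption; 2 suffice in theory).
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)
lr = 1.0                                   # illustrative learning rate

for _ in range(5000):
    # Forward pass through hidden and output layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule through both layers (squared-error loss).
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())                # approaches [0, 1, 1, 0]

Removing the hidden layer leaves a model that cannot drive the XOR error to
zero no matter how long it trains, which is exactly the limitation Minsky
and Papert pointed out.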

Figure 2: Brief history of Deep learning



Year  Contributor                                 Contribution

1943  Walter Pitts and Warren McCulloch           McCulloch-Pitts Neuron
1957  Frank Rosenblatt                            Perceptron
1960  Henry J. Kelley                             The First Backpropagation Model
1962  Stuart Dreyfus                              Backpropagation With Chain Rule
1965  Alexey Grigoryevich Ivakhnenko and          Multilayer Neural Network
      Valentin Grigorʹevich Lapa
1969  Marvin Minsky and Seymour Papert            XOR Problem
1970  Seppo Linnainmaa                            Automatic Differentiation For
                                                  Backpropagation; Implements
                                                  Backpropagation In Computer Code
1971  Alexey Grigoryevich Ivakhnenko              Deep Neural Network
1980  Kunihiko Fukushima                          Neocognitron, First CNN Architecture
1982  John Hopfield                               Hopfield Network
1982  Paul Werbos                                 Backpropagation In ANN
1985  David H. Ackley, Geoffrey Hinton and        Boltzmann Machine
      Terrence Sejnowski
1986  Terry Sejnowski                             NetTalk, ANN Learns Speech
1986  Geoffrey Hinton, David Rumelhart and        Implementation Of Backpropagation
      Ronald Williams
1991  Sepp Hochreiter                             Vanishing Gradient Problem
1997  Sepp Hochreiter and Jürgen Schmidhuber      The Milestone Of LSTM
2006  Geoffrey Hinton, Ruslan Salakhutdinov,      Deep Belief Network
      Osindero and Teh
2008  Andrew Ng's group                           GPUs For Training Deep Neural Networks
2011  Yoshua Bengio, Antoine Bordes and           Vanishing Gradient
      Xavier Glorot
2012  Alex Krizhevsky                             AlexNet
2014  Ian Goodfellow                              Generative Adversarial Network
2016  DeepMind                                    Deep reinforcement learning model beats
                                                  human champion in the complex game of Go
2019  Yoshua Bengio, Geoffrey Hinton and          Turing Award 2018 for their immense
      Yann LeCun                                  contributions to advancements in deep
                                                  learning
