Deep learning is a type of machine learning that uses neural networks with many layers between the input and output. It is useful for applications like computer vision, natural language processing, and speech recognition. The basic building blocks of deep learning include perceptrons with activation functions like sigmoid, tanh, and ReLU. Common deep learning architectures are convolutional neural networks, recurrent neural networks, long short-term memory networks, stacked auto-encoders, deep belief networks, and deep Boltzmann machines. Each architecture has advantages for different types of problems involving images, text, or sequential data.


Deep Learning State of the Art

Amulya Viswambharan ID 202090007
Kehkshan Fatima ID
Agenda
• Introduction to Deep Learning
  – What is deep learning?
  – Why is it useful?
• Main Components
• Basic Architectures
Introduction
[Figure omitted. Source: Google]

Machine Learning vs. Deep Learning
[Figure omitted. Source: Google]
Why Deep Learning?
Applications of Deep Learning
Basic Building Block of Deep Learning: the Perceptron

Activation functions compared (a minimal code sketch follows the comparison):

Sigmoid
• Smooth gradient, preventing "jumps" in output values.
• Output values bound between 0 and 1, normalizing the output of each neuron.
• Drawbacks: vanishing gradient; outputs not zero-centered; computationally expensive.

Hyperbolic Tangent (tanh)
• Zero-centered; otherwise behaves like the sigmoid function.
• Drawback: vanishing gradient, as with the sigmoid.

ReLU (Rectified Linear Unit)
• Computationally efficient; allows the network to converge very quickly.
• Non-linear.
• Drawback: the "dying ReLU" problem.
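To make the comparison concrete, here is a minimal sketch of a single perceptron evaluated with each of the three activations. It uses Python with NumPy; the weights, inputs, and function names are our own illustrative choices, not values from the slides.

```python
import numpy as np

# Common activation functions used with the perceptron.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # outputs in (0, 1), not zero-centered

def tanh(z):
    return np.tanh(z)                  # outputs in (-1, 1), zero-centered

def relu(z):
    return np.maximum(0.0, z)          # cheap to compute, can "die" for z < 0

def perceptron(x, w, b, activation=sigmoid):
    """Single perceptron: weighted sum of the inputs, then a non-linearity."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])         # example input features
w = np.array([0.4, 0.3, -0.2])         # example weights
b = 0.1                                 # bias
for act in (sigmoid, tanh, relu):
    print(act.__name__, perceptron(x, w, b, act))
```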
Single-Layer and Deep Neural Networks
Neural Network Predictions
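As a rough illustration of how a network turns an input into a prediction, the sketch below (our own toy example with random, untrained weights, not taken from the slides) contrasts a single-layer model with a small deep network: each layer applies a weighted sum followed by a non-linearity.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()               # turns scores into class probabilities

rng = np.random.default_rng(0)
x = rng.normal(size=4)               # one input example with 4 features

# Single-layer network: input -> output.
W_out = rng.normal(size=(3, 4)); b_out = np.zeros(3)
shallow_pred = softmax(W_out @ x + b_out)

# Deep network: input -> hidden -> hidden -> output.
W1 = rng.normal(size=(8, 4)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 8)); b2 = np.zeros(8)
W3 = rng.normal(size=(3, 8)); b3 = np.zeros(3)
h1 = relu(W1 @ x + b1)
h2 = relu(W2 @ h1 + b2)
deep_pred = softmax(W3 @ h2 + b3)

print("single-layer prediction:", shallow_pred)
print("deep prediction:        ", deep_pred)
```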
There are several types of neural network architectures:
• Convolutional Neural Network (CNN)
• Recurrent Neural Networks (RNNs)
• Long Short-Term Memory Networks (LSTMs)
• Stacked Auto-Encoders
• Deep Boltzmann Machine (DBM)
• Deep Belief Networks (DBN)
Recurrent Neural Networks (RNNs)
• Deal with sequential data.
• The output from an earlier step is fed as input to the current step (see the sketch after the applications below).
• Suffer from the vanishing gradient problem.

Applications
• Sentiment classification
• Image captioning
• Speech recognition
• Natural language processing
• Machine translation
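A minimal sketch of the recurrence, assuming a plain (vanilla) RNN cell with a tanh activation; the weights and the toy sequence are random values of our own choosing.

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    """Vanilla RNN: the hidden state from the previous step is fed back in."""
    h = np.zeros(W_hh.shape[0])
    states = []
    for x_t in inputs:                               # iterate over the sequence
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)     # mix current input with memory
        states.append(h)
    return states

rng = np.random.default_rng(0)
seq = [rng.normal(size=3) for _ in range(5)]         # toy sequence of 5 steps
W_xh = rng.normal(size=(4, 3)) * 0.1                 # input-to-hidden weights
W_hh = rng.normal(size=(4, 4)) * 0.1                 # hidden-to-hidden weights
b_h = np.zeros(4)
print(rnn_forward(seq, W_xh, W_hh, b_h)[-1])         # final hidden state
```

Repeated multiplication by W_hh across many steps is what makes the gradients shrink, which is the vanishing gradient problem mentioned above.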
Long Short-Term Memory Networks (LSTMs)
• Capable of learning long-term dependencies.
• Gates are a way to optionally let information through (see the sketch after the applications below).

Applications
• Captioning of images and videos
• Language translation and modeling
• Sentiment analysis
• Stock market predictions
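A minimal sketch of a single LSTM step, showing how the forget, input, and output gates control what is kept in the cell state. The parameter names, sizes, and toy values are our own illustration, not from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: gates decide what to forget, what to write, what to expose.

    W, U, b hold parameters for the forget (f), input (i),
    candidate (g), and output (o) transformations.
    """
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])   # forget gate
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])   # input gate
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])   # candidate cell update
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])   # output gate
    c = f * c_prev + i * g        # long-term cell state, selectively updated
    h = o * np.tanh(c)            # hidden state passed to the next step
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = {k: rng.normal(size=(n_hid, n_in)) * 0.1 for k in "figo"}
U = {k: rng.normal(size=(n_hid, n_hid)) * 0.1 for k in "figo"}
b = {k: np.zeros(n_hid) for k in "figo"}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in [rng.normal(size=n_in) for _ in range(5)]:    # toy sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h)
```

Because the cell state is updated additively rather than by repeated matrix multiplication, gradients survive over longer spans than in a plain RNN.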
Stacked Auto-Encoders

• Feedforward neural networks trained so that the output reproduces the input (the target is the input itself); see the sketch after the applications below.

Applications
• Data denoising
• Dimensionality reduction
• Variational Autoencoders (VAE)
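A minimal sketch of the idea, assuming a one-hidden-layer autoencoder with random, untrained weights; in practice the encoder and decoder weights are learned by minimizing the reconstruction error. All names and sizes are our own toy choices.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.random(16)                           # toy input vector (16 features)

# Encoder compresses the input to a small code; decoder reconstructs it.
W_enc = rng.normal(size=(4, 16)) * 0.1       # 16 -> 4 bottleneck
W_dec = rng.normal(size=(16, 4)) * 0.1       # 4 -> 16 reconstruction

code = relu(W_enc @ x)                       # low-dimensional representation
x_hat = sigmoid(W_dec @ code)                # attempt to reproduce the input

# Training would minimize the reconstruction error between x and x_hat.
print("reconstruction error:", np.mean((x - x_hat) ** 2))
```

Stacking repeats this pattern: each additional encoder is trained on the codes produced by the one below it, giving progressively more compact representations.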
Deep Belief Network
• Use probabilities and unsupervised learning to produce outputs.
• Greedy learning algorithms start from the bottom layer and move up, fine-tuning the generative weights (see the sketch after the applications below).

Applications
• Image recognition
• Video recognition
• Motion-capture data
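A minimal sketch of the greedy, bottom-up idea, assuming the common construction of a DBN from stacked restricted Boltzmann machines trained with one step of contrastive divergence (CD-1). Biases and the final fine-tuning pass are omitted, and all names, sizes, and data are our own toy choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p, rng):
    return (rng.random(p.shape) < p).astype(float)   # Bernoulli sampling

def rbm_cd1_update(v0, W, rng, lr=0.1):
    """One contrastive-divergence (CD-1) update for a single RBM layer."""
    h0_p = sigmoid(v0 @ W)                 # hidden probabilities given the data
    h0 = sample(h0_p, rng)
    v1_p = sigmoid(h0 @ W.T)               # reconstruction of the visible units
    h1_p = sigmoid(v1_p @ W)
    W += lr * (np.outer(v0, h0_p) - np.outer(v1_p, h1_p))
    return W

rng = np.random.default_rng(0)
data = (rng.random((20, 12)) > 0.5).astype(float)    # toy binary data

# Greedy layer-wise training: each RBM learns on the hidden activities
# produced by the layer below it, from the bottom layer upward.
layer_sizes = [12, 8, 4]
inputs = data
weights = []
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    W = rng.normal(size=(n_vis, n_hid)) * 0.1
    for epoch in range(5):
        for v in inputs:
            W = rbm_cd1_update(v, W, rng)
    inputs = sigmoid(inputs @ W)            # pass data up to train the next layer
    weights.append(W)
print([w.shape for w in weights])
```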
