
TECHNICAL REPORT WRITING &

LANGUAGE LAB PRACTICE

TOPIC:

FUNDAMENTALS OF
DEEP LEARNING

PRESENTED BY: Aarif raza

Class Roll no: 61

Sec: A (ECE)

University roll no: 10400317217

Registration no: 171040110308


OBJECTIVE

With the reinvigoration of neural networks in the 2000s, deep learning
has become an extremely active area of research that is paving the way
for modern machine learning. Large companies such as Google,
Microsoft, and Facebook have taken notice and are actively growing in-
house deep learning teams.

Artificial intelligence is all the rage. All of a sudden everyone, whether
they understand it or not, is talking about it. Understanding the latest
advancements in artificial intelligence can seem overwhelming, but it
really boils down to two very popular concepts: machine learning and
deep learning. Lately, deep learning has been gaining popularity due to
its supremacy in terms of accuracy when trained with huge amounts of
data.

The software industry is nowadays moving towards machine
intelligence. Machine learning has become necessary in every sector as
a way of making machines intelligent. Put simply, machine learning is a
set of algorithms that parse data, learn from it, and then apply what
they have learned to make intelligent decisions.

The thing about traditional machine learning algorithms is that, as
complex as they may seem, they are still machine-like. They need a lot
of domain expertise and human intervention, and they are capable only
of what they are designed for; nothing more, nothing less. For AI
designers and the rest of the world, that is where deep learning holds a
bit more promise.
INTRODUCTION

What is Deep Learning?


Deep learning is a branch of machine learning based entirely on
artificial neural networks; since a neural network mimics the human
brain, deep learning is also a kind of mimicry of the human brain. In
deep learning, we don't need to explicitly program everything. The
concept of deep learning is not new; it has been around for quite a few
years. It is in the spotlight nowadays because earlier we did not have
that much processing power or that much data. As processing power
has increased exponentially over the last 20 years, deep learning and
machine learning have come into the picture.
A formal definition of deep learning is:
Deep learning is a particular kind of machine learning that achieves
great power and flexibility by learning to represent the world as a
nested hierarchy of concepts, with each concept defined in relation to
simpler concepts, and more abstract representations computed in terms
of less abstract ones.

The human brain contains approximately 100 billion neurons altogether,
and each neuron is connected to thousands of its neighbours.
The question here is how we recreate these neurons in a computer.
We create an artificial structure called an artificial neural network, in
which we have nodes, or neurons. Some neurons carry input values and
some carry output values, and in between there may be many
interconnected neurons in the hidden layer.
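As a minimal sketch of that structure, the snippet below builds such a
network in Keras (assuming TensorFlow is installed; the four inputs,
sixteen hidden neurons, and three outputs are arbitrary illustrative
choices, not anything prescribed above):

from tensorflow import keras
from tensorflow.keras import layers

# Input neurons, one hidden layer of interconnected neurons, output neurons.
model = keras.Sequential([
    keras.Input(shape=(4,)),                # 4 input values
    layers.Dense(16, activation="relu"),    # interconnected hidden layer
    layers.Dense(3, activation="softmax"),  # 3 output values
])
model.summary()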
THE NEURAL NETWORK

BUILDING INTELLIGENT MACHINES:


The brain is the most incredible organ in the human body. It dictates the
way we perceive every sight, sound, smell, taste, and touch. It enables
us to store memories, experience emotions, and even dream. Without it,
we would be primitive organisms, incapable of anything other than the
simplest of reflexes. The brain is, inherently, what makes us intelligent.
For decades, we've dreamed of building intelligent machines with brains
like ours: robotic assistants to clean our homes, cars that drive
themselves, microscopes that automatically detect diseases. But building
these artificially intelligent machines requires us to solve some of the
most complex computational problems we have ever grappled with,
problems that our brains can already solve in a matter of microseconds.
To tackle these problems, we'll have to develop a radically different way
of programming a computer, using techniques largely developed over
the past decade.
The Limits of Traditional Computer Programs:
Why exactly are certain problems so difficult for computers to solve?
Well, it turns out that traditional computer programs are designed to be
very good at two things: 1) performing arithmetic really fast and 2)
explicitly following a list of instructions. So if you want to do some
heavy financial number crunching, you’re in luck. Traditional computer
programs can do the trick. But let’s say we want to do something
slightly more interesting, like write a program to automatically read
someone’s handwriting. How do we distinguish between threes and
fives? Or between fours and nines? We can add more and more rules, or
features, through careful observation and months of trial and error, but
it’s quite clear that this isn’t going to be an easy process. Many other
classes of problems fall into this same category: object recognition,
speech comprehension, automated translation, etc. We don’t know what
program to write because we don’t know how it’s done by our brains.
And even if we did know how to do it, the program might be
horrendously complicated.
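To make that concrete, here is a hedged sketch of what such a
hand-written rule program might look like. Every feature and threshold
below is hypothetical, invented purely to show why the approach does
not scale:

# A rule-based "digit classifier" of the kind described above. The
# features and thresholds are made up for illustration only.
def classify_digit(pixels):
    # pixels: a 28x28 grid of 0/1 values from a scanned handwritten digit.
    top_ink = sum(pixels[r][c] for r in range(14) for c in range(28))
    bottom_ink = sum(pixels[r][c] for r in range(14, 28) for c in range(28))
    # Rule: fives (sometimes) carry more ink in the top half than threes...
    if top_ink > 1.2 * bottom_ink:
        return 5
    # ...but sloppy threes, fours, and nines all break the rule, so we keep
    # bolting on exceptions through months of trial and error.
    return 3

Rules like these take months of careful observation to tune, and each
new exception erodes the ones before it; a learned model sidesteps the
problem entirely.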
PERCEPTRONS:
So let's see what a perceptron is. First, have a look at the neuron
cell: the dendrites are extensions of the nerve cell. They receive
signals and transmit them to the cell body, which processes the
stimulus and decides whether to trigger signals to other neuron
cells. If the cell decides to trigger signals, the extension of the cell
body called the axon triggers chemical transmission at its end to
the other cells. You don't have to memorize anything here; we are
not studying neuroscience, so a vague impression of how it works
is enough.
A perceptron looks quite similar to the nerve cell, and that is no
accident: perceptrons and other neural networks are inspired by
real neurons in our brain. Note that they are only inspired by them
and do not work exactly like real neurons. The procedure by which
a perceptron processes data is as follows:

1. On the left side you have neurons (small circles) labeled x with
subscripts 1, 2, …, m, carrying the input data.
2. We multiply each input by a weight w, also labeled with
subscripts 1, 2, …, m, along the arrow (also called a synapse)
leading to the big circle in the middle: w1 * x1, w2 * x2, w3 * x3,
and so on.
3. Once all the inputs have been multiplied by their weights, we
sum them all up and add another pre-determined number called
the bias.
4. Then, we push the result further to the right, through the step
function. This means that if the result from step 3 is any number
equal to or larger than 0, we get 1 as output; otherwise, if the
result is smaller than 0, we get 0 as output.
5. The output is either 1 or 0.

Note that, alternatively, if you move the bias to the right side of the
inequality in the activation function, as sum(w * x) >= -b, then this
-b is called a threshold value. So if the sum of the weighted inputs
is greater than or equal to the threshold, the activation triggers a 1;
otherwise, the activation outcome is 0. Choose whichever
representation helps you understand better, as the two are
interchangeable; the sketch below shows both.
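The whole procedure fits in a few lines of code. Below is a minimal
sketch (the particular inputs, weights, and bias are made-up numbers
for illustration):

import numpy as np

def perceptron(x, w, b):
    # Steps 2-3: multiply each input by its weight, sum, and add the bias.
    weighted_sum = np.dot(w, x) + b
    # Step 4: step activation, 1 if the result is >= 0, otherwise 0.
    return 1 if weighted_sum >= 0 else 0

x = np.array([1.0, 0.5, -0.2])  # inputs x1..x3
w = np.array([0.4, 0.6, 0.9])   # weights w1..w3
b = -0.5                        # bias

print(perceptron(x, w, b))  # 1, since 0.4 + 0.3 - 0.18 - 0.5 = 0.02 >= 0
# Threshold form: fire when sum(w * x) >= -b, here 0.52 >= 0.5, so it fires.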
APPLICATIONS

1. Self-driving cars
Companies building these types of driver-assistance services, as well as
full-blown self-driving cars like Google's, need to teach a computer how
to take over key parts (or all) of driving using digital sensor systems
instead of a human's senses. To do that, companies generally start out
by training algorithms on a large amount of data.

2. Deep Learning in Healthcare


Breast or skin cancer diagnostics? Mobile and monitoring apps?
Prediction and personalised medicine based on biobank data? AI is
completely reshaping life sciences, medicine, and healthcare as an
industry. Innovations in AI are advancing the future of precision
medicine and population health management in unbelievable ways.
Computer-aided detection, quantitative imaging, decision support tools,
and computer-aided diagnosis will play a big role in the years to come.

3. Voice Search & Voice-Activated Assistants


One of the most popular usage areas of deep learning is voice search
and voice-activated intelligent assistants. With the big tech giants
having already made significant investments in this area, voice-activated
assistants can be found on nearly every smartphone. Apple's Siri has
been on the market since October 2011. Google Now, the voice-activated
assistant for Android, was launched less than a year after Siri. The
newest of the voice-activated intelligent assistants is Microsoft's Cortana.

4. Automatic Machine Translation


This is the task of automatically translating given words, phrases, or
sentences from one language into another.
Automatic machine translation has been around for a long time, but
deep learning is achieving top results in two specific areas:

• Automatic Translation of Text
• Automatic Translation of Images

Text translation can be performed without any pre-processing of the
sequence, allowing the algorithm to learn the dependencies between
words and their mapping to a new language.
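As a hedged sketch of how such a model can be wired up, the snippet
below builds a toy encoder-decoder in Keras (assuming TensorFlow is
available; the vocabulary sizes and layer width are arbitrary
placeholders, and a real system would need tokenized parallel text to
train on):

from tensorflow import keras
from tensorflow.keras import layers

src_vocab, tgt_vocab, units = 1000, 1000, 128  # placeholder sizes

# Encoder: reads the source sentence and compresses it into a state.
enc_in = keras.Input(shape=(None,))
enc_emb = layers.Embedding(src_vocab, units)(enc_in)
_, h, c = layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generates the target sentence conditioned on that state,
# learning the mapping between the two languages end to end.
dec_in = keras.Input(shape=(None,))
dec_emb = layers.Embedding(tgt_vocab, units)(dec_in)
dec_seq = layers.LSTM(units, return_sequences=True)(dec_emb, initial_state=[h, c])
probs = layers.Dense(tgt_vocab, activation="softmax")(dec_seq)

model = keras.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")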

5. Automatic Text Generation


This is an interesting task in which a corpus of text is learned, and new
text is generated from the resulting model, word by word or character
by character.

The model is capable of learning how to spell, punctuate, and form
sentences, and can even capture the style of the text in the corpus.
Large recurrent neural networks are used to learn the relationship
between items in the sequences of input strings and then generate text.
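A hedged, toy-scale sketch of this idea in Keras (assuming TensorFlow
is installed; the one-sentence corpus, window length, and layer sizes
are illustrative stand-ins for the large corpora and large networks
described above):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

corpus = "deep learning can generate text character by character. "
chars = sorted(set(corpus))
char_to_id = {c: i for i, c in enumerate(chars)}

# Sliding windows: each 10-character sequence predicts the next character.
window = 10
X = np.array([[char_to_id[c] for c in corpus[i:i + window]]
              for i in range(len(corpus) - window)])
y = np.array([char_to_id[corpus[i + window]]
              for i in range(len(corpus) - window)])

model = keras.Sequential([
    layers.Embedding(len(chars), 16),
    layers.LSTM(64),  # learns relationships between items in the sequence
    layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, verbose=0)  # toy run; real models train far longer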

6. Automatic Handwriting Generation


This is the task of generating new handwriting for a given word or
phrase, given a corpus of handwriting examples.

The handwriting is provided as a sequence of coordinates used by a pen
when the handwriting samples were created. From this corpus, the
relationship between the pen movement and the letters is learned, and
new examples can be generated ad hoc.
7. Image Recognition
Another popular area regarding deep learning is image recognition. It
aims to recognize and identify people and objects in images as well as to
understand the content and context. Image recognition is already being
used in several sectors like gaming, social media, retail, tourism, etc.

This task requires the classification of objects within a photograph as
one of a set of previously known objects. A more complex variation of
this task, called object detection, involves specifically identifying one or
more objects within the scene of the photograph and drawing a box
around them.
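As a hedged sketch of the classification task, the snippet below runs a
pretrained ImageNet classifier from Keras (assuming TensorFlow is
installed; the random array stands in for a real 224x224 photograph, so
the predicted labels will be meaningless):

import numpy as np
from tensorflow import keras

# Pretrained MobileNetV2; downloads ImageNet weights on first use.
model = keras.applications.MobileNetV2(weights="imagenet")

image = np.random.rand(1, 224, 224, 3) * 255  # placeholder for a real photo
image = keras.applications.mobilenet_v2.preprocess_input(image)

preds = model.predict(image)
for _, label, score in keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(label, round(float(score), 3))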

8. Automatic Colorization
Image colorization is the problem of adding color to black-and-white
photographs. Deep learning can use the objects and their context within
the photograph to color the image, much as a human operator might
approach the problem. This capability leverages the high-quality, very
large convolutional neural networks trained for ImageNet, co-opted for
the problem of image colorization. Generally, the approach involves the
use of very large convolutional neural networks and supervised layers
that recreate the image with the addition of color.
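A hedged, toy-scale sketch of that supervised setup (assuming
TensorFlow; the 64x64 input and three small layers stand in for the
very large ImageNet-scale networks described above). The model maps
a one-channel grayscale image to two chrominance channels and would
be trained by regression against real color photos:

from tensorflow import keras
from tensorflow.keras import layers

# Grayscale in, two color (chrominance) channels out.
inp = keras.Input(shape=(64, 64, 1))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
out = layers.Conv2D(2, 3, padding="same", activation="tanh")(x)

model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")  # regress toward the true colors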

9. Predicting Earthquakes
Harvard scientists used deep learning to teach a computer to perform
viscoelastic computations, the computations used in earthquake
prediction. Until their paper, such computations were very
computationally intensive, but this application of deep learning improved
calculation time by 50,000%. When it comes to earthquake calculations,
timing is important, and this improvement can be vital in saving lives.
10. Neural Networks in Finance
Futures markets have seen phenomenal success since their inception, in
both developed and developing countries, during the last four decades.
This success is attributable to the tremendous leverage futures provide
to market participants. This study analyzes a trading strategy that
benefits from this leverage by using the Capital Asset Pricing Model
(CAPM) and the cost-of-carry relationship. The team applies technical
trading rules developed from spot market prices to futures market
prices using a CAPM-based hedge ratio. Historical daily prices of twenty
stocks from each of ten markets (five developed markets and five
emerging markets) are used for the analysis.
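For reference, the CAPM beta at the heart of such a hedge ratio is just
the covariance of the stock's returns with the market's returns, divided
by the market's variance. A small sketch with made-up return series
(the real study uses historical daily prices from ten markets):

import numpy as np

stock_returns = np.array([0.012, -0.004, 0.008, 0.003, -0.010])   # hypothetical
market_returns = np.array([0.010, -0.002, 0.006, 0.004, -0.008])  # hypothetical

# beta = Cov(stock, market) / Var(market); it doubles as a hedge ratio,
# i.e. how much futures exposure offsets one unit of the stock.
beta = np.cov(stock_returns, market_returns)[0, 1] / np.var(market_returns, ddof=1)
print(f"beta / hedge ratio: {beta:.3f}")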
CONCLUSION
Deep learning is a quickly growing field in computer science. It has
applications in nearly every other field of study and is already being
implemented commercially, because machine learning can solve
problems too difficult or time-consuming for humans to solve. To
describe deep learning in general terms: a variety of models are used
to learn patterns in data and make accurate predictions based on the
patterns they observe.
Advantages
• Delivers best-in-class performance, significantly
outperforming other solutions in multiple domains. This
includes speech, language, vision, playing games like Go,
etc. This isn't by a little bit, but by a significant amount.
• Reduces the need for feature engineering, one of the most
time-consuming parts of machine learning practice.
• Is an architecture that can be adapted to new problems
relatively easily: vision, time series, language, and so on,
using techniques like convolutional neural networks,
recurrent neural networks, and long short-term memory.
Disadvantages
• Requires a large amount of data; if you only have
thousands of examples, deep learning is unlikely to
outperform other approaches.
• Is extremely computationally expensive to train. The most
complex models take weeks to train using hundreds of
machines equipped with expensive GPUs.
• Does not have much in the way of a strong theoretical
foundation.
REFERENCES

❖ Fundamentals of Deep Learning: Designing Next-Generation
Machine Intelligence Algorithms, by Nikhil Buduma.

❖ YouTube videos by Andrew Ng, Adjunct Professor, Stanford
University.

❖ https://www.quora.com/

❖ https://www.geeksforgeeks.org/introduction-deep-learning/
