Neural Network - Overview

The document provides an overview of neural networks, including their applications in deepfakes, agriculture, and healthcare. It discusses the history of artificial intelligence, the structure and function of neurons, and the evolution of perceptrons and multi-layer perceptrons in decision-making. Additionally, it highlights key concepts in neural network training, such as optimization, activation functions, and the challenges faced in deep learning advancements.


Neural Network – A Quick Tour
SAPTARSI
An application of neural networks (Who is the celebrity?)
Deepfakes (a portmanteau of "deep learning" and "fake"[1]) are synthetic media[2] in which a person in an existing image or video is replaced with someone else's likeness. While the act of faking content is not new, deepfakes leverage powerful techniques from machine learning and artificial intelligence to manipulate or generate visual and audio content with a high potential to deceive.
In Agriculture
In Healthcare
Some basic questions
❑ What is AI?

❑ What is Strong AI vs. Weak AI?

❑ What is Inductive vs. Deductive AI?

❑ What is Symbolic vs. Sub-symbolic AI?

❑ What is the Turing Test?


The roots of AI
Most people in artificial intelligence trace the field’s official founding to a small workshop in 1956
at Dartmouth College organized by a young mathematician named John McCarthy

McCarthy persuaded Minsky, Shannon, and Rochester to help him organize “a 2 month, 10 man study of artificial intelligence to be carried out during the summer of 1956.”

The term artificial intelligence was McCarthy’s invention; he wanted to distinguish this field from a related effort called cybernetics. McCarthy later admitted that no one really liked the name—after all, the goal was genuine, not “artificial,” intelligence—but “I had to call it something, so I called it ‘Artificial Intelligence.’”
The army of ten

[Slide: photographs of the workshop participants, annotated with their contributions — LISP and logical formalism; cognitive science; father of information theory; GPS; bounded rationality (Nobel Prize in Economics); machine learning.]
What is a neuron?
❑ Neurons (also called neurones or nerve cells) are the
fundamental units of the brain and nervous system.
❑ They are the cells responsible for receiving sensory input
from the external world, for sending motor commands to our
muscles, and for transforming and relaying the electrical
signals at every step in between.
❑ Though the term was coined in the late nineteenth century
(1891), the neuron's actual structure was unknown until the
advent of the microscope.
Neurons across Species
Biological neurons

❑ A neuron is a basic computation unit.

❑ It takes input from some other neurons, processes it,
and gives output to other neurons.

❑ There are a lot of them, and they are interconnected.

❑ Together, they can perform complex tasks.
MP Neuron

❑ A decision like whether to watch a movie or not.

❑ Different factors:
✔ Is it a thriller?
✔ Is Akshay Kumar in the movie?
✔ Does the movie have a 6+ IMDb rating?
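The MP (McCulloch-Pitts) neuron described above can be sketched in a few lines of Python: all inputs are binary and unweighted, and the unit fires when enough of them are on. The threshold of 2 ("watch if at least two factors hold") is an assumed choice for illustration.

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts neuron: all inputs are binary and unweighted.
    Fires (returns 1) when the count of active inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Movie decision: [is_thriller, has_akshay_kumar, imdb_above_6]
print(mp_neuron([1, 0, 1], threshold=2))  # → 1 (watch)
print(mp_neuron([0, 0, 1], threshold=2))  # → 0 (skip)
```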
Perceptron (1957)

❑ The inputs now can have different weights.

❑ They can also have real values.

❑ The weight parameters are learnable
(Perceptron Learning Algorithm).
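A minimal sketch of the Perceptron Learning Algorithm mentioned above, assuming a toy AND-gate dataset and a learning rate of 0.1 (both illustrative choices, not from the slides): weights are nudged toward each misclassified example until the linearly separable data is fit.

```python
def step(z):
    return 1 if z > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Perceptron Learning Algorithm: nudge weights toward misclassified points."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            y_hat = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = y - y_hat          # 0 when correct; +1/-1 when misclassified
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND gate is linearly separable, so the algorithm converges
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x, _ in data])  # → [0, 0, 0, 1]
```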
Perceptron in decision making

[Diagram: inputs x1 and x2, with weights w1 = 2 and w2 = 3, feed a unit with threshold Θ = 6.]

If 2x1 + 3x2 < 6 then output 0; if 2x1 + 3x2 > 6 then output 1.
Perceptron in decision making

X1   X2   2x1 + 3x2 - 6   Y
 1    1        -1         0
 0    1        -3         0
 1    2         2         1
 2    2         4         1
Step Function
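The table above can be reproduced directly from the stated rule: take the weighted sum 2x1 + 3x2, compare it against the threshold 6, and apply the step function.

```python
def perceptron(x1, x2, w1=2, w2=3, theta=6):
    """Output 1 when the weighted sum exceeds the threshold, else 0 (step function)."""
    return 1 if w1 * x1 + w2 * x2 > theta else 0

# Reproduce the table: inputs, margin (weighted sum minus threshold), output
for x1, x2 in [(1, 1), (0, 1), (1, 2), (2, 2)]:
    print(x1, x2, 2 * x1 + 3 * x2 - 6, perceptron(x1, x2))
```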
Working of a perceptron
❑ A psychologist, Rosenblatt conceived of the Perceptron as a simplified mathematical
model of how the neurons in our brains operate:
❑ It takes a set of binary inputs (nearby neurons),
❑ multiplies each input by a continuous valued weight (the synapse strength to each
nearby neuron),
❑ thresholds the sum of these weighted inputs to output a 1 if the sum is big enough
and otherwise a 0 (in the same way neurons either fire or do not).
❑ The McCulloch-Pitts model lacked a mechanism for learning, which was crucial for it to
be usable for AI.
XOR Paradox
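A single perceptron cannot represent XOR, since no single line separates its positive and negative cases; two layers can. One classical construction (an assumption here, not taken from the slides) computes XOR as AND(OR, NAND) using three threshold units:

```python
def unit(x1, x2, w1, w2, b):
    """Single threshold unit: fires when w1*x1 + w2*x2 + b > 0."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def xor(x1, x2):
    h_or   = unit(x1, x2,  1,  1, -0.5)    # layer 1: OR
    h_nand = unit(x1, x2, -1, -1,  1.5)    # layer 1: NAND
    return unit(h_or, h_nand, 1, 1, -1.5)  # layer 2: AND of the two

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 1, 1, 0]
```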
Putting together different terms
Perceptron and the machine learning connection
Non-linearly Separable
More Multi-Layer Perceptron
Cascading of Layers
Two layers: generate convex decision regions

[Figure: inputs X and Y feed two layer-1 neurons S1 and S2 through weights w11, w12, w21, w22; their outputs feed a layer-2 neuron whose output is 1 only inside the convex region bounded by the lines S1 and S2.]
What needs to be done?
❑ Use an MLP for Breast Cancer classification.

❑ The data has 9 input features, hence the dimension of the input layer will be 9.

❑ We will add a hidden layer, and it may have any number of hidden nodes.

❑ The output layer will use one node.

❑ As the output will be 0 or 1, we will use the sigmoid activation function.

❑ We need to choose an optimizer, a loss function, and a metric.
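A minimal NumPy sketch of the forward pass for this architecture (9 inputs, one hidden layer, one sigmoid output node). The hidden width of 16, the ReLU hidden activation, and the random weights are assumed choices for illustration; in practice the weights would be learned by the chosen optimizer, as in a Keras model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Architecture from the slide: 9 input features -> hidden layer -> 1 sigmoid output.
# Hidden width (16) and ReLU are assumed; weights here are random, not trained.
W1, b1 = rng.normal(size=(9, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(X):
    h = np.maximum(0, X @ W1 + b1)   # hidden layer with ReLU
    return sigmoid(h @ W2 + b2)      # output in (0, 1): probability of class 1

X = rng.normal(size=(5, 9))          # 5 dummy samples, 9 features each
probs = forward(X)
print(probs.shape)                   # (5, 1); threshold at 0.5 for the 0/1 class
```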


Some key terms
Model

❑ A model is represented by some parameters θ.

❑ For a linear regression, these are the intercept and slope terms.
❑ For a neural network, the learnable parameters are the weights (and biases);
the number of nodes, the number of layers, and the activation function are
design choices.
❑ When we calculate the deviation for a single instance, it is called the loss
function.
❑ When we calculate it over the entire training set, it is called the cost function.

❑ Optimization: we want to find values of θ such that the loss is
minimized; this is where gradient descent (GD) comes into play.
Simple Exercise

https://www.kaggle.com/code/saptarsi/keras-and-simple-classification-sg/
Shallow and Deep
ML and DL
How to learn the weights (GD)
Effect of learning rate
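The effect of the learning rate can be seen on the simplest possible loss, f(w) = w², whose gradient is 2w (a toy example, not from the slides): a small rate shrinks w toward the minimum at 0, while a rate above 1 makes every step overshoot further and diverge.

```python
def gd(lr, steps=30, w=5.0):
    """Gradient descent on f(w) = w**2 (gradient 2w), starting from w = 5."""
    for _ in range(steps):
        w = w - lr * 2 * w   # each step multiplies w by (1 - 2*lr)
    return w

print(abs(gd(0.1)))   # small step: |w| shrinks toward the minimum at 0
print(abs(gd(1.1)))   # too large: each step overshoots and |w| blows up
```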

(Source: Aurélien Géron)
Better Activation Function
Better Optimization
Back Propagation
In physical world ……
Momentum

How about putting this phenomenon into gradient descent?
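Momentum can be sketched as an extra velocity term that accumulates past gradients, like a ball rolling downhill and gathering speed. The toy loss f(w) = w² and the coefficients below are assumed for illustration.

```python
def gd_momentum(lr=0.1, beta=0.9, steps=50, w=5.0):
    """Gradient descent with momentum on f(w) = w**2.
    The velocity v accumulates past gradients, like a rolling ball."""
    v = 0.0
    for _ in range(steps):
        grad = 2 * w               # gradient of w**2
        v = beta * v - lr * grad   # blend old velocity with the new gradient
        w = w + v
    return w

print(gd_momentum())  # oscillates but approaches the minimum at 0
```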
Types of Neural Network
Summary
❑ The magical capability of biological neurons inspired ANNs.

❑ The initial MP neuron was improved by the perceptron, which seemed to be the thing
we were waiting for until it was challenged by the XOR problem.

❑ Backpropagation (BP) steadied the ship around 1985.

❑ Even after that, training deep neural networks was not possible until a breakthrough in
terms of unsupervised pre-training.

❑ After that there has been no looking back, with newer optimizers, activations,
regularizations, and initialization strategies. It's still ongoing.
Thank You
