
Neural Networks - Lecture 2

1) Artificial intelligence and machine learning have advanced significantly in recent decades, allowing for capabilities like self-driving cars and improved computer vision and speech recognition. 2) Neural networks, a key machine learning technique, operate similarly to the human brain by learning from examples rather than relying on programmed instructions. They have been applied successfully in medical applications like disease detection from scans. 3) While artificial neural networks are inspired by the human brain, the brain remains far more sophisticated, being highly parallel, fault-tolerant, and able to handle fuzzy inputs, whereas computers rely on precise signals and programming.



ARTIFICIAL INTELLIGENCE

لَقَدْ خَلَقْنَا الْإِنْسَانَ فِي أَحْسَنِ تَقْوِيمٍ

Indeed, Allah has created the human in the best of forms. (Surah At-Tin, verse 4)

This verse indicates that Insaan (humans) are the best and most noble of Allah's creations.

ARTIFICIAL INTELLIGENCE
• An extension of the industrial revolution of the 1960s, of which ANNs are the biggest part
• Achievements have included:
  – Write a text and AI draws the picture
  – Show a picture and AI writes the text
  – Image recognition and identification accuracy has gone up from 77% to 97%, whereas human accuracy is still about 94%
• The future is bright if AI stays available to all, but bleak if it stays in a few hands


Machine Learning
• In the past decade, machine
learning has given us
– Self-driving cars
– Practical speech recognition
– Real-time computer vision
applications
– Effective web search
– Better understanding of the
human genome


NEWS IN MAINSTREAM MEDIA

[News clippings shown on the slides: "Talking machines speaking out problems" (25 Apr 2017), and further articles dated 2 Mar 2017 and 21 Apr 2017]

Google AI Creating its own AI

Deep Learning?
1. A class of machine learning techniques
that exploit many layers of non-linear
information processing for supervised or
unsupervised feature extraction and
transformation, and for pattern analysis
and classification.
2. A sub-field within machine learning that is
based on algorithms for learning multiple
levels of representation in order to model
complex relationships among data.
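As a minimal, illustrative sketch of what "many layers of non-linear information processing" and "multiple levels of representation" can look like in code (not part of the lecture; it assumes Python with numpy, and the layer sizes and random weights are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One layer of non-linear information processing: affine transform + ReLU.
    return np.maximum(0.0, x @ w + b)

# A tiny "deep" network: input -> two hidden layers -> output scores.
x = rng.normal(size=(4, 8))                        # 4 samples, 8 raw features
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)    # first level of representation
w2, b2 = rng.normal(size=(16, 8)), np.zeros(8)     # higher-level representation
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)      # output layer (e.g. 2 class scores)

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
scores = h2 @ w3 + b3                              # no non-linearity on the final scores
print(scores.shape)                                # (4, 2)

Each hidden layer re-represents the previous layer's output; in actual deep learning the weight matrices are learned from data rather than drawn at random.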


THE ISSUES WITH HUMANS

• Artificial Neural Networks are currently a hot research
area in medicine and it is believed that they will
receive extensive application to biomedical systems
in the next few years. At the moment, the research is
mostly on modeling parts of the human body and
recognizing diseases from various scans. Neural
networks are ideal in recognising diseases using
scans since there is no need to provide a specific
algorithm on how to identify the disease. Neural
networks learn by example so the details of how to
recognise the disease are not needed.



PREFACE

[Images: a human brain and a mouse brain]

Human and Artificial Neurons (FUNCTIONALITIES)

• There are hundreds of different types of neurons, each with its characteristic function, shape and location, but the main features of a neuron of any type are its cell body (called the soma), synapses, dendrites and the axon.
• Information is transferred from one neuron to another at specialised junctions, called synapses. The dendrites act as the input units of external signals to the neuron and the axon acts as the output unit.

Human and Artificial Neurons (FUNCTIONALITIES)

• Much is still unknown
  – How the brain trains itself / processes information
  – Theories abound

[Diagram: two connected neurons, labelled with dendrites, cell body, axon and synapse]

Human and Artificial Neurons (FUNCTIONALITIES)

• Dendrites: input
• Cell body: soma
• Axon: output (electrical pulses)
• Synapses: junctions (biochemical activities)
• Brain's functionality: thought, emotion and cognition

Human and Artificial Neurons (Investigating similarities)

• The human brain is the most sophisticated information-processing device we know of.
• One of the most important features of the brain is the ability to learn from previous experience.
• Thus humans have great success in dealing with unforeseen situations, utilising the knowledge gained from previously experienced, similar situations.
• This is not true of systems based on computer technology, as they rely entirely on human pre-defined instructions spelling out each step for all tasks. Thus bugs in these instructions may cause all sorts of unexpected results.


BRAIN vs CPTR

• Brain: operates with 100-mV nerve impulses lasting nearly a millisecond; a neuron takes about 4 ms to complete a firing cycle.
  Computer: operates on 5-V signal levels switching at nanosecond intervals; a 33 MHz computer takes about 40 ns to execute a single instruction, and supercomputers take about 3 ns.
• Brain: robust and fault tolerant; nerve cells in the brain die every day without affecting its performance significantly.
  Computer: the destruction of even a single transistor may cause complete loss of functionality.
• Brain: accepts fuzzy, noisy, poorly conditioned inputs and produces an approximate output.
  Computer: can only handle precise data fed in properly.
• Brain: highly parallel due to massive inter-connectivity between neurons.
  Computer: conventional computers are totally sequential (few connections between their basic elements).
• Brain: connects with other elements via synapses.
  Computer: links/weights.

What are Neural Networks?

• Simple computational elements forming a large network
  – Emphasis on learning (pattern recognition)
  – Local computation (neurons)
• Configured for a particular application
  – Pattern recognition / data classification
• ANN algorithms are modeled after the brain
  – The brain's response is roughly 100,000 times slower, yet it handles complex tasks (image and sound recognition, motion control)
  – and it is roughly 10,000,000,000 times more efficient in energy consumption per operation
• The definition of NNs is vague
  – Often, but not always, inspired by the biological brain


ANN vs CPTR
• ANN:
  – Like the human brain: interconnected neurons
  – Learns by examples; un-programmable
  – Requires careful selection of examples
  – Its operation is unpredictable
• Conventional computer (an algorithmic, cognitive approach):
  – The way the problem is to be solved must be known and stated in small, unambiguous instructions.
  – These instructions are then converted to a high-level language program and then into machine code that the computer can understand.
  – Predictable: if anything goes wrong, it is due to a software or hardware fault.

ANN vs CPTR
• ANNs and conventional algorithmic computers
  – are not in competition; they complement each other.
  – Some tasks, such as arithmetic operations, are more suited to the algorithmic approach of computers.
  – Other tasks are more suited to ANNs.
  – A large number of tasks require systems that use a combination of the two, with a conventional computer typically used to supervise the neural network.
• Neural networks do not perform miracles, but if used sensibly they can produce some amazing results.


From Human neurons to artificial neurons

• We construct artificial neurons by deducing the essential features and interconnections of human neurons
• We then program a computer to simulate them
• As knowledge is incomplete and computation power is limited,
  – we arrive at a gross idealization of real neurons

But before we delve into it:

History

• The roots of work on ANN are in:
• Neurobiological studies (more than a century ago):
  How do nerves behave when stimulated by different magnitudes of electric current? Is there a minimal threshold needed for nerves to be activated? Given that no single nerve cell is long enough, how do different nerve cells communicate with each other?


History

• Psychological studies:
• How do animals learn, forget, recognize
and perform other types of tasks?
• Psycho-physical experiments helped to
understand how individual neurons and
groups of neurons work.
• McCulloch and Pitts introduced the first mathematical model of a single neuron, widely applied in subsequent work.

History
(First Attempts)
• Initial simulations used formal logic.
• McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology.
  – These models made several assumptions about how neurons worked. Their networks were based on simple neurons which were considered to be binary devices with fixed thresholds. The results of their model were simple logic functions such as "a or b" and "a and b" (see the sketch after this slide).
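A minimal sketch, not from the slides, of such a binary threshold unit in Python; the unit weights and the thresholds of 2 and 1 are illustrative choices that realise the "a and b" and "a or b" functions:

def mcculloch_pitts(inputs, weights, threshold):
    # Binary threshold device: fire (1) if the weighted sum reaches the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# "a and b": both inputs must be active (threshold 2 with unit weights).
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
# "a or b": a single active input is enough (threshold 1).
OR = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 1, 1, 1]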


History (First Attempts) - ENGR & PHYSICIANS

• Another attempt used computer simulations, made by two groups (Farley and Clark, 1954; Rochester, Holland, Haibit and Duda, 1956).
  – The first group (IBM researchers) maintained close contact with neuroscientists at McGill University, so whenever their models did not work they consulted the neuroscientists. This interaction established a multidisciplinary trend which continues to the present day.

History
(Further Attempts)
• Not only was neuroscience influential in the development of neural networks; psychologists and engineers also contributed to the progress of ANN simulations.
– Rosenblatt (1958) stirred considerable
interest and activity in the field when he
designed and developed the Perceptron.
• The Perceptron had three layers with the middle
layer known as the association layer
• This system could learn to connect or associate a
given input to a random output unit


History
(Further Attempts)
• Another system was the ADALINE (ADAptive LInear Element)
  – Developed in 1960 by Widrow and Hoff (of Stanford University)
  – Trained with the Least-Mean-Squares (LMS) learning rule (see the sketch after this slide)
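A minimal sketch of the LMS (delta) rule as it might be coded today; this is my own illustration rather than Widrow and Hoff's original formulation, and the learning rate, epoch count and toy data are assumptions:

def lms_train(samples, lr=0.1, epochs=50):
    """Train a linear unit y = w.x + b with the Least-Mean-Squares rule."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, d in samples:                       # d is the desired output
            y = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = d - y                            # LMS error signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy example: learn d = 2*x1 - x2 from a few samples.
data = [([1, 0], 2), ([0, 1], -1), ([1, 1], 1), ([2, 1], 3)]
print(lms_train(data))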

History
• In 1969 Minsky and Papert wrote a book in which they generalized the limitations of single-layer Perceptrons to multilayered systems.
• In the book they stated "...our intuitive judgment that the extension (to multilayer systems) is sterile", i.e. not capable.
• The significant result of their book was to eliminate funding for research with neural network simulations.
• The conclusions supported the disenchantment of researchers in the field.
• As a result, considerable prejudice against this field was activated.


History
• Innovation in the 70's (even though funding was minimal):
• Individual researchers continued laying foundations
• von der Malsburg (1973): competitive learning and self-organization
Big neural-nets boom in the 80's and re-emergence
• Books, conferences, university programmes, funding by major countries
• Grossberg: adaptive resonance theory (ART)
• Hopfield: Hopfield network
• Kohonen: self-organising map (SOM)

History
• Oja: neural principal component analysis (PCA)
• Ackley, Hinton and Sejnowski: Boltzmann machine
• Rumelhart, Hinton and Williams: backpropagation
Diversification during the 90's and LATER:
• Machine learning: mathematical rigor, Bayesian
methods, information theory, support vector machines
(now state of the art!), ...
• Computational neurosciences: workings of most
subsystems of the brain are understood at some level;
research ranges from low-level compartmental models
of individual neurons to large-scale brain models


Some of the most common ANN models

Period | Inventors | Name of the model | Applications | Learning mode
1957-1960 | F. Rosenblatt | Perceptron | Type character recognition and classification | Supervised
1959-1962 | B. Widrow, M. E. Hoff | LMS | Prediction, noise cancellation | Supervised
1971-1994 | I. Aleksander, J. G. Taylor, T. G. Clarkson, D. Gorse | RAM model and PRAM (weightless neurons) | Pattern recognition | Supervised / reinforcement
1974-1986 | P. Werbos, D. Parker, D. Rumelhart | Back propagation | Pattern recognition, prediction, etc. | Supervised
1975-1983 | K. Fukushima | Neocognitron | Pattern recognition | Supervised / Unsupervised
1978-1986 | G. Carpenter, S. Grossberg | Adaptive Resonance Theory (ART) | Recognition: classification of complex patterns | Supervised / Unsupervised
1980 | T. Kohonen | Self Organizing Map | Image recognition | Unsupervised
1982 | B. Wilkie, J. Stonham, I. Aleksander | WISARD | Pattern and image recognition | RAM-based model
1982-1984 | J. Hopfield | Associative Memory | Speech processing | Association (Hebbian)
1985 | B. Kosko | Bi-directional Associative Memory (BAM) | Image processing | Association (Hebbian)
1980-1993 and later | M. J. D. Powell, J. E. Moody, C. J. Darken, S. Renals, T. Poggio, F. Girosi | Radial Basis Function (RBF), hybrid and hardware systems | Prediction / recognition | Supervised and unsupervised


PRESENT
• Today, neural networks discussions are
occurring everywhere.
• Their promise seems very bright as nature
itself is the proof that this kind of thing
works.
• Yet, its future, indeed the very key to the
whole technology, lies in hardware
development.
• Currently most neural network development is simply proving that the principle works.

PRESENT
• This research is developing neural
networks that, due to processing
limitations, take weeks to learn.
• To take these prototypes out of the lab
and put them into use requires
specialized chips.
• Companies are working on three types
of neuro chips - digital, analog, and
optical.


Commercially Available Neural Hardware – Example 1

• Electrically Trainable
Analogue Neural Network
(ETANN)
– Manufactured by Intel
– Analogue ANN with sigmoidal
transfer functions
– Two-layer feed-forward
architecture
– 64 neurones per layer

Commercially Available Neural Hardware – Example 2

• Zero Instruction Set Computer (ZISC) board
  – The board contains 4 ZISC036 chips.
  – Each chip implements 36 Radial Basis Function (RBF) neurones.
  – ZISC036 chips are manufactured by IBM.
  – Boards with 16 and 32 chips are also available.


Deep Learning


BRAIN Vs COMPUTER


From Human neurons to artificial neurons

• We construct artificial neurons by deducing the essential features and interconnections of human neurons
• We then program a computer to simulate them
• As knowledge is incomplete and computation power is limited,
  – we arrive at a gross idealization of real neurons

Human and Artificial Neurons (FUNCTIONALITIES)

• Much is still unknown
  – How the brain trains itself / processes information
  – Theories abound

[Diagram: two connected neurons, labelled with dendrites, cell body, axon and synapse]


Artificial Neural Networks

• An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information.
• The key element of this paradigm is the novel structure of the information processing system.
• It has a strong analogy to the human brain's neuron model.


Artificial Neural Networks


ANN Training

Dataset for ANN Training

The dataset is divided into three subsets: a Training Set, a Validation Set and a Test Set (a sketch of such a split follows).
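A minimal sketch of such a split, assuming Python; the 70/15/15 proportions and the fixed shuffle seed are illustrative assumptions, not values given in the lecture:

import random

def split_dataset(data, train=0.7, val=0.15, seed=0):
    """Shuffle and split a dataset into training, validation and test sets."""
    data = list(data)
    random.Random(seed).shuffle(data)
    n_train = int(len(data) * train)
    n_val = int(len(data) * val)
    return (data[:n_train],                      # training set: fit the weights
            data[n_train:n_train + n_val],       # validation set: tune / decide when to stop
            data[n_train + n_val:])              # test set: final evaluation only

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15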


Gradient Descent - ANN Training Algorithm
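The slide presents the algorithm graphically; below is a minimal, hedged sketch of gradient descent applied to a single sigmoid neuron (my own illustration: the squared-error loss, learning rate, epoch count and OR-function data are assumptions, not taken from the lecture):

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient_descent(samples, lr=0.5, epochs=2000):
    """Minimise the squared error E = 1/2 * (t - out)^2 of one sigmoid neuron."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # dE/dw_i = -(t - out) * out * (1 - out) * x_i, so step against the gradient.
            delta = (t - out) * out * (1 - out)
            w = [wi + lr * delta * xi for wi, xi in zip(w, x)]
            b += lr * delta
    return w, b

# Toy example: learn the OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = gradient_descent(data)
print([round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]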

An Artificial Neuron Model


A Simple Neurone Model (The Perceptron)

• A perceptron has analogue inputs but binary output.
• Each input has an associated weight.
• Positive weights correspond to excitatory inputs and negative weights to inhibitory inputs.

$\text{out} = f(\text{net}), \qquad \text{net} = \sum_{i=1}^{n} w_i x_i + b$

$f(\text{net}) = \begin{cases} 1 & \text{if net} \geq \text{threshold} \\ 0 & \text{if net} < \text{threshold} \end{cases}$
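A minimal sketch of the perceptron defined above, assuming Python, a threshold of 0 and b acting as the bias; the training function uses the classic perceptron learning rule, which is not shown on the slide:

def perceptron_out(x, w, b, threshold=0.0):
    """Forward pass: out = f(net), where net = sum_i w_i * x_i + b."""
    net = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if net >= threshold else 0

def perceptron_train(samples, lr=0.1, epochs=20):
    """Classic perceptron learning rule (an addition, not on the slide):
    nudge the weights by lr * (target - out) * x after each example."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            err = t - perceptron_out(x, w, b)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy example: learn the AND function (linearly separable, so the rule converges).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(data)
print([perceptron_out(x, w, b) for x, _ in data])  # [0, 0, 0, 1]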
