
VLSI FOR NEURAL NETWORKS AND THEIR APPLICATIONS

INTRODUCTION

ABSTRACT:

Most students of Electronics Engineering are exposed to Integrated Circuits (ICs) at a very basic level, involving SSI (small scale integration) circuits like logic gates or MSI (medium scale integration) circuits like multiplexers, parity encoders, etc. But there is a much bigger world out there, involving miniaturization at levels so great that a micrometer and a microsecond are literally considered huge! This is the world of VLSI - Very Large Scale Integration.

Neural networks are a new method of programming computers. They are exceptionally good at performing pattern recognition and other tasks that are very difficult to program using conventional techniques. Programs that employ neural nets are also capable of learning on their own and adapting to changing conditions.

Neural nets may be the future of computing. A good way to understand them is with a puzzle that neural nets can be used to solve. Suppose that you are given 500 characters of code that you know to be C, C++, Java, or Python. Now, construct a program that identifies the code's language. One solution is to construct a neural net that learns to identify these languages.

According to a simplified account, the human brain consists of about ten billion neurons, and a neuron is, on average, connected to several thousand other neurons. By way of these connections, neurons both send and receive varying quantities of energy. One very important feature of neurons is that they don't react immediately to the reception of energy. Instead, they sum their received energies, and they send their own quantities of energy to other neurons only when this sum has reached a certain critical threshold. The brain learns by adjusting the number and strength of these connections. The brain's network of neurons forms a massively parallel information processing system. This contrasts with conventional computers, in which a single processor executes a single series of instructions.
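To make the summation-and-threshold behaviour described above concrete, here is a minimal sketch of a single threshold unit in Python; the weights, inputs and threshold value are invented purely for illustration and are not part of the original paper.

    # A toy threshold unit: it sums weighted inputs and "fires" only when
    # the sum reaches a critical threshold, mirroring the simplified account
    # of a neuron given above. All numbers below are made up for illustration.

    def threshold_unit(inputs, weights, threshold):
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0  # 1 = fire, 0 = stay silent

    # Example: three incoming connections with different strengths.
    print(threshold_unit([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.6))  # -> 1 (0.7 >= 0.6)
    print(threshold_unit([0, 1, 0], [0.4, 0.9, 0.3], threshold=1.0))  # -> 0 (0.9 < 1.0)

Learning, in this picture, corresponds to adjusting the weights (the "strength of the connections") rather than writing explicit rules.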

VLSI stands for "Very Large Scale Integration". This is the field which involves packing more and more logic devices into smaller and smaller areas. Thanks to VLSI, circuits that once would have filled entire boards can now be put into a space a few millimeters across! This has opened up a big opportunity to do things that were not possible before. VLSI circuits are everywhere: your computer, your car, your brand new state-of-the-art digital camera, the cell phone, etc.

MOST OF TODAY'S VLSI DESIGNS ARE CLASSIFIED INTO THREE CATEGORIES:

1. Analog
2. Application Specific Integrated Circuits (ASICs)
3. Systems on a Chip (SoC)

VLSI is also used for Neural Networks in many ways and applications. A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain.

The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships, and in their ability to learn these relationships directly from the data being modeled. Traditional linear models are simply inadequate when it comes to modeling data that contains non-linear characteristics.

The most common neural network model is the multilayer perceptron (MLP). This type of neural network is known as a supervised network because it requires a desired output in order to learn. The goal of this type of network is to create a model that correctly maps the input to the output using historical data, so that the model can then be used to produce the output when the desired output is unknown.

ARTIFICIAL NEURAL NETWORK:

An artificial neural network (ANN), commonly called a Neural Network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.

In more practical terms, neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
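As a toy illustration of the input-to-output mapping such a network computes, the following Python sketch runs one forward pass through a small multilayer perceptron; the layer sizes, random weights and input vector are assumptions for demonstration, not a model described in this paper.

    import numpy as np

    # Minimal multilayer perceptron forward pass: input -> hidden -> output.
    # Sizes and weights are illustrative only.
    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    n_in, n_hidden, n_out = 4, 5, 1
    W1 = rng.normal(size=(n_in, n_hidden))   # input-to-hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(size=(n_hidden, n_out))  # hidden-to-output weights
    b2 = np.zeros(n_out)

    def mlp_forward(x):
        h = sigmoid(x @ W1 + b1)   # hidden layer activations
        y = sigmoid(h @ W2 + b2)   # network output
        return y

    x = np.array([0.2, 0.7, 0.1, 0.9])   # one input pattern
    print(mlp_forward(x))                 # the model's predicted output

Supervised training would then adjust W1, b1, W2 and b2 from pairs of inputs and desired outputs.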

Biological Neural Network

(Figure: from "Texture of the Nervous System of Man and the Vertebrates"; the figure illustrates the diversity of neuronal morphologies in the auditory cortex.)

A biological neural network is a plexus of connected or functionally related neurons in the peripheral nervous system or the central nervous system. In the field of neuroscience, it most often refers to a group of neurons from nervous systems that are suited for laboratory analysis.

In neuroscience, a neural network is a bit of a conceptual juggernaut: the conceptual transition from neuroanatomy, a rigorously descriptive discipline of observed structure, to the designation of the parameters delimiting a 'network' can be problematic. In outline, a neural network describes a population of physically interconnected neurons, or a group of disparate neurons whose inputs or signaling targets define a recognizable circuit. Communication between neurons often involves an electrochemical process. The interface through which they interact with surrounding neurons usually consists of several dendrites (input connections), which are connected via synapses to other neurons, and one axon (output connection). If the sum of the input signals surpasses a certain threshold, the neuron sends an action potential (AP) at the axon hillock and transmits this electrical signal along the axon.

In contrast, a neuronal circuit is a functional entity of interconnected neurons that influence each other (similar to a control loop in cybernetics).

Neural network hardware implementations are divided into three different categories: digital, analog and hybrid.

DIGITAL:

The digital neural network category encompasses many sub-categories, including slice architectures, SIMD and systolic array devices, and RBF architectures. For the designer, digital technology has the advantages of mature fabrication techniques, weight storage in RAM, and arithmetic operations that are exact within the number of bits of the operands and accumulators. From the user's viewpoint, digital chips are easily embedded into most applications. However, digital operations are usually slower than in analog systems, especially multiplication, and analog inputs must first be converted to digital.

Multi-processor Chip: A far more elaborate approach is to put many small processors on a chip. Two architectures dominate such designs: single instruction with multiple data (SIMD) and systolic arrays. In a SIMD design, each processor executes the same instruction in parallel but on different data. In a systolic array, a processor does one step of a calculation (always the same step) before passing its result on to the next processor in a pipelined manner. SIMD chips include the Inova N64000 and the HNC 100 NAP; all chips execute the same instruction, and common control and data buses allow multiple chips to be combined.
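The difference between the two styles can be sketched in software; this is a loose analogy only, since real SIMD and systolic hardware use parallel datapaths, and the Python function names here are purely illustrative.

    import numpy as np

    # SIMD analogy: one instruction (multiply) applied to many data elements
    # at once, followed by an accumulation.
    def simd_dot(weights, inputs):
        return np.sum(weights * inputs)        # elementwise multiply, then sum

    # Systolic analogy: each "processor" performs the same single step
    # (one multiply-accumulate) and passes its partial result to the next cell.
    def systolic_dot(weights, inputs):
        partial = 0.0
        for w, x in zip(weights, inputs):      # each loop step = one pipeline stage
            partial = partial + w * x          # value "pumped" to the next processor
        return partial

    w = np.array([0.1, 0.2, 0.3])
    x = np.array([1.0, 2.0, 3.0])
    print(simd_dot(w, x), systolic_dot(w, x))  # both give 1.4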
Radial Basis Functions: RBF networks provide fast learning and straightforward interpretation. The comparison of input vectors to stored training vectors can be calculated easily without using multiplication operations. Two commercial RBF products are now available: the IBM ZISC036 (Zero Instruction Set Computer) chip and the Nestor Ni1000 chip. The ZISC036 contains 36 prototype-vector neurons, where the vectors have 64 8-bit elements and can be assigned to categories from 1 to 16383. Multiple chips can be easily cascaded to provide additional prototypes. The chip implements a Region of Influence learning algorithm using signum basis functions with radii of 0 to 16383, and recall processing supports a 250k/sec pattern presentation rate.
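A rough software analogue of this kind of prototype-based recall is sketched below; the prototypes, categories, radii and the use of an L1 (addition-only) distance are illustrative assumptions, not the documented behaviour of the ZISC036 or Ni1000.

    import numpy as np

    # Prototype-based (region-of-influence) recall sketch. Each stored prototype
    # has a category label and an influence radius; an input is assigned to the
    # category of a prototype whose L1 distance (additions and comparisons only,
    # no multiplications) falls inside that radius. Values are made up.
    prototypes = np.array([[10, 20, 30], [200, 10, 5]])
    categories = [3, 7]          # category label per prototype
    radii      = [40, 25]        # "region of influence" per prototype

    def classify(x):
        for proto, cat, radius in zip(prototypes, categories, radii):
            if np.sum(np.abs(x - proto)) <= radius:   # L1 distance vs. radius
                return cat
        return None              # no prototype fired: pattern is "unknown"

    print(classify(np.array([12, 18, 28])))   # near prototype 0 -> category 3
    print(classify(np.array([90, 90, 90])))   # far from both -> None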

The Nestor Ni1000, developed jointly by Intel and Nestor, contains 1024 prototypes of 256 5-bit elements. The chip has two on-chip learning algorithms, RCE [21] and PNN [22], and other algorithms can be microcoded. The processing rate is about 40k patterns/sec with a 40 MHz clock.

Other Digital Designs: Some digital neural network chips don't quite fit into the above three sub-categories. Examples include the Micro Circuit Engineering MT19003 NISP Neural Instruction Set Processor [23] and the Hitachi Wafer Scale Integration chips [24]. The NISP is basically a very simple RISC processor with seven instructions, optimized for implementation of multi-layer networks and loaded with small programs to direct the processing. Feed-forward processing reaches 40 MCPS. At the other end of the complexity scale are the Hitachi Wafer Scale Integration chips. Both Hopfield and back-propagation wafers have been built. A neurocomputer with 8 of the back-prop wafers, each with 144 neurons, achieved 2.3 GCUPS [6].

ANALOG:

Analog hardware networks can exploit physical properties to perform network operations and thereby obtain high speed and density. A common output line, for example, can sum current outputs from synapses to form the neuron inputs. However, analog design can be very difficult because of the need to compensate for variations in manufacturing, in temperature, etc. Creating an analog synapse involves the complications of analog weight storage and the need for a multiplier that is linear over a wide range. While many designs use analog techniques to carry out conventional architectures like multi-layer feed-forward networks, neuromorphic designs, such as the Synaptic Silicon Retina, emulate biological functions as closely as possible. The first commercial analog chip was the Intel 80170NW ETANN (Electrically Trainable Analog Neural Network), which contains 64 neurons and 10240 weights. The non-volatile weights are stored as charge on floating gates, and a Gilbert multiplier provides 4-quadrant multiplication. A flexible design, including internal feedback and division of the weights into two 64x80 banks (including 16 biases), allows for multiple configurations, including 3 layers of 64 neurons per layer and 2 layers with 128 inputs and 64 neurons.

HYBRID:

Hybrid designs attempt to combine the best of analog and digital techniques. Typically, the external inputs/outputs are digital to facilitate integration into digital systems, while internally some or all of the processing is analog. The AT&T ANNA Artificial Neural Network ALU, for example, is externally digital but uses capacitor charge, periodically refreshed by DACs, to store the weights. Similarly, the Bellcore CLNN-32 chip has 5-bit weights loaded digitally, but the processing of the network, with Boltzmann-style annealing, is done in analog.

The NeuroClassifier from the MESA Research Institute at the University of Twente has 70 analog inputs, 6 hidden units and 1 analog output, with 5-bit digital weights. The feed-forward processing rate is an astounding 20 ns, representing 20 GCPS. The final output has no squashing function, so that multiple chips can be added to increase the number of hidden units.

The use of pulse rates or pulse widths is another method to emulate nets in hardware. The first commercial implementation was the Neural Semiconductor chip set, with the SU3232 synapse unit and the NU32 neuron unit. The Ricoh Company has reported a pulse chip with a special back-propagation algorithm implemented on-chip. The RN-100 contained only a single neuron with 8 inputs and 8 outputs. An array of 12 RN-100s learned to balance a 2-D pendulum in just 30 s.

APPLICATIONS OF NEURAL NETWORKS IN MEDICINE:

The use of Neural Networks (NN) in medicine has attracted many researchers. A simple search by Machado (1996) in Medline for articles about computer-based NN between 1982 and 1994 resulted in more than 600 citations. Several applications were reviewed and evaluated based on the model used, the input and output data, the results and the project status. From the review, several research efforts and applications of Neural Expert Systems in medicine have been listed. Most of the research that employed NN yields between 70% and 80% accuracy. NN has been shown to be a powerful tool for enhancing current medical diagnostic techniques.

Partridge et al. (1996) listed several potentials of NN over conventional computation and manual analysis in medical applications:

• Implementation using data instead of possibly ill-defined rules.

• Noise and novel situations are handled automatically via data generalization.

• Predictability of future indicator values based on past data and trend recognition.

• Automated real-time analysis and diagnosis.

• Enables rapid identification and classification of input data.

• Eliminates errors associated with human fatigue and habituation.

In this paper the discussion of applications of neural networks in medicine is divided into several domains: applications in basic sciences, clinical medicine, signal processing and interpretation, and medical image processing.

Applications in Basic Sciences:

In basic sciences, NN helps clinicians to investigate the impact of parameters after certain conditions or treatments. It supplies clinicians with information about the risk or incoming circumstances regarding the domain.

Learning the time course of blood glucose, for example, can help clinicians to control diabetes mellitus. A feed-forward NN has been used for predicting the time course of blood glucose levels from the complex interaction of glucose counter-regulatory hormones and insulin.

A Multi-Layer Perceptron (MLP) with sigmoidal feed-forward units and the standard Back-Propagation (BP) learning algorithm was employed as a forecaster for bacteria-antibiotic interactions of infectious diseases. The authors conclude that the 1-month forecaster produces accurate output; however, predictions for the 2-month and 3-month horizons are less accurate.
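For readers who want to see what such a back-propagation forecaster boils down to, here is a generic sigmoidal MLP training loop in Python on a made-up toy problem; it is a sketch under those assumptions, not the cited study's model or data.

    import numpy as np

    # Tiny sigmoidal MLP trained with back-propagation on toy XOR-like data.
    rng = np.random.default_rng(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # inputs
    T = np.array([[0.], [1.], [1.], [0.]])                   # desired outputs

    W1 = rng.normal(scale=0.5, size=(2, 4))
    b1 = np.zeros(4)
    W2 = rng.normal(scale=0.5, size=(4, 1))
    b2 = np.zeros(1)
    lr = 0.5                                  # learning rate

    for epoch in range(5000):
        # Forward pass
        H = sigmoid(X @ W1 + b1)              # hidden activations
        Y = sigmoid(H @ W2 + b2)              # network outputs
        # Backward pass: squared-error loss, sigmoid derivatives
        dY = (Y - T) * Y * (1 - Y)
        dH = (dY @ W2.T) * H * (1 - H)
        # Gradient-descent weight updates
        W2 -= lr * (H.T @ dY)
        b2 -= lr * dY.sum(axis=0)
        W1 -= lr * (X.T @ dH)
        b1 -= lr * dH.sum(axis=0)

    print(np.round(Y, 2))   # with enough epochs the outputs typically approach T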
Applications in Clinical Medicine:

Patients who are hospitalized with high-risk diseases require special monitoring, as the disease might spread in no time. NN has been used as a tool for patient diagnosis and prognosis, to determine patients' survival.

Bottaci and Drew (1997) investigated a fully connected feed-forward MLP with the BP learning rule and were able to predict patients with colorectal cancer more accurately than clinico-pathological methods. They indicate that the NN predicts the patients' survival and death very well compared to the surgeons.

Applications in Signal Processing and Interpretation:

Signal processing and interpretation in medicine involve a complex analysis of signals, graphic representations, and pattern classification. Consequently, even an experienced surgeon could misinterpret or overlook the data. In electrocardiography (ECG) analysis, for example, the complexity of the ECG readings of acute myocardial infarction could be misjudged even by an experienced cardiologist (Janet, 1997). Accordingly, the difficulty faced in ECG patient monitoring is the variability in the distributions of morphology and timing, across patients and within patients, of normal and ventricular beats.

Lagerholm et al. (2000) employed Self-Organizing Neural Networks (Self-Organizing Maps, or SOMs) in conjunction with Hermite basis functions for the purpose of beat clustering, to identify and classify ECG complexes in arrhythmia. The SOM's topological structure is a benefit in interpreting the data. The experimental results were claimed to outperform other supervised learning methods that use the same data. Analysis of NN as an ECG analyzer also shows that NN is capable of dealing with the ambiguous nature of the ECG signal.
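As a minimal illustration of how a self-organizing map clusters feature vectors, the sketch below trains a small 1-D SOM on random toy data; the node count, learning schedule and data are assumptions, not the setup used by Lagerholm et al.

    import numpy as np

    # Bare-bones 1-D self-organizing map: nodes compete for each input (best
    # matching unit), and the winner plus its neighbours are pulled toward
    # that input, so nearby nodes end up representing similar data.
    rng = np.random.default_rng(2)
    data = rng.normal(size=(200, 3))          # toy feature vectors
    nodes = rng.normal(size=(10, 3))          # 10 map nodes arranged in a line

    def train_som(nodes, data, epochs=20, lr=0.3, radius=2.0):
        for _ in range(epochs):
            for x in data:
                bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))   # winner
                for j in range(len(nodes)):
                    h = np.exp(-((j - bmu) ** 2) / (2 * radius ** 2))  # neighbourhood
                    nodes[j] += lr * h * (x - nodes[j])
            lr *= 0.9
            radius *= 0.9                     # shrink learning rate and radius
        return nodes

    nodes = train_som(nodes, data)
    print(np.argmin(np.linalg.norm(nodes - data[0], axis=1)))   # cluster index of one sample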
Sloop and Marchesi used static and recurrent neural network (RNN) architectures for classification tasks in ECG analysis for arrhythmia, myocardial ischemia and chronic alterations. A feed-forward network with an 8-24-14-1 architecture was employed as a classifier for ECG patient monitoring (Watrous and Towell, 1995). The analysis indicated that the performance of the patient-adapted network was improved due to the ability of the modulated classifier to adjust the boundaries between classes, even though beats were different for different patients.

The performance of a multi-layer RNN with a 15-3-2 architecture has also been studied and compared with conventional algorithms for recognizing fetal heart rate abnormality (Lee et al., 1999).

The study reveals that the performance of the NN is exceptional compared to conventional systems, even with adjusted thresholds.

Applications in Medical Image Processing:

Image processing is one of the important applications in medicine, as most decision-making is done by looking at the images. In general, the segmentation of medical images aims to find regions which represent single anatomical structures. Poli and Valli employed a Hopfield neural network for optimum segmentation of 2-D and 3-D medical images. The networks have been tested on synthetic images and on real tomographic and X-ray images.

Another approach uses two self-organizing maps (SOM) in two stages, self-organizing principal components analysis (SOPCA) and a self-organizing feature map (SOFM), for automatic volume segmentation of medical images. The authors performed a statistical comparison of the performance of the SOFM with the Hopfield network and the ISODATA algorithm. The results indicate that the accuracy of the SOFM is superior to both. In addition, the SOFM was claimed to have the advantages of ease of implementation and guaranteed convergence.

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

The brain, neural networks and computers:

While historically the brain has been viewed as a type of computer, and vice-versa, this is true only in the loosest sense. Computers do not provide us with accurate hardware for describing the brain (even though it is possible to describe a logical process as a computer program, or to simulate a brain using a computer), as they do not possess the parallel processing architectures that have been described in the brain. Even when speaking of multiprocessor computers, the functions are not nearly as distributed as in the brain.

Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and the brain's biological architecture is very much debated.

To answer this question, Marr has proposed various levels of analysis which provide us with a plausible answer for the role of neural networks in the understanding of human cognitive functioning. The question of what degree of complexity and which properties individual neural elements should have in order to reproduce something resembling animal intelligence is a subject of current research in theoretical neuroscience.

Historically, computers evolved from the Von Neumann architecture, based on sequential processing and execution of explicit instructions. The origins of neural networks, on the other hand, lie in efforts to model information processing in biological systems, which rely primarily on parallel processing as well as implicit instructions based on recognition of patterns of 'sensory' input from external sources. In other words, rather than sequential processing and execution, at their very heart neural networks are complex statistical processors.

DISADVANTAGES:

VLSI has been around for a long time; there is nothing new about it in itself. But as a side effect of advances in the world of computers, there has been a dramatic proliferation of tools that can be used to design VLSI circuits. The key terms here are computation power, utilization of available area, and yield. The combined effect of these two advances is that people can now put diverse functionality into ICs, opening up new frontiers. Examples are embedded systems, where intelligent devices are put inside everyday objects.

CONCLUSIONS:

Progress in the fabrication of ICs has enabled us to create fast and powerful circuits in smaller and smaller devices. This also means that we can pack a lot more functionality into the same area. All modern digital designs start with a designer writing a hardware description of the IC.

NN have been successfully implemented in many applications, including medicine. NN, which simulates the function of the human biological neuron, has the potential for easy implementation in many application domains. The main consideration of NN implementation is the input data.

Once the network is trained, the knowledge can be applied to all cases, including new cases in the domain. Studies have shown that the predictive capability of NN is useful in medical applications. Such capability could be used to predict a patient's condition based on historical cases. The prediction could help doctors to plan better medication and provide the patient with an early diagnosis.

REFERENCES:

1) Wasserman - “Neural Computing”.

2) Wayne Wolf – “Modern VLSI Design”

