
Report

On
Neural Networks and Application

Submitted By:

Makrand Ballal

Reg No. 1025929

MCA 3rd Sem


Introduction to Neural Networks
A Neural Network (NN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. NNs, like people, learn by example. An NN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones. This is true of NNs as well.

Over the past several years, various methods based on such areas as operations research, statistics, computer simulation, and control theory have been developed and applied to solve a wide spectrum of problems in manufacturing. Today's manufacturing environment is characterized by complexity, interdisciplinary manufacturing functions, and an ever-growing demand for new tools and techniques to solve difficult problems.

Neural networks offer a new and intelligent alternative for investigating and analyzing challenging issues in manufacturing. In this section, an introduction to neural networks and two commonly used neural network methods will be provided. A neural network is used to capture the general relationship between variables of a system that are difficult to relate analytically. Neural networks have been described as a "brain metaphor of information processing" or as "a biologically inspired statistical tool".

A neural network has the capability to learn or be trained for a particular task, along with computational capabilities and the ability to form abstractions and generalizations. Its organization is similar to that of a human brain: it is a network made up of processing elements called neurons. Neurons receive data from surrounding neurons, perform some computations, and pass the results on to other neurons. Connections between the neurons have weights associated with them. In a neural network, knowledge is stored implicitly in the interconnection weights; learning takes place within the system and plays the most important role in the construction of a neural network system. The system learns by determining the interconnection weights from a set of given data.

Learning in a neural network can be supervised, unsupervised, or based on a combination of supervised and unsupervised training. In supervised learning, a set of data, called a training data set, is used to help the network arrive at the appropriate weights. A teacher trains the network by presenting the inputs together with side information indicating the correct outputs. The network is also 'programmed' with the procedure for adjusting the weights, so it has the means to determine whether or not its output was correct and to apply the learning law to adjust its weights in response to the resulting errors. Weights are generally modified iteratively on the basis of the errors between desired and actual outputs, and one of the most widely used training algorithms is the "Delta Rule".
The neural network learns the desired outputs by adjusting its internal connection weights to minimize the discrepancy between the actual outputs of the system and the desired outputs. Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse. This expert can then be used to provide projections for new situations of interest and answer "what if" questions.
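The Delta Rule described above can be sketched for a single linear unit. The training set (the logical AND of two inputs), learning rate, and iteration count below are illustrative choices, not taken from this report:

```python
import numpy as np

# Illustrative training data set: learn the logical AND of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])  # desired (teacher-supplied) outputs

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # connection weights
b = 0.0                             # bias
eta = 0.1                           # learning rate

# Delta Rule for a linear unit: w <- w + eta * (desired - actual) * input
for epoch in range(200):
    for x, target in zip(X, t):
        y = w @ x + b               # actual output
        err = target - y            # error between desired and actual
        w += eta * err * x          # adjust weights in proportion to error
        b += eta * err

print([round(float(w @ x + b), 2) for x in X])
```

The weights settle near the least-squares fit of the training data, which is exactly the discrepancy-minimization described above.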

Architecture of neural networks


Feed-forward networks

Feed-forward ANNs (Figure A) allow signals to travel one way only, from input to output. There is no feedback (loops), i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.

Figure A: An example of a simple feedforward network


Feedback networks

Feedback networks (Figure B) can have signals travelling in both directions by introducing loops
in the network. Feedback networks are very powerful and can get extremely complicated.
Feedback networks are dynamic; their 'state' is changing continuously until they reach an
equilibrium point. They remain at the equilibrium point until the input changes and a new
equilibrium needs to be found. Feedback architectures are also referred to as interactive or
recurrent, although the latter term is often used to denote feedback connections in single-layer
organisations.

Figure B: An example of a complicated network

Network layers

The commonest type of artificial neural network consists of three groups, or layers, of units: a
layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of
"output" units. (see Figure A)

1. The activity of the input units represents the raw information that is fed into the network.

2. The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.

3. The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
This simple type of network is interesting because the hidden units are free to construct their own
representations of the input. The weights between the input and hidden units determine when
each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it
represents.
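The three layers above amount to a single forward pass. The layer sizes, random weights, and sigmoid activation in the following sketch are illustrative assumptions:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)

# Illustrative sizes: 3 input units, 4 hidden units, 2 output units.
W_ih = rng.normal(size=(4, 3))   # weights: input layer -> hidden layer
W_ho = rng.normal(size=(2, 4))   # weights: hidden layer -> output layer

x = np.array([0.5, -1.0, 2.0])   # raw information fed into the input units

hidden = sigmoid(W_ih @ x)       # hidden activity: inputs and their weights
output = sigmoid(W_ho @ hidden)  # output behaviour: hidden activity and weights

print(hidden.shape, output.shape)   # (4,) (2,)
```

Modifying W_ih changes when each hidden unit is active, which is precisely how a hidden unit "chooses" what it represents.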

We also distinguish single-layer and multi-layer architectures. The single-layer organisation, in which all units are connected to one another, constitutes the most general case and has more potential computational power than hierarchically structured multi-layer organisations. In multi-layer networks, units are often numbered by layer instead of following a global numbering.

Perceptrons

The most influential work on neural nets in the 1960s went under the heading of 'perceptrons', a term coined by Frank Rosenblatt. The perceptron (figure 4.4) turns out to be an MCP model (a neuron with weighted inputs) with some additional, fixed, pre-processing. Units labelled A1, A2, Aj, Ap are called association units and their task is to extract specific, localised features from the input images. Perceptrons mimic the basic idea behind the mammalian visual system. They were mainly used in pattern recognition, even though their capabilities extended a lot further.

Figure 4.4

In 1969 Minsky and Papert wrote a book in which they described the limitations of single-layer perceptrons. The impact of the book was tremendous and caused many neural network researchers to lose interest. The book was very well written and showed mathematically that single-layer perceptrons could not perform some basic pattern recognition operations, such as determining the parity of a shape or determining whether a shape is connected. What they did not realise, until the 1980s, is that given the appropriate training, multilevel perceptrons can perform these operations.
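The parity limitation can be seen directly by training a single-layer perceptron on two small problems; the learning rate and epoch count below are illustrative:

```python
import numpy as np

def train_perceptron(X, t, epochs=50, eta=0.5):
    """Single-layer perceptron with a step activation and error-driven updates."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = 1 if w @ x + b > 0 else 0
            w += eta * (target - y) * x
            b += eta * (target - y)
    return w, b

def predict(w, b, X):
    return [1 if w @ x + b > 0 else 0 for x in X]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# AND is linearly separable, so the perceptron learns it.
w, b = train_perceptron(X, [0, 0, 0, 1])
and_pred = predict(w, b, X)
print(and_pred)   # [0, 0, 0, 1]

# XOR (a 2-bit parity problem) is not linearly separable: no single-layer
# perceptron can represent it, as Minsky and Papert showed.
w, b = train_perceptron(X, [0, 1, 1, 0])
xor_pred = predict(w, b, X)
print(xor_pred)   # never equals [0, 1, 1, 0]
```

A multilevel perceptron with one hidden layer, given appropriate training, can solve the XOR problem that defeats the single layer.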
Neural Network Principles

Because of a recent string of educational fads which sought authority by claiming (often
without much justification) to be derived from neuroscience, many educators now react
badly to what they call 'neuro-babble', and have convinced themselves that
neuroscience has nothing to tell us, yet, about how we learn, and therefore has nothing
to contribute to the design of more effective approaches to education.

It is true that neuroscientists don't say much about education, but it is not because they have nothing to contribute; it's because they are focused on a different set of problems and are working at a different level of detail. They are necessarily bogged down in the highly intricate scientific (experimental, statistical and ethical) problems that arise when they try to unravel such complexities as why the speed at which we can read and distinguish between past- and future-tense verbs is affected by whether the words are presented in the left or right visual field.

Their tools and methods are designed to work at that small scale, the microcosm, and are not suitable for exploring the general principles, the macrocosm, of human learning. They know a lot about how our neural networks learn to perceive the world, but they don't talk about it.

So it falls to others who, like myself, are less constrained by professional/academic group-think, to bridge the gaps between the different academic disciplines, to extract principles and axioms from one domain and try them out in others. We will probably be ignored and then attacked by many prestigious and well-established tribes, but hey, someone has to do it.

Designers of artificial intelligence software and smart robot-builders spend a lot of time copying ideas and discoveries from neuroscience and working out how they can be applied to real world problem-solving. From their perspective, neuroscience has already discovered many new and fundamental principles of neural network learning which have profound implications for the design of human education systems. Here are a few particularly important ones.

In evolutionary terms, high-quality sensory information is a valuable but expensive commodity. A balance had to be struck. As a consequence, our sensory capability is surprisingly limited, much more limited than we normally realize. We feel as if we can see everything there is to see, and hear everything there is to hear, but actually we can only detect a tiny proportion of the vast amount of information that is bouncing around the universe, and we are deaf and blind to the rest of it.

The brain uses this trapped experience to pre-consciously enhance our limited real-time sensory information. Your understanding of how the world appeared to work in the past (as encoded in the structure of your neural connections) is used to help make sense of the limited, vague, ambiguous and noisy information you are receiving from your senses right now.

Our accumulated maps and models of the world are not stored as facts or data in a lookup table; they are encoded in the current wiring of the network, which continuously evolves in response to the sensory experiences that flow through it. New real-time sensory information passes through (selectively stimulates and is interpreted by) networks which have been shaped by past experience, networks which have been 'taught' by experience to be sensitive to particular patterns of associations and distinctions, and insensitive to others.

This is how we interpret our environment, how we recognize familiar objects, people, places, movements, smells, situations, threats and opportunities, from within the stream of information that flows through our senses. This is how past experience is pre-consciously employed to help make sense of limited, noisy, current information.

Our perceptions, judgments and reactions feel much more solid than they really are. They are actually composed of a small measure of sensory information and a large measure of assumptions about how the world 'is' now, which are based on our experience of how the world appeared to us in the past. The quality of our current perception, our understanding and our behavioral choices is highly dependent on the quality of our accumulated maps and models of reality, and the quality of those maps and models changes significantly with time and experience.

Applications of Neural Networks

A Simple Neuron

An artificial neuron is a device with many inputs and one output (Figure 3). The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong to the taught list of input patterns, the firing rule is used to determine whether to fire or not.

Figure: A simple neuron

Firing Rules
The firing rule is an important concept in neural networks and accounts for their high flexibility.
A firing rule determines how one calculates whether a neuron should fire for any input pattern. It
relates to all the input patterns, not only the ones on which the node was trained. The firing rule
gives the neuron a sense of similarity and enables it to respond 'sensibly' to patterns not seen
during training.
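One common formulation of such a firing rule, sketched below under the assumption of binary input patterns, compares an unseen pattern against the taught patterns by Hamming distance; the taught patterns here are made up for illustration:

```python
def hamming(a, b):
    """Number of positions at which two binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def fires(pattern, fire_set, no_fire_set):
    """Illustrative firing rule: fire when the input pattern is nearer (in
    Hamming distance) to the taught firing patterns than to the taught
    non-firing patterns; leave the output undefined on a tie."""
    d_fire = min(hamming(pattern, p) for p in fire_set)
    d_no = min(hamming(pattern, p) for p in no_fire_set)
    if d_fire < d_no:
        return 1
    if d_fire > d_no:
        return 0
    return None  # tie: the neuron's response is undecided

# Taught patterns for a hypothetical 3-input neuron.
fire_set = [(1, 1, 1), (1, 0, 1)]
no_fire_set = [(0, 0, 0), (0, 0, 1)]

print(fires((1, 1, 0), fire_set, no_fire_set))   # 1: nearer the taught firing patterns
print(fires((0, 0, 0), fire_set, no_fire_set))   # 0: an exactly taught non-firing pattern
```

This is what gives the neuron its "sense of similarity": the pattern (1, 1, 0) was never taught, yet the rule responds sensibly to it.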

Neural Networks in Practice


Neural networks have broad applicability to real-world business problems. In fact, they have already been successfully applied in many industries. Since neural networks are best at identifying patterns or trends in data, they are well suited for prediction or forecasting needs, including:

sales forecasting
industrial process control
customer research
data validation
risk management

ANNs are also used in the following specific paradigms: recognition of speakers in communications; diagnosis of hepatitis; undersea mine detection; texture analysis; three-dimensional object recognition; hand-written word recognition; and facial recognition.

Typical applications of hardware NNWs are:

OCR (Optical Character Recognition)
The Adaptive Solutions ImageLink OCR Subsystem provides the special high-performance hardware required for high throughput. These days a purchase of a new scanner typically includes a commercial OCR program. Ligature Ltd also has an OCR-on-a-Chip example, which illustrates a cheap dedicated chip for consumer products.

Data Mining
A company named HNC made about 23 million dollars profit on 110 million dollars revenue in 1997 on their product called Falcon. "Falcon is a neural network based system that examines transaction, cardholder, and merchant data to detect a wide range of credit card fraud."

Voice Recognition
Examples are the Sensory Inc. RSC Microcontrollers and ASSP speech-recognition-specific chips.

Traffic Monitoring
An example is the Nestor Traffic Vision System.

High-Energy Physics
An example is an online data filter built by a group at the Max Planck Institute for the H1 electron-proton collider experiment in Hamburg, using Adaptive Solutions CNAPS boards.

However, most NNW applications today are still run as conventional software simulations on PCs and workstations with no special hardware add-ons.

Neural Networks in Control Engineering


The ever-increasing technological demands of our modern society require innovative approaches
to highly demanding control problems. Artificial neural networks with their massive parallelism
and learning capabilities offer the promise of better solutions, at least to some problems. By now,
the control community has heard of neural networks and wonders if these networks can be used
to provide better control solutions to old problems or perhaps solutions to control problems that
have withstood our best efforts.

Control system applications


Neural networks have been applied very successfully in the identification and control of dynamic
systems. The universal approximation capabilities of the multilayer perceptron have made it a
popular choice for modeling nonlinear systems and for implementing general-purpose nonlinear
controllers.

For the purposes of this work we will look at neural networks as function approximators. As
shown in Figure 4, we have some unknown function that we wish to approximate. We want to
adjust the parameters of the network so that it will produce the same response as the unknown
function, if the same input is applied to both systems. For our applications, the unknown function
may correspond to a system we are trying to control, in which case the neural network will be the
identified plant model. The unknown function could also represent the inverse of a system we are
trying to control, in which case the neural network can be used to implement the controller.
Figure: Neural Network as Function Approximator
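The function-approximation setting above can be sketched as follows. The "unknown" function, the network size, and the training settings are illustrative assumptions; the network is a one-hidden-layer perceptron trained by plain gradient descent on the squared error:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "unknown" function we wish to approximate (standing in for a plant).
def unknown(x):
    return np.sin(np.pi * x)

# One hidden layer of tanh units with a linear output unit.
n_hidden = 10
W1 = rng.normal(size=(n_hidden, 1))
b1 = np.zeros((n_hidden, 1))
w2 = rng.normal(scale=0.1, size=(1, n_hidden))
b2 = 0.0

X = np.linspace(-1, 1, 50).reshape(1, -1)   # same inputs applied to both systems
T = unknown(X)                              # responses of the unknown function

eta = 0.05
for _ in range(5000):
    H = np.tanh(W1 @ X + b1)   # hidden activations
    Y = w2 @ H + b2            # network response
    E = Y - T                  # discrepancy between the two systems
    # Backpropagate the squared-error gradient and adjust the parameters.
    w2 -= eta * (E @ H.T) / X.shape[1]
    b2 -= eta * E.mean()
    dH = (w2.T @ E) * (1 - H**2)
    W1 -= eta * (dH @ X.T) / X.shape[1]
    b1 -= eta * dH.mean(axis=1, keepdims=True)

mse = float(np.mean(E**2))
print(mse)   # mean squared approximation error after training
```

After training, the network produces nearly the same response as the unknown function for the same input, which is exactly the property exploited when the network serves as an identified plant model or as an inverse-model controller.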

Fixed Stabilizing Controllers

Fixed stabilizing controllers (see Figure 5) have been proposed in (Kawato, 1990). This scheme has been applied to the control of robot arm trajectory, where a proportional controller was used as the stabilizing feedback controller. We can see that the total input that enters the plant is the sum of the feedback control signal and the feedforward control signal, which is calculated from the inverse dynamics model (neural network). That model uses the desired trajectory as its input and the feedback control signal as its error signal.
As the NN training advances, the feedback signal converges to zero, and the neural network controller learns to take over from the feedback controller. The advantage of this architecture is that we can start with a stable system, even though the neural network has not yet been adequately trained.
Figure: A Stabilizing Controller (Hagan et al., 2002)

We have selected one type of network, the multilayer perceptron. We have demonstrated the capabilities of this network for function approximation and have described how it can be trained to approximate specific functions. We then presented control architectures which use neural network function approximators as basic building blocks. Control engineering also involves robotics, where Intelligent Control is the discipline that implements Intelligent Machines (IMs) to perform anthropomorphic tasks with minimum supervision and interaction with a human operator.

Agricultural Control System Engineering


Control and management of agricultural machinery offers many opportunities for the application of general-purpose empirical models. The nature of agricultural machines creates the need for modeling systems that are robust, noise tolerant, adaptable for multiple uses, and extensible. Artificial Neural Networks (ANNs) have these characteristics and are attractive for use in control and modeling in agricultural machinery.

Weed Detection in Sprayers


The complete sprayer consisted of many of the sensor-nozzle elements placed in parallel on a single spray boom. A sensor was fabricated to detect color on the surface of the ground in a 7.5 by 50-cm wide image. Three color bands (green, red, and near-infrared) were sensed. The signals from the sensor were digitized with a 68HC11-based controller using the on-chip 8-bit A/D converter. The 68HC11-based computer was also used to activate a solid-state switch that energized a solenoid valve in the spray nozzle. The intent of control in the system was to sense the presence of a weed by color and to activate the nozzle to spray the plant at the point in time that the plant was under the nozzle. A time budget is shown in the figure. If computing time plus the time required for the fluid to reach the ground once it emerges from the nozzle were insignificant, the sensor and nozzle could be located together. Agricultural sprayers based on optical sensing and control of spray nozzle activation currently exist on the market.
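The color-based spray decision can be sketched as follows. This is not the sprayer's actual algorithm; the vegetation index and threshold are illustrative assumptions, using only the red and near-infrared bands of the three sensed:

```python
def weed_present(green, red, nir, threshold=0.1):
    """Decide whether to energize the spray nozzle from one sensor reading.

    Living plant tissue reflects strongly in the near-infrared and weakly in
    the red, so an NDVI-style index separates plants from bare soil. The
    band values are hypothetical reflectances in [0, 1]; this sketch leaves
    the green band unused.
    """
    index = (nir - red) / (nir + red + 1e-9)   # NDVI-style vegetation index
    return index > threshold

# Soil-like reading: red and near-infrared are similar -> do not spray.
print(weed_present(green=0.2, red=0.3, nir=0.32))   # False
# Plant-like reading: near-infrared well above red -> spray.
print(weed_present(green=0.4, red=0.1, nir=0.6))    # True
```

In the real system this decision must fit inside the time budget between sensing the plant and the plant passing under the nozzle.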

Weed identification

Zhang, Yang, & El-Faki (1994) reported the use of ANNs to process color images of weeds in a winter-wheat environment, with the objective of distinguishing between weeds and other components of the image. They were particularly interested in detecting weeds with reddish stems. An ANN was also developed by Stone (1994) to allow color patterns to be recognized in an agricultural weed-sprayer application.

Neural Networks in Electrical Engineering


Artificial Neural Networks (ANNs) are currently a 'hot' research area in electrical engineering. The model used to simulate artificial neural networks is based on the biological nerve cell, or neuron, shown in Figure 7. Electrical signals arising from impulses from our receptor organs (e.g. eyes, ears) are carried into neurons on dendrites.

Figure: A Biological Neuron (Howard, 2006)

Signal classification with Perceptron

A problem of particular interest to electrical engineers is that of signal detection, particularly in a noisy environment. Methods such as filtering and signal averaging have been used successfully.
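A perceptron offers an alternative, learned approach to this detection problem. The sketch below is illustrative rather than drawn from this report: it generates windows that either contain a known pulse buried in Gaussian noise or noise alone, then trains a single-layer perceptron to classify them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_points = 200, 32
t = np.arange(n_points)

# Class 1: a known sinusoidal pulse buried in noise; class 0: noise alone.
pulse = np.sin(2 * np.pi * t / 8.0)
X = rng.normal(size=(n_samples, n_points))
y = rng.integers(0, 2, n_samples)
X[y == 1] += pulse

# Perceptron training: step activation, error-driven weight updates.
w, b = np.zeros(n_points), 0.0
for _ in range(20):
    for xi, ti in zip(X, y):
        pred = 1 if w @ xi + b > 0 else 0
        w += 0.1 * (ti - pred) * xi
        b += 0.1 * (ti - pred)

correct = sum((1 if w @ xi + b > 0 else 0) == ti for xi, ti in zip(X, y))
acc = correct / n_samples
print(acc)   # training accuracy
```

The learned weight vector comes to correlate with the pulse shape, so the perceptron behaves roughly like a learned matched filter, complementing the classical filtering and averaging methods mentioned above.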

Neural Networks and its application in Civil Engineering


Neural networks have gained broad interest in civil engineering problems. They are used as an alternative to statistical and optimization methods, as well as in combination with numerical simulation systems. Application areas in civil engineering include forecasting, water management, and control and decision support systems.

Limitations of Neural Networks


The major issues of concern today are the scalability problem, testing, verification, and integration of neural network systems into the modern environment. Neural network programs sometimes become unstable when applied to larger problems. The defence, nuclear and space industries are concerned about the issue of testing and verification. The mathematical theories used to guarantee the performance of an applied neural network are still under development. The solution for the time being may be to train and test these intelligent systems much as we do for humans. There are also more practical problems, such as the operational problems encountered when attempting to simulate the parallelism of neural networks, and the inability to explain the results that they obtain: networks function as "black boxes" whose rules of operation are completely unknown. Likewise, in OCR, one cannot claim that Neural Networks (NNWs) are conquering the world, because one does not feed the pixels of the picture file into a single giant NNW and out pops the text. To turn a picture of text into a text file, a dozen or more steps must be completed successfully by the OCR program. For example, an OCR system might follow the steps in the diagram in the figure.

Figure (From Adaptive Solutions CNAPS User Guide).

Note that the Adaptive Solutions CNAPS is an example of a general but expensive system that can be reprogrammed for many kinds of tasks. Designers of OCR programs may choose NNWs to accomplish one or more of these steps, while using other techniques for the remaining steps, such as conventional AI (if-then rules), statistical models, or hidden Markov models. The point is that NNWs are becoming commonly used tools but, just like other techniques such as the Fast Fourier Transform and least-squares fitting, they are still only tools, not the whole solution. Few real problems of interest can be totally solved by a single NNW. It is also true that implementing NNWs in hardware, and the software to run on them, is relatively expensive. With the aforementioned, one quickly begins to see why the business of neural network hardware has not boomed the way some in the field expected back in the 1980s.

Conclusion

Prediction for the future rests on some sort of evidence or established trend which, with extrapolation, clearly takes us into a new realm. Neural networks will facilitate user-specific systems for education, information processing, entertainment, genetic engineering, neurology and psychology.

Programs could be developed which require feedback from the user in order to be effective, and simple, "passive" sensors (e.g. fingertip sensors, gloves, or wristbands to sense pulse, blood pressure, skin ionization, and so on) could provide effective feedback into a neural control system. NNs' ability to learn by example makes them very flexible and powerful. Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced.
