Article # 5
Neural Networks
Computer Concepts
Section: C
By:
Saad Zahid 7805
Ali Shayan 7807
An artificial neural network, commonly known simply as a "neural network", is a mathematical or computational model based on biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. Neural networks can be considered in two major senses, i.e. biological neural networks and artificial neural networks. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. Applications of neural networks include speech synthesis, diagnostic problems, medicine, business and finance, robotic control, signal processing, computer vision, and many other problems that fall under the category of pattern recognition. Generally, a biological neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons, and the total number of neurons and connections in a network is extensive. Connections, usually called synapses, are formed from axons to dendrites.
There is no precise, agreed-upon definition among researchers as to what a neural network is, but most would agree that it involves a network of simple processing elements (neurons) which can exhibit complex global behavior. Neural networks were introduced in 1943, when McCulloch and Pitts developed the first neural model. In 1958 Rosenblatt developed the perceptron model, which generated much interest because of its ability to solve some simple pattern-recognition problems. This interest waned after 1969, when Minsky and Papert provided mathematical proofs of the limitations of the perceptron and pointed out its weaknesses in computation. In the past decade, however, interest has revived among researchers and in areas of application. More powerful networks, better training algorithms, and improved hardware have all contributed to the revival of the field. Neural-network paradigms developed in recent years include the Boltzmann machine, Hopfield's network, Kohonen's network, Rumelhart's competitive learning model, Fukushima's model, and Carpenter and Grossberg's Adaptive Resonance Theory model.
Neural networks do not perform miracles. But if used sensibly they can produce
some amazing results.
Inspired by the structure of the brain, a neural network
consists of a set of highly interconnected entities, called nodes or units. Each
unit is designed to mimic its biological counterpart, the neuron. Each accepts a
weighted set of inputs and responds with an output. Figure 1 presents a picture
of one unit in a neural network.
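To make this concrete, here is a minimal sketch of such a unit in Python. It is not a reproduction of Figure 1; the sigmoid activation and all numeric values are our own illustrative assumptions.

    import math

    def unit_output(inputs, weights, bias):
        # One artificial unit: form the weighted sum of its inputs,
        # then squash it into the range (0, 1) with a sigmoid activation.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    # Example: a unit with three inputs (all values chosen arbitrarily)
    print(unit_output([0.5, 0.9, -0.3], [0.8, -0.2, 0.4], bias=0.1))

Larger positive weights make the corresponding input count for more; negative weights push the output down, which echoes the excitatory and inhibitory signals of the biological picture described next.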
Much is still unknown about how the brain itself processes information, so theories abound. In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons. When a neuron receives
excitatory input that is sufficiently large compared with its inhibitory input, it
sends a spike of electrical activity down its axon. Learning occurs by changing
the effectiveness of the synapses so that the influence of one neuron on another
changes.
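The firing rule described above can be caricatured in a few lines of code. The sketch below is a crude threshold model in the spirit of McCulloch and Pitts; the threshold value and the input figures are assumptions chosen purely for illustration.

    def fires(excitatory, inhibitory, threshold=1.0):
        # Emit a spike (1) when the excitatory drive exceeds the
        # inhibitory drive by at least the threshold; otherwise stay silent (0).
        return 1 if sum(excitatory) - sum(inhibitory) >= threshold else 0

    # Example: strong excitation, weak inhibition -> the neuron spikes
    print(fires(excitatory=[0.7, 0.6], inhibitory=[0.2]))  # prints 1

Learning, in this picture, amounts to scaling the contributions of individual inputs up or down, i.e. changing the effectiveness of the synapses.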
There are three major learning paradigms, each corresponding to a particular abstract learning task: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning we are given a set of example pairs (x, y), and the aim is to find a function f in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data, and it implicitly contains prior knowledge about the problem domain. In unsupervised learning we are given some data x, and the cost function to be minimized can be any function of the data x and the network's output, f. In reinforcement learning, data x is usually not given, but generated by an agent's interactions with the environment. At each point in time t, the agent performs an action y_t and the environment generates an observation x_t and an instantaneous cost c_t, according to some (usually unknown) dynamics.
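As a concrete illustration of the supervised case, the sketch below fits a single sigmoid unit to example pairs (x, y) by gradient descent, using squared error as the cost function that measures the mismatch between the unit's mapping and the data. The data, learning rate, and number of passes are all illustrative assumptions.

    import math

    def predict(x, w, b):
        # A single sigmoid unit mapping an input x to an output in (0, 1)
        return 1.0 / (1.0 + math.exp(-(w * x + b)))

    # Example pairs (x, y): the target mapping is "output 1 when x > 0"
    examples = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]

    w, b, rate = 0.0, 0.0, 0.5
    for _ in range(1000):                  # repeated passes over the data
        for x, y in examples:
            out = predict(x, w, b)
            grad = (out - y) * out * (1.0 - out)  # slope of the squared-error cost
            w -= rate * grad * x           # adjust the connection weight
            b -= rate * grad               # adjust the bias

    print([round(predict(x, w, b), 2) for x, _ in examples])

After training, the outputs approach 0 for the negative inputs and 1 for the positive ones: the mapping implied by the data has been inferred from the examples alone.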
The computing world has a lot to gain from neural networks. Their ability to
learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task; i.e., there is
no need to understand the internal mechanisms of that task. They are also very
well suited for real time systems because of their fast response and
computational times which are due to their parallel architecture. Neural
networks also contribute to other areas of research such as neurology and
psychology. They are regularly used to model parts of living organisms and to
investigate the internal mechanisms of the brain. Real-world applications of neural networks include sales forecasting, industrial process control, customer research, data validation, risk management, and target marketing.
Bibliography:
The idea for this article came from a psychiatrist named Dr. Qazi Irshad. We paid him a visit for the report, and during our discussion he suggested that we write on this topic. He also gave us some valuable information regarding it. This article draws on material from several websites, including Blackwell Synergy.