Soft Computing
UNIT-1: NEURAL NETWORKS-1

WHAT IS AN ARTIFICIAL NEURAL NETWORK?

An Artificial Neural Network (ANN) is a mathematical model that tries to simulate the structure and functionality of biological neural networks. The basic building block of every artificial neural network is the artificial neuron: a simple mathematical model (function). Such a model follows three simple rules: multiplication, summation and activation. At the entrance of the artificial neuron the inputs are weighted, which means that every input value is multiplied by an individual weight. In the middle section of the artificial neuron, a sum function adds all the weighted inputs and the bias. At the exit of the artificial neuron, this sum passes through an activation function, which is also called a transfer function.

[Figure: model of an artificial neuron, showing the flow of information from weighted inputs through summation to the activation function.]
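To make the three rules concrete, here is a minimal Python sketch of a single artificial neuron; the specific weights, bias and sigmoid transfer function are illustrative choices, not values from these notes.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: multiplication, summation, activation."""
    # 1. Multiplication: every input is multiplied by its individual weight.
    weighted = [x * w for x, w in zip(inputs, weights)]
    # 2. Summation: add all weighted inputs and the bias.
    total = sum(weighted) + bias
    # 3. Activation: pass the sum through a transfer function
    #    (a sigmoid here; a step function is equally common).
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative values: two inputs, two weights, one bias.
print(artificial_neuron([0.5, -1.0], [0.8, 0.2], bias=0.1))
```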
BIOLOGICAL NEURON STRUCTURE AND FUNCTIONS

A neuron, or nerve cell, is an electrically excitable cell that communicates with other cells via specialized connections called synapses. It is the main component of nervous tissue. Neurons are typically classified into three types based on their function. Sensory neurons respond to stimuli such as touch, sound, or light that affect the cells of the sensory organs, and they send signals to the spinal cord or brain. Motor neurons receive signals from the brain and spinal cord to control everything from muscle contractions to glandular output. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord. A group of connected neurons is called a neural circuit.

A typical neuron consists of a cell body (soma), dendrites, and a single axon. The soma is usually compact. The axon and dendrites are filaments that extrude from it. Dendrites typically branch profusely and extend a few hundred micrometers from the soma. The axon leaves the soma at a swelling called the axon hillock, and travels for as far as 1 meter in humans, or more in other species. It branches but usually maintains a constant diameter. At the farthest tip of the axon's branches are axon terminals, where the neuron can transmit a signal across the synapse to another cell. Neurons may lack dendrites or have no axon. The term neurite is used to describe either a dendrite or an axon, particularly when the cell is undifferentiated.

The soma is the body of the neuron. As it contains the nucleus, most protein synthesis occurs here. The nucleus can range from 3 to 18 micrometers in diameter.

The dendrites of a neuron are cellular extensions with many branches. This overall shape and structure is referred to metaphorically as a dendritic tree. This is where the majority of input to the neuron occurs, via the dendritic spines.

The axon is a finer, cable-like projection that can extend tens, hundreds, or even tens of thousands of times the diameter of the soma in length. The axon primarily carries nerve signals away from the soma, and carries some types of information back to it. Many neurons have only one axon, but this axon may, and usually will, undergo extensive branching, enabling communication with many target cells. The part of the axon where it emerges from the soma is called the axon hillock. Besides being an anatomical structure, the axon hillock also has the greatest density of voltage-dependent sodium channels. This makes it the most easily excited part of the neuron and the spike initiation zone for the axon. In electrophysiological terms, it has the most negative threshold potential. While the axon and axon hillock are generally involved in information outflow, this region can also receive input from other neurons.

The axon terminal is found at the end of the axon farthest from the soma and contains synapses. Synaptic boutons are specialized structures where neurotransmitter chemicals are released to communicate with target neurons. In addition to synaptic boutons at the axon terminal, a neuron may have en passant boutons, which are located along the length of the axon.

Most neurons receive signals via the dendrites and soma and send out signals down the axon. At the majority of synapses, signals cross from the axon of one neuron to a dendrite of another. However, synapses can connect an axon to another axon or a dendrite to another dendrite. The signaling process is partly electrical and partly chemical. Neurons are electrically excitable, due to the maintenance of voltage gradients across their membranes. If the voltage changes by a large amount over a short interval, the neuron generates an all-or-nothing electrochemical pulse called an action potential. This potential travels rapidly along the axon, and activates synaptic connections as it reaches them. Synaptic signals may be excitatory or inhibitory, increasing or reducing the net voltage that reaches the soma.

In most cases, neurons are generated by neural stem cells during brain development and childhood. Neurogenesis largely ceases during adulthood in most areas of the brain. However, strong evidence supports generation of substantial numbers of new neurons in the hippocampus and olfactory bulb.

[Figure: a biological neuron (cell body, dendrites, axon, synaptic terminals) and the artificial neuron modeled on it; inputs are individually weighted and summed, then passed through a monotonically increasing activation (transfer) function such as a step function or a sigmoid.]

DIFFERENCES BETWEEN BIOLOGICAL NEURAL NETWORKS AND ANNs

1. Size: Our brain contains about 86 billion neurons and more than 100 trillion synapses (connections). The number of "neurons" in artificial networks is much smaller than that.

2. Signal transport and processing: The human brain works asynchronously; ANNs work synchronously.

3. Processing speed: Single biological neurons are slow, while standard neurons in ANNs are fast.

4. Topology: Biological neural networks have complicated topologies, while ANNs are often in a tree structure.

5. Speed: Certain biological neurons can fire around 200 times a second on average. Signals travel at different speeds depending on the type of nerve impulse, ranging from 0.64 m/s up to 119 m/s. Signal travel speeds also vary from person to person depending on sex, age, height, temperature, medical condition, lack of sleep, etc. Information in artificial neurons is instead carried by the continuous, floating-point values of the synaptic weights. There are no refractory periods for artificial neural networks (periods during which it is impossible to send another action potential, due to the sodium channels being locked shut), and artificial neurons do not experience "fatigue": they are functions that can be calculated as many times, and as fast, as the computer architecture allows.
6. Fault tolerance: Biological neural networks, due to their topology, are fault-tolerant. Artificial neural networks are not modeled for fault tolerance or self-regeneration (similarly to fatigue, these ideas are not applicable to matrix operations), though recovery is possible by saving the current state (the weight values) of the model and continuing training from that saved state.

7. Power consumption: The brain consumes about 20% of all the human body's energy; despite this large share, an adult brain operates on about 20 watts (barely enough to dimly light a bulb), which is extremely efficient. Considering how humans can still operate for a while when given only some vitamin-C-rich lemon juice and beef tallow, this is quite remarkable. For a benchmark: a single Nvidia GeForce Titan X GPU runs on 250 watts alone, and requires a power supply. Our machines are far less efficient than biological systems. Computers also generate a lot of heat when used, with consumer GPUs operating safely between 50-80 °C instead of the body's 36.5-37.5 °C.

8. Learning: We still do not understand how brains learn, or how redundant connections store and recall information. By learning, we build on information that is already stored in the brain. Our knowledge deepens by repetition and during sleep, and tasks that once required focus can be executed automatically once mastered. Artificial neural networks, on the other hand, have a predefined model, where no further neurons or connections can be added or removed. Only the weights of the connections (and the biases representing thresholds) can change during training. The networks start with random weight values and slowly try to reach a point where further changes in the weights would no longer improve performance. Biological networks usually don't stop or start learning; ANNs have distinct fitting (training) and prediction (evaluation) phases.

9. Field of application: ANNs are specialized. They can perform one task. They might be perfect at playing chess, but they fail at playing Go (or vice versa). Biological neural networks can learn completely new tasks.

10. Training algorithm: ANNs use gradient descent for learning. Human brains use something different (but we don't know what).

The processing of an ANN depends upon the following three building blocks:

1. Network Topology
2. Adjustments of Weights or Learning
3. Activation Functions

1. Network Topology: A network topology is the arrangement of a network along with its nodes and connecting lines. According to the topology, ANNs can be classified into the following kinds:

A. Feed-forward network: A non-recurrent network having processing units/nodes in layers, where all the nodes in a layer are connected with the nodes of the previous layers. The connections carry different weights. There is no feedback loop, meaning the signal can only flow in one direction, from input to output. It may be divided into the following two types (a minimal sketch of both follows this list):

- Single-layer feed-forward network: a feed-forward ANN having only one weighted layer. In other words, the input layer is fully connected to the output layer.

- Multilayer feed-forward network: a feed-forward ANN having more than one weighted layer. As this network has one or more layers between the input and the output layer, these are called hidden layers.
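As an illustration of the two feed-forward topologies above, the following sketch (an assumed NumPy formulation, not taken from the original notes) computes a single weighted layer, then stacks a hidden layer in front of it; in both cases the signal flows only from input to output.

```python
import numpy as np

def layer(x, W, b):
    """One weighted layer: activation(W @ x + b), sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

rng = np.random.default_rng(0)
x = rng.normal(size=3)                       # 3 inputs

# Single-layer feed-forward network: the input layer is fully
# connected to the output layer through one weighted layer.
W_out = rng.normal(size=(2, 3)); b_out = np.zeros(2)
y_single = layer(x, W_out, b_out)

# Multilayer feed-forward network: one hidden layer between the
# input and the output layer; still no feedback loops.
W_hid = rng.normal(size=(4, 3)); b_hid = np.zeros(4)
W_top = rng.normal(size=(2, 4)); b_top = np.zeros(2)
y_multi = layer(layer(x, W_hid, b_hid), W_top, b_top)
print(y_single, y_multi)
```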
B. Feedback network: As the name suggests, a feedback network has feedback paths, which means the signal can flow in both directions using loops. This makes it a non-linear dynamic system, which changes continuously until it reaches a state of equilibrium. It may be divided into the following types:

- Recurrent networks: feedback networks with closed loops. Following are two types of recurrent networks:

- Fully recurrent network: the simplest neural network architecture, because all nodes are connected to all other nodes and each node works as both input and output.

- Jordan network: a closed-loop network in which the output goes back to the input as feedback, as shown in the following diagram.

[Figure: fully recurrent network and Jordan network architectures, with the Jordan network's output fed back to the input layer.]

2. Adjustments of Weights or Learning: Learning, in an artificial neural network, is the method of modifying the weights of the connections between the neurons of a specified network. Learning in ANNs can be classified into three categories: supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning: As the name suggests, this type of learning is done under the supervision of a teacher, so the learning process is dependent. During the training of an ANN under supervised learning, the input vector is presented to the network, which produces an output vector. This output vector is compared with the desired output vector. An error signal is generated if there is a difference between the actual and the desired output vector. On the basis of this error signal, the weights are adjusted until the actual output matches the desired output (a sketch of this loop appears after the unsupervised case below).

[Figure: supervised learning loop; the input X is fed to the neural network, the actual output Y is compared with the desired output, and an error signal generator drives the weight adjustments.]

Unsupervised learning: As the name suggests, this type of learning is done without the supervision of a teacher, so the learning process is independent. During the training of an ANN under unsupervised learning, input vectors of similar type are combined to form clusters. When a new input pattern is applied, the neural network gives an output response indicating the class to which the input pattern belongs. There is no feedback from the environment as to what the desired output should be, or whether it is correct or incorrect. Hence, in this type of learning the network itself must discover the patterns and features in the input data, and the relation of the input data to the output.
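Here is a minimal sketch of the supervised loop just described: present an input vector, compare the actual output with the desired output, form an error signal, and adjust the weights. The delta-style update rule, learning rate and OR-gate training data are illustrative assumptions, not prescribed by the notes.

```python
import numpy as np

def train_supervised(X, targets, lr=0.1, epochs=50):
    """Error-driven weight adjustment for a single sigmoid neuron."""
    rng = np.random.default_rng(1)
    w = rng.normal(size=X.shape[1]); b = 0.0
    for _ in range(epochs):
        for x, desired in zip(X, targets):
            actual = 1.0 / (1.0 + np.exp(-(w @ x + b)))
            error = desired - actual          # error signal from the "teacher"
            w += lr * error * x               # adjust weights toward desired output
            b += lr * error                   # adjust bias the same way
    return w, b

# The teacher provides input vectors and their desired outputs (OR gate).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 1], dtype=float)
print(train_supervised(X, t))
```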
3. Activation Functions:

1) Linear activation function: Also called the identity function, as it performs no input editing. It can be defined as $F(x) = x$.

2) Sigmoid activation function: It is of two types, as follows.

- Binary sigmoidal function: This activation function performs input editing between 0 and 1. It is positive in nature. It is always bounded, which means its output cannot be less than 0 or more than 1. It is also strictly increasing in nature, which means the higher the input, the higher the output. It can be defined as

$$F(x) = \operatorname{sigm}(x) = \frac{1}{1 + e^{-x}}$$

- Bipolar sigmoidal function: This activation function performs input editing between -1 and 1. It can be positive or negative in nature. It is always bounded, which means its output cannot be less than -1 or more than 1. It is also strictly increasing in nature, like the sigmoid function. It can be defined as

$$F(x) = \operatorname{sigm}(x) = \frac{2}{1 + e^{-x}} - 1 = \frac{1 - e^{-x}}{1 + e^{-x}}$$

WHAT IS A NEURAL NETWORK ACTIVATION FUNCTION?

In a neural network, inputs, which are typically real values, are fed into the neurons in the network. Each neuron has a weight, and the inputs are multiplied by the weight and fed into the activation function. Each neuron's output is the input of the neurons in the next layer of the network, and so the inputs cascade through multiple activation functions until, eventually, the output layer generates a prediction. Neural networks rely on nonlinear activation functions; the derivative of the activation function helps the network learn through the backpropagation process.

[Figure: a neuron as Input -> f -> Output.]

SOME COMMON ACTIVATION FUNCTIONS INCLUDE THE FOLLOWING (each is sketched in code after this list):

1. The Sigmoid function has a smooth gradient and outputs values between zero and one. For very high or very low values of the input parameters, the network can be very slow to reach a prediction; this is called the vanishing gradient problem.

2. The Tanh function is zero-centered, making it easier to model inputs that are strongly negative, strongly positive, or neutral.

3. The ReLU function is highly computationally efficient, but is not able to process inputs that approach zero or are negative.

4. The Leaky ReLU function has a small positive slope in its negative area, enabling it to process zero or negative values.

5. The Parametric ReLU function allows the negative slope to be learned, performing backpropagation to learn the most effective slope for zero and negative input values.

6. Softmax is a special activation function used for output neurons. It normalizes outputs for each class between 0 and 1, and returns the probability that the input belongs to a specific class.

7. Swish is a newer activation function discovered by Google researchers. It performs better than ReLU with a similar level of computational efficiency.

[Figure: plots of the sigmoid and tanh activation functions.]
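The functions listed above are short enough to transcribe directly. This sketch follows their usual textbook definitions; the 0.01 slope in the leaky ReLU is a common but illustrative choice.

```python
import numpy as np

def sigmoid(x):            # binary sigmoid, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def bipolar_sigmoid(x):    # output in (-1, 1)
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def tanh(x):               # zero-centered
    return np.tanh(x)

def relu(x):               # zero for negative inputs
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):  # small positive slope for x < 0
    return np.where(x > 0, x, slope * x)

def softmax(x):            # normalizes outputs to class probabilities
    e = np.exp(x - np.max(x))   # shift for numerical stability
    return e / e.sum()

def swish(x):              # x * sigmoid(x)
    return x * sigmoid(x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x), bipolar_sigmoid(x), leaky_relu(x), softmax(x))
```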
APPLICATIONS OF NEURAL NETWORKS AND SOFT COMPUTING

3. Fuzzy logic: the theory of approximate reasoning.

4. Artificial life: evolutionary computation, swarm intelligence.

5. Artificial immune system: a computer program based on the biological immune system.

6. Medical: At the moment, the research is mostly on modelling parts of the human body and recognizing diseases from various scans (e.g. cardiograms, CAT scans, ultrasonic scans, etc.). Neural networks are ideal for recognizing diseases using scans, since there is no need to provide a specific algorithm for how to identify the disease. Neural networks learn by example, so the details of how to recognize the disease are not needed. What is needed is a set of examples that are representative of all the variations of the disease. The quantity of examples is not as important as the quality. The examples need to be selected very carefully if the system is to perform reliably and efficiently.

7. Computer science: Researchers in quest of artificial intelligence have created spin-offs like dynamic programming, object-oriented programming, symbolic programming, intelligent storage management systems and many more such tools. The primary goal of creating an artificial intelligence still remains a distant dream, but people are getting an idea of the ultimate path which could lead to it.

8. Aviation: Airlines use expert systems in planes to monitor atmospheric conditions and system status. The plane can be put on autopilot once a course is set for the destination.

9. Weather forecasting: Neural networks are used for predicting weather conditions. Previous data is fed to a neural network, which learns the pattern and uses that knowledge to predict weather patterns.

10. Neural networks in business: Business is a diverse field with several general areas of specialization, such as accounting or financial analysis. Almost any neural network application would fit into one business area or financial analysis.

11. There is some potential for using neural networks for business purposes, including resource allocation and scheduling.

12. There is also a strong potential for using neural networks for database mining, which is searching for patterns implicit within the explicitly stored information in databases. Most of the funded work in this area is classified as proprietary; thus it is not possible to report on the full extent of the work going on. Most work applies neural networks, such as the Hopfield-Tank network, to optimization and scheduling.

13. Marketing: There is a marketing application which has been integrated with a neural network system. The Airline Marketing Tactician (a trademark abbreviated as AMT) is a computer system made of various intelligent technologies, including expert systems. A feed-forward neural network is integrated with the AMT and was trained using back-propagation to assist the marketing control of airline seat allocations. The adaptive neural approach was amenable to rule expression. Additionally, the application's environment changed rapidly and constantly, which required a continuously adaptive solution.

14. Credit evaluation: The HNC company, founded by Robert Hecht-Nielsen, has developed several neural network applications. One of them is the Credit Scoring system, which increases the profitability of the existing model by up to 27%. The HNC neural systems were also applied to mortgage screening. A neural network automated mortgage insurance underwriting system was developed by the Nestor Company. This system was trained with 5048 applications, of which 2597 were certified. The data related to property and borrower qualifications. In a conservative mode the system agreed with the underwriters on 97% of the cases; in the liberal mode it agreed on 49% of the cases. The system ran on an Apollo DN3000 and used 250K of memory while processing a case file in approximately 1 second.

ADVANTAGES OF ANN

1. Adaptive learning: an ability to learn how to do tasks based on the data given for training or initial experience.

2. Self-organisation: an ANN can create its own organisation or representation of the information it receives during learning time.

3. Real-time operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

4. Pattern recognition: a powerful technique for harnessing the information in the data and generalizing about it. Neural nets learn to recognize the patterns which exist in the data set.

5. The system is developed through learning rather than programming. Neural nets teach themselves the patterns in the data, freeing the analyst for more interesting work.

6. Neural networks are flexible in a changing environment. Although they may take some time to learn a sudden drastic change, they are excellent at adapting to constantly changing information; this is where rigid analytical approaches fail.
7. Neural networks can build informative models where more conventional approaches fail. Because they can handle very complex interactions, they can easily model data which is too difficult to model with traditional approaches, such as inferential statistics or programming logic. The performance of neural networks is at least as good as classical statistical modelling, and the models are more reflective of the structure of the data, built in significantly less time.

LIMITATIONS OF ANN

In this technological era everything has merits and demerits; the same is the case with every system, and this makes ANN technology weak in some points. The various limitations are:

1) ANN is not a daily-life, general-purpose problem solver.
2) There is no structured methodology available in ANN.
3) There is no single standardized paradigm for ANN development.
4) The output quality of an ANN may be unpredictable.
5) Many ANN systems do not describe how they solve problems.
6) Black-box nature.
7) Greater computational burden.
8) Proneness to overfitting.
9) Empirical nature of model development.

Forward pass: The forward pass takes the inputs, passes them through the network, and allows each neuron to react to a fraction of the input. Neurons generate their outputs and pass them on to the next layer, until eventually the network generates an output.

Error function: Defines how far the actual output of the current model is from the correct output. When training the model, the objective is to minimize the error function and bring the output as close as possible to the correct value.

Backpropagation: In order to discover the optimal weights for the neurons, we perform a backward pass, moving back from the network's prediction to the neurons that generated that prediction. This is called backpropagation. Backpropagation tracks the derivatives of the activation functions in each successive neuron, to find weights that bring the loss function to a minimum, which will generate the best prediction. This is a mathematical process called gradient descent (a compact sketch follows at the end of these definitions).

Bias and variance: When training neural networks, as in other machine learning techniques, we try to balance between bias and variance. Bias measures how well the model fits the training set, i.e. how correctly it predicts the known outputs of the training examples. Variance measures how well the model works with unknown inputs that were not available during training. Another meaning of bias is a "bias neuron", which is used in every layer of the neural network. The bias neuron holds the number 1, and makes it possible to move the activation function up, down, left and right on the number graph.

Hyperparameters: A hyperparameter is a setting that affects the structure or operation of the neural network. In real deep learning projects, tuning hyperparameters is the primary way to build a network that provides accurate predictions for a certain problem. Common hyperparameters include the number of hidden layers, the activation function, and how many times (epochs) training should be repeated.
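Putting the forward pass, error function, backpropagation and gradient descent together, here is a compact sketch that trains a one-hidden-layer network. The XOR task, architecture, learning rate and epoch count are illustrative assumptions chosen to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)      # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)      # output layer
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(5000):
    # Forward pass: each layer reacts to the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Error function: mean squared distance from the correct output.
    loss = np.mean((y - T) ** 2)
    # Backpropagation: track activation derivatives backwards
    # through each layer to get the weight gradients.
    dy = (y - T) * y * (1 - y)            # output-layer delta
    dh = (dy @ W2.T) * h * (1 - h)        # hidden-layer delta
    # Gradient descent: step the weights down the error surface.
    W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

print(loss, y.round(2).ravel())
```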
McCULLOCH-PITTS MODEL

In 1943, the neurophysiologist Warren McCulloch and the logician Walter Pitts published the first paper describing what we would now call a neural network.

The model may be divided into two parts. The first part, g, takes an input and performs an aggregation; based on the aggregated value, the second part, f, makes a decision. Let us suppose that I want to predict my own decision, whether to watch a random football game or not on TV. The inputs are all boolean, i.e. {0, 1}, and my output variable is also boolean {1: will watch it, 0: won't watch it}.

- x1 could be "is the Indian Premier League on" (I like the Premier League more)
- x2 could be "is it a knockout game" (I tend to care less about league-level matches)
- x3 could be "is not home" (I can't watch it when I'm in college. Can I?)
- x4 could be "is my favorite team playing", and so on.

These inputs can either be excitatory or inhibitory. Inhibitory inputs are those that have maximum effect on the decision making, irrespective of other inputs: if x3 is 1 (not home), then my output will always be 0, i.e. the neuron will never fire, so x3 is an inhibitory input. Excitatory inputs are NOT the ones that will make the neuron fire on their own, but they might cause it to fire when combined together. Formally, this is what is going on:

$$g(x_1, x_2, \ldots, x_n) = \sum_{i=1}^{n} x_i, \qquad y = f(g(x)) = \begin{cases} 1 & \text{if } g(x) \ge \theta \\ 0 & \text{otherwise} \end{cases}$$

We can see that g(x) is just doing a sum of the inputs, a simple aggregation, and θ is called the thresholding parameter. For example, if I always watch the game when the sum turns out to be 2 or more, then θ is 2 here. This is called thresholding logic.

The McCulloch-Pitts neural model is also known as a linear threshold gate. It classifies its set of inputs into two different classes; thus the output y is binary. Such a function can be described mathematically using these equations:

$$\mathrm{Sum} = \sum_{i=1}^{N} w_i x_i, \qquad y = f(\mathrm{Sum}) = \begin{cases} 1 & \text{if } \mathrm{Sum} \ge T \\ 0 & \text{otherwise} \end{cases}$$

where $w_1, w_2, \ldots, w_N$ are weight values normalized in the range of either (0, 1) or (-1, 1) and associated with each input line, Sum is the weighted sum, and T is a threshold constant. The function f is a linear step function at threshold T, as shown in figure 2.3. The symbolic representation of the linear threshold gate is shown in the figure below.

[Figure: the linear threshold function, and the symbolic illustration of the linear threshold gate with its inputs, weights, and threshold T.]

In any boolean function, all inputs are boolean and the output is also boolean. So, essentially, the neuron is just trying to learn a boolean function. A few examples follow (each is implemented in the sketch at the end of this section).

AND function: An AND-function neuron would only fire when ALL the inputs are ON, i.e. g(x) ≥ 3 here (for three inputs).

OR function: An OR-function neuron would fire if ANY of the inputs is ON, i.e. g(x) ≥ 1 here.

NOR function: For a NOR neuron to fire, we want ALL the inputs to be 0, so the thresholding parameter should also be 0, and we take all the inputs as inhibitory.

NOT function: For a NOT neuron, 1 outputs 0 and 0 outputs 1, so we take the input as an inhibitory input and set the thresholding parameter to 0.

We can summarize these rules with the McCulloch-Pitts output rule.

The McCulloch-Pitts model of a neuron is simple, yet it has substantial computing potential. It also has a precise mathematical definition. However, this model is so simplistic that it only generates a binary output, and the weight and threshold values are fixed. Neural computing algorithms have diverse features and applications; thus, we need a neural model with more flexible computational features.

A learning rule is a method or a mathematical logic which helps the neural network to learn from existing conditions and improve its performance. Thus, learning rules update the weights and bias levels of a network.
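Before moving on to learning rules, here is a minimal sketch of the McCulloch-Pitts unit itself, reproducing the boolean functions above: inhibitory inputs veto firing outright, and the excitatory inputs are summed against the threshold θ.

```python
def mcculloch_pitts(inputs, theta, inhibitory=()):
    """Fire (1) iff no inhibitory input is active and the sum of
    excitatory inputs reaches the threshold theta."""
    if any(inputs[i] for i in inhibitory):   # inhibitory veto
        return 0
    excitatory = [x for i, x in enumerate(inputs) if i not in inhibitory]
    return 1 if sum(excitatory) >= theta else 0

AND = lambda a, b, c: mcculloch_pitts([a, b, c], theta=3)   # fire only if ALL on
OR  = lambda a, b, c: mcculloch_pitts([a, b, c], theta=1)   # fire if ANY on
NOR = lambda a, b, c: mcculloch_pitts([a, b, c], theta=0, inhibitory=(0, 1, 2))
NOT = lambda a:       mcculloch_pitts([a],        theta=0, inhibitory=(0,))

print(AND(1, 1, 1), OR(0, 1, 0), NOR(0, 0, 0), NOT(0))  # -> 1 1 1 1
```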
