SC UNIT 1: Soft Computing
Q1. What is meant by an artificial neuron? Explain the working of the feed-forward method.

An artificial neuron, often referred to as a node or unit, is a fundamental building block of artificial neural networks. Inspired by the biological neurons in the human brain, the artificial neuron is a simplified computational model: it receives one or more inputs, processes them, and then produces an output.

The basic structure of an artificial neuron consists of the following components:
1. Inputs (x1, x2, x3, ..., xn): numerical values representing the input signals to the neuron.
2. Weights (w1, w2, w3, ..., wn): each input is associated with a weight, which reflects the strength of the connection between that input and the neuron.
3. Summation function (Σ): the inputs are multiplied by their respective weights and the products are summed up. Mathematically, sum = Σ (xi × wi).
4. Activation function (f): the sum obtained from the previous step is passed through an activation function. Common activation functions include sigmoid, hyperbolic tangent (tanh) and ReLU.
5. Output (y): the result of the activation function is the output of the neuron, which may be used as an input to other neurons or as the final output of the network.

WORKING OF THE FEED-FORWARD METHOD:
- In a feed-forward neural network, neurons are organized into layers: an input layer, one or more hidden layers and an output layer.
- Information flows through the network in one direction only, from the input layer to the output layer, without any cycles or loops.
- The process of information propagation through the network can be given as:
1. Input layer: the input layer receives the initial input data.
2. Hidden layers: each neuron in a hidden layer receives inputs from the neurons in the previous layer, processes them using a weighted sum and an activation function, and then passes the result to the neurons in the next layer.
3. Output layer: the final layer produces the output of the network based on the information processed through the hidden layers.

[Figure: feed-forward network with an input layer, a hidden layer and an output layer; inputs 1..n flow forward to the output.]
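The five components listed above can be illustrated with a short numerical sketch. This is a minimal example and not part of the original notes; the input values, weights, bias and use of NumPy are assumptions made purely for illustration.

    import numpy as np

    # Assumed toy values for one artificial neuron.
    x = np.array([0.5, 0.2, 0.8])        # inputs  (x1, x2, x3)
    w = np.array([0.4, -0.6, 0.3])       # weights (w1, w2, w3)
    b = 0.1                              # bias term

    net = np.dot(x, w) + b               # summation function: sum(xi * wi) + bias

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))  # activation function

    y = sigmoid(net)                     # output of the neuron
    print(net, y)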
Q2. List and elaborate the differences between a biological neuron and an artificial neuron.

Feature-wise comparison (ANN vs Biological NN):
- Definition: ANN: a mathematical model mainly inspired by the biological neuron system. Biological NN: composed of several processing elements (neurons) in the human brain that are linked together via synapses.
- Processing: ANN: processing is sequential and centralized. Biological NN: processes information in a parallel and distributed manner.
- Size: ANN: small in size. Biological NN: large in size.
- Control mechanism: ANN: a control unit keeps track of all computation-related operations. Biological NN: all processing is managed centrally.
- Rate/speed: ANN: processes information at a fast speed. Biological NN: processes information at a slow speed.
- Complexity: ANN: cannot perform complex pattern recognition the way the brain can. Biological NN: the large number and complexity of the connections allow the brain to perform complicated tasks.
- Feedback: ANN: does not provide any feedback. Biological NN: provides feedback.
- Structure: ANN: inputs, hidden layers, weights, output. Biological NN: dendrites, cell body (soma), synapses, axon.
- Fault tolerance: ANN: there is no fault tolerance. Biological NN: it has fault tolerance.
- Reliability: ANN: it is very vulnerable. Biological NN: it is robust.
- Data: ANN: needs very accurate, structured and formatted data. Biological NN: tolerant to ambiguity.
- Response time: ANN: response time is measured in nanoseconds. Biological NN: response time is measured in milliseconds.

Q3. Explain the components of ANNs.

Artificial neural networks consist of several interconnected components that work together to process information and make predictions. The main components of an ANN are listed below (a small sketch tying these components together is given after the biological-neuron discussion that follows):
1. Neurons (nodes or units): the basic processing units in an ANN. Each neuron receives one or more inputs, performs a weighted sum of these inputs, applies an activation function and produces an output. Neurons process information and pass it along to other neurons in the network.
2. Connections (edges or weights): connections exist between neurons and carry information from one neuron to another. Each connection has an associated weight that modulates the strength of the connection.
3. Input layer: responsible for receiving the initial input data. It serves as the entry point for external data into the neural network.
4. Hidden layers: layers of neurons between the input and output layers. Information passes through these layers and is processed, allowing the network to learn complex patterns and representations.
5. Output layer: produces the final output of the network; it provides the prediction of the network for the given input.
6. Loss function: measures the difference between the predicted output and the actual output, quantifying the error or deviation of the network's prediction. During training, the goal is to minimize the loss function, guiding the network to make more accurate predictions.
7. Optimizer: an algorithm that adjusts the weights and biases of the network during training to minimize the loss function. It guides the learning process by updating model parameters to improve predictive performance.

[Figure: network architecture showing input signals flowing forward and error signals flowing backward.]

Q4. Explain the working of a biological neuron and its components.

A biological neuron is the basic building block of the human nervous system and serves as the primary functional unit for information processing.

Components of a biological neuron:
1. Cell body (soma): contains the nucleus. It integrates incoming signals from the dendrites and decides whether to generate an output signal.
2. Dendrites: branching extensions that receive signals from other neurons and transmit them towards the cell body.
3. Axon: a long, slender projection that carries the neuron's output signal, called the action potential, away from the cell body towards other neurons.
4. Synapse: a junction between the axon terminal of one neuron and the dendrites of another. Signals released into the synapse transmit information from one neuron to the next.

[Figure: biological neuron showing cell body, dendrites, axon and synapse.]

Working of a biological neuron:
Step 1: Input signals: dendrites receive signals from other neurons in the form of neurotransmitters.
Step 2: Integration: the cell body (soma) integrates the signals received from the dendrites. If the combined input is sufficient to reach a certain threshold, the neuron generates an action potential.
Step 3: Activation: the generation of an action potential involves an electrochemical process that travels along the axon.
Step 4: Output signal: the action potential travels down the axon, and neurotransmitters are released into the synapse.
Step 5: Synaptic transmission: neurotransmitters released at the axon terminals bind to receptors on the dendrites of the next neuron, transmitting the signal.
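To tie together the ANN components listed in Q3 (input layer, hidden layer, output layer, connections/weights and loss function), here is a minimal forward-pass sketch. The layer sizes, random weights, target value and use of NumPy are assumptions made only for illustration and are not taken from the notes.

    import numpy as np

    np.random.seed(0)

    # Assumed toy dimensions: 3 inputs, 4 hidden neurons, 1 output.
    x  = np.array([0.3, 0.5, 0.6])           # input layer: external data
    W1 = np.random.randn(3, 4) * 0.5         # connections (weights) input -> hidden
    b1 = np.zeros(4)
    W2 = np.random.randn(4, 1) * 0.5         # connections (weights) hidden -> output
    b2 = np.zeros(1)

    h      = np.tanh(x @ W1 + b1)            # hidden layer: weighted sum + activation
    y_pred = h @ W2 + b2                     # output layer: network prediction

    y_true = np.array([0.2])                 # assumed target value
    loss   = np.mean((y_pred - y_true) ** 2) # loss function: mean squared error
    print(y_pred, loss)

    # An optimizer (e.g. gradient descent) would now adjust W1, b1, W2, b2
    # so as to reduce this loss over repeated training iterations.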
Q5. Explain the framework of a neural network with a neat sketch.

- The term "Artificial Neural Network" is derived from biological neural networks, which shape the structure of the human brain. Similar to the human brain, which has neurons interconnected with one another, artificial neural networks also have neurons interconnected with one another in the various layers of the network.
- These neurons are known as nodes.

[Figure: side-by-side sketch of a biological neural network and an artificial neural network.]

Correspondence between biological and artificial neural networks:
- Dendrites: Inputs
- Cell nucleus: Nodes
- Synapse: Weights
- Axon: Output

- The artificial neural network is designed by programming computers to behave simply like interconnected brain cells.

The architecture of an artificial neural network:
An artificial neural network primarily consists of three layers:
- Input layer: as the name suggests, it accepts inputs in several different formats provided by the programmer.
- Hidden layer: the hidden layer is present in between the input and output layers. It performs all the calculations to find hidden features and patterns.
- Output layer: the input goes through a series of transformations using the hidden layer, which finally results in an output that is conveyed using this layer.

The artificial neural network takes the inputs, computes the weighted sum of the inputs and includes a bias. This computation is represented in the form of a transfer function. The weighted total is then passed as an input to an activation function to produce the output.

Q6. Derive the first-order derivatives of the sigmoid and tanh functions.

- Activation functions are used to provide the output of a node. They determine the output of a neural network (for example, yes or no) and map the resulting values into the range 0 to 1 or -1 to 1, depending on the function used.
- Activation functions are broadly divided into two types:
1. Linear activation functions.
2. Non-linear activation functions.
The sigmoid and tanh functions come under non-linear activation functions.

(a) Sigmoid or logistic activation function:
- The sigmoid curve looks like an S-shape.
- The main reason we use the sigmoid function is that its output exists between 0 and 1. It is therefore especially used for models where we have to predict a probability as an output, because a probability only exists in the range 0 to 1.

First-order derivative of the sigmoid function:
The sigmoid function, denoted by σ(x), is defined as:
    σ(x) = 1 / (1 + e^(-x))
Now let us find the first-order derivative with respect to x:
    d/dx σ(x) = d/dx (1 + e^(-x))^(-1)
              = e^(-x) / (1 + e^(-x))^2
              = [1 / (1 + e^(-x))] × [e^(-x) / (1 + e^(-x))]
              = σ(x) (1 - σ(x))
Hence σ'(x) = σ(x)(1 - σ(x)).
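The closed-form derivative above can be verified numerically. The following is a small illustrative sketch; the test points and step size are assumptions, not part of the notes.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_prime(x):
        s = sigmoid(x)
        return s * (1.0 - s)          # analytic derivative: sigma(x) * (1 - sigma(x))

    # Central finite-difference check at a few assumed test points.
    xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    h  = 1e-5
    numeric = (sigmoid(xs + h) - sigmoid(xs - h)) / (2 * h)
    print(np.max(np.abs(numeric - sigmoid_prime(xs))))   # should be close to 0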
(b) Hyperbolic tangent (tanh) activation function:
- The range of the tanh function is from -1 to +1.
- tanh is also sigmoidal (S-shaped).

[Figure: tanh curve ranging between -1 and +1.]

First-order derivative of the tanh function:
The hyperbolic tangent function, denoted by tanh(x), is defined as:
    tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
Now let us find the first-order derivative with respect to x:
    d/dx tanh(x) = 1 - tanh^2(x)

Q7. Define activation function. Discuss non-linear activation functions in neural networks and their significance in the processing of information, and explain ReLU and Leaky ReLU.

(c) Rectified linear unit (ReLU):
- The ReLU function is the most widely used activation function.
- It is defined as: f(x) = max(0, x).
- The main advantage of using the ReLU function over other activation functions is that it does not activate all the neurons at the same time.
- In the ReLU function, if the input is negative it will be converted to zero and the neuron does not get activated.

(d) Leaky ReLU:
- The Leaky ReLU function is nothing but an improved version of the ReLU function.
- Instead of defining the ReLU function as 0 for x < 0, we define it as a small linear component of x.
- It can be defined as: f(x) = ax for x < 0, and f(x) = x otherwise.

Significance of activation functions in the processing of information:
1. Activation functions introduce non-linearity into the network.
2. They enable the neural network to learn and represent complex transformations and mappings, allowing the modelling of non-linear relationships.
3. They help mitigate the vanishing/exploding gradient problem during backpropagation.
4. They contribute to the normalization of outputs.

Q8. Explain the auto-associative and hetero-associative memory concepts in neural networks.

The two different types of associative memories, also called content-addressable memory (CAM), that are designed for different purposes are:
1. Auto-associative memory
2. Hetero-associative memory

1. AUTO-ASSOCIATIVE MEMORY:
- This is a single-layer network in which the input training vectors and the output target vectors are the same.
Architecture: [Figure: single-layer network with 'n' inputs and 'n' outputs.]
Working:
- During the training phase, the network is presented with a set of input patterns. The network learns to associate each input pattern with itself.
- During recall, if the presented pattern is not complete, the network should recall (reconstruct) the complete pattern.
Application:
- Auto-associative memory is often used for pattern completion or pattern reconstruction.

2. HETERO-ASSOCIATIVE MEMORY:
- This is also a single-layer network, but in this network the input training vector and the output target vector are not the same.
Architecture: [Figure: single-layer network with 'n' inputs and 'm' outputs.]
Working:
- During the training phase, the network is presented with pairs of input patterns and target output patterns. The network learns to associate each input pattern with the corresponding output pattern.
- During recall/testing, if a trained input pattern is presented, the network should recall the associated output pattern.
Application:
- Hetero-associative memories are often used in applications where the goal is to map one set of patterns to another, e.g. tasks such as recognition, classification and mapping.
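A common way to build such associative memories is the Hebbian (outer-product) rule. The following is a minimal hetero-associative sketch; the specific bipolar patterns, the sign-based recall step and the use of NumPy are illustrative assumptions and are not taken from the notes.

    import numpy as np

    # Two assumed bipolar training pairs (inputs of length 4, targets of length 2).
    S = np.array([[ 1,  1, -1, -1],
                  [ 1, -1,  1, -1]])      # input patterns
    T = np.array([[ 1, -1],
                  [-1,  1]])              # associated target patterns

    # Hebbian outer-product rule: W = sum over pairs of s^T t
    W = S.T @ T

    # Recall: present a stored input pattern and threshold the weighted sum.
    x = S[0]
    y = np.sign(x @ W)
    print(y)                              # recovers T[0] = [ 1 -1]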
Q9. Explain the RNN architecture and list out the differences between a single-layer feed-forward NN and a multilayer feed-forward NN.

- A recurrent neural network (RNN) is a type of neural network where the output from the previous step is fed as input to the current step.
- In a traditional neural network, all the inputs and outputs are independent of each other, but to predict the next word in a sentence the previous words are required.
- The main and most important feature of an RNN is the hidden state, which remembers some information about a sequence.
- An RNN has a memory that remembers all the information about what has been calculated so far.
- An RNN works on the principle of saving the output of a particular layer and feeding it back to the input in order to predict the output.

RNN computation at each time step t:
- New state: h_t = f(h_(t-1), x_t), where h_(t-1) is the old state, x_t is the input vector at time step t, and f is a function with parameters W.
- The same function and the same set of parameters are applied at every time step, so the recurrence is applied over the whole sequence of vectors.
- With a tanh activation: h_t = tanh(W_hh · h_(t-1) + W_xh · x_t)
- Calculating the output at time step t: y_t = W_hy · h_t
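A minimal sketch of this recurrence, unrolled over a short sequence, is given below. The dimensions, random weights and toy input sequence are assumptions made purely for illustration.

    import numpy as np

    np.random.seed(1)

    input_dim, hidden_dim, output_dim = 3, 5, 2

    # Assumed randomly initialised parameters, shared across all time steps.
    W_xh = np.random.randn(hidden_dim, input_dim) * 0.1
    W_hh = np.random.randn(hidden_dim, hidden_dim) * 0.1
    W_hy = np.random.randn(output_dim, hidden_dim) * 0.1

    xs = [np.random.randn(input_dim) for _ in range(4)]   # toy input sequence
    h  = np.zeros(hidden_dim)                             # initial hidden state

    for x_t in xs:
        h   = np.tanh(W_hh @ h + W_xh @ x_t)   # h_t = tanh(W_hh h_(t-1) + W_xh x_t)
        y_t = W_hy @ h                         # y_t = W_hy h_t
        print(y_t)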
Difference between single-layer and multilayer feed-forward neural networks:

Single-layer feed-forward neural network:
- Consists of only one layer of neurons, which directly connects the inputs to the output; it does not have hidden layers.
- Limited learning capacity due to the absence of hidden layers.
- Cannot represent complex relationships and hierarchical features in data.
- Training complexity is very low, as there is no backpropagation through hidden layers.
- Inputs and outputs are linked directly with each other.
- Not efficient in certain areas.
[Figure: input layer connected directly to the output layer.]

Multilayer feed-forward neural network:
- Consists of multiple layers of neurons, including an input layer, one or more hidden layers and an output layer.
- Higher learning capacity; can represent complex relationships and features in data.
- Training can be more complex, especially with deeper architectures, and uses backpropagation.
- There are multiple layers between the input and output layers, which are known as hidden layers.
- The more hidden layers, the more the complexity, but a more efficient output is produced.
[Figure: input layer, hidden layer(s) and output layer.]

Q10. Discuss the convergence rule in neural networks and its impact on the training process.

- The convergence rule is the principle that governs when and how the training process of a neural network should be considered complete.
- Convergence is achieved when the weights and biases of the network have reached a state where further iterations do not significantly improve performance on the training data.
- The impact of the convergence rule on the training process is crucial for ensuring that the network learns effectively and efficiently.

Impact on the training process (see the sketch after this list):
1. Computational efficiency: applying a convergence rule helps prevent unnecessary computation once the network has reached a satisfactory state.
2. Overfitting: monitoring convergence criteria, such as validation performance, helps prevent overfitting.
3. Early stopping: this involves halting training when the model's performance on a validation set starts to deteriorate, which prevents the model from overfitting.
4. Resource management: managing computational resources efficiently is crucial; the convergence rule helps avoid unnecessary computation when the model has already learned the essential patterns.
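As an illustration of the early-stopping idea above, here is a minimal sketch. The synthetic validation-loss values and the patience setting are assumptions made purely for demonstration, not part of the notes.

    # Assumed synthetic validation losses: improve for a while, then worsen (overfitting).
    val_losses = [0.90, 0.72, 0.60, 0.55, 0.53, 0.54, 0.56, 0.58, 0.61]

    patience  = 2          # stop after this many epochs without improvement
    best_loss = float("inf")
    wait      = 0

    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, wait = loss, 0        # validation still improving
        else:
            wait += 1                        # no improvement this epoch
            if wait >= patience:
                print(f"Early stopping at epoch {epoch}, best val loss {best_loss}")
                break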
Q11. Define soft computing. Give the mathematical representation of artificial neurons and calculate the net input to the output neuron when the inputs are x = {0.3, 0.5, 0.6} and the weights are w = {0.2, 0.1, -0.3}.

Soft computing:
- Soft computing refers to the development of intelligent systems that are capable of handling uncertainty and of solving complex real-world problems that may be difficult to solve using conventional (crisp) computing methods.

Mathematical representation of an artificial neuron:
- The mathematical representation of an artificial neuron involves calculating the weighted sum of the inputs, passing it through an activation function, and producing an output:
    y = f(net), where y = output and f = activation function
    net = Σ (xi × wi) + bias

Problem:
Let us calculate the net input to the output neuron based on the given inputs and weights.
Given: inputs x = {x1, x2, x3} = {0.3, 0.5, 0.6}, weights w = {w1, w2, w3} = {0.2, 0.1, -0.3}.
For simplicity, let us assume bias = 0.

Solution:
    net = Σ (xi × wi) + bias
        = (0.3 × 0.2) + (0.5 × 0.1) + (0.6 × -0.3) + 0
        = 0.06 + 0.05 - 0.18
        = -0.07
So, the net input to the output neuron is -0.07.
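The arithmetic can be double-checked with a one-line dot product; this is a trivial illustrative sketch using NumPy (an assumption, not part of the notes).

    import numpy as np

    x = np.array([0.3, 0.5, 0.6])
    w = np.array([0.2, 0.1, -0.3])
    print(np.dot(x, w).round(2))    # prints -0.07 (net input with bias = 0)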