This kind of representation makes it possible for words with similar meaning to have a similar representation, which can improve the performance of classifiers.

2. Classification Algorithms
The classification step in general involves a statistical model like Naive Bayes, Logistic Regression, Support Vector Machines, or Neural Networks.
(i) Naive Bayes : A family of probabilistic algorithms that use Bayes' Theorem to predict the category of a text.
(ii) Linear Regression : A very well-known algorithm in statistics used to predict some value (Y) given a set of features (X).
(iii) Support Vector Machines : A non-probabilistic model which uses a representation of text examples as points in a multidimensional space. Examples of different categories (sentiments) are mapped to distinct regions within that space. Then, new texts are assigned a category based on similarities with existing texts and the regions they are mapped to.
(iv) Deep Learning : A diverse set of algorithms that attempt to mimic the human brain by employing artificial neural networks to process data.

• Hybrid Approaches
Hybrid systems combine the desirable elements of rule-based and automatic techniques into one system. One huge benefit of these systems is that the results are often more accurate.

• Sentiment Analysis Challenges
Data scientists are getting better at creating more accurate sentiment classifiers; we mention below some of the main challenges of machine-based sentiment analysis :
1. Subjectivity and tone
2. Context and polarity
3. Irony and sarcasm
4. Comparisons
5. Emojis
6. Defining neutral
7. Human annotator accuracy

Chapter Ends.

Unit 3 : Convolution Neural Network (CNN)

Syllabus : Introduction, CNN architecture overview, The Basic Structure of a Convolutional Network - Padding, Strides, Typical Settings, the ReLU Layer, Pooling, Fully Connected Layers, The Interleaving between Layers, Local Response Normalization, Training a Convolutional Network.

Chapter Contents
3.1 Convolutional Networks
    GQ. Explain convolutional networks.
3.2 Applications of Convolution
    GQ. Mention applications of convolution.
    3.2.3 Convolution Formulas
    3.2.4 Notation
    3.2.5 Formation of Convolution
    3.2.6 Circular Convolution
    GQ. What is circular and discrete convolution operation ?
3.3 Parameter Sharing
    GQ. What is parameter sharing ? Explain its use.
3.4 Padding
    GQ. Discuss padding or Write a short note on Padding.
    3.4.1 Problem with Simple Convolution Layer
    3.4.2 Types of Padding
    GQ. What are types of Padding ?
3.5 Strided Convolution
    GQ. What is strided convolution ?
    3.5.1 Output Dimension
3.6 Rectified Linear Unit
    GQ. Discuss rectified linear unit and its advantages.
    3.6.1 Advantages of Rectified Linear Unit
    3.6.2 Disadvantages of Rectified Linear Unit
    3.6.3 Variants : Linear Variants
    GQ. Explain different types of variants.
    3.6.4 Non-Linear Variants
    GQ. Explain different types of variants.
    3.6.5 One Layer of a Convolutional Network
    3.6.6 Fully Connected Layers
    3.6.7 Regularisation
3.7 Pooling Layers
    GQ. Explain pooling layers and its different types.
    3.7.1 Average Pooling
3.8 Variants of the Basic Convolution Function
    3.8.1 Dilated Convolutions
    3.8.2 Transposed Convolutions
    3.8.3 Separable Convolutions
3.9 Locally Connected Layer
3.10 Convolution Neural Network (CNN) Architecture
    GQ. Explain CNN architecture.
    3.10.1 Architecture
    3.10.2 Functions of Hidden Layers
    GQ. Explain function of hidden layers.
    3.10.3 Design of Multilayer Perceptron
    GQ. Explain in detail Multilayer Perceptron.
    3.10.4 Performance Measure
    3.10.5 Input Layer
3.11 Popular CNN Architecture - AlexNet
3.12 The Interleaving between Layers
3.13 Local Response Normalisation (LRN)
    3.13.1 Inter-Channel LRN
    3.13.2 Intra-Channel LRN
    3.13.3 Batch Normalisation Vs Local Response Normalisation (LRN)
    3.13.4 Need of Normalisation Layers
    3.13.5 Exactness of LRN
    3.13.6 Practical Workout
    3.13.7 AlexNet and Batch Normalisation
    3.13.8 Layer Normalisation and Batch Normalisation
    3.13.9 Normalisation Vs Standardisation
    3.13.10 Limitations of Batch Normalisation
    3.13.11 Advantages of Batch Normalisation
3.14 Convolution Operation
    3.14.1 Cross-Correlation
    3.14.2 Applications of Convolution
    GQ. Mention applications of convolution.
    3.14.3 Convolution Formulas
    3.14.4 Notation
    3.14.5 Formation of Convolution
    3.14.6 Circular Convolution
    GQ. What is circular and discrete convolution operation ?
    3.14.7 Discrete Convolution Operation
    3.14.8 ...
3.15 Convolutional Networks
    GQ. Explain convolutional networks.
Training a Convolutional Network
Chapter Ends.

3.1 CONVOLUTIONAL NETWORKS

GQ. Explain convolutional networks.

• A convolutional network is a special class of multilayer perceptron, designed to recognise two-dimensional shapes with a high degree of accuracy under skewing, scaling, translation and other forms of distortion. This task is achieved in a supervised manner, and the design includes the following forms of constraints :
(1) Each neuron extracts a local feature from the previous layer. Convolution layers apply a convolution operation to the input, passing the result to the next layer.
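To make the convolution operation described above concrete, here is a minimal sketch of a single-channel 2-D convolution (implemented, as most deep-learning libraries do, as a cross-correlation). The array values and the `conv2d_valid` helper are illustrative, not from the textbook.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide an (f x f) kernel over an (n x n) image with stride 1 and no padding.

    Output size is (n - f + 1) x (n - f + 1), matching the formula used later
    in this chapter.
    """
    n, f = image.shape[0], kernel.shape[0]
    out = np.zeros((n - f + 1, n - f + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Weighted sum of the patch under the kernel (cross-correlation).
            out[i, j] = np.sum(image[i:i + f, j:j + f] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)   # toy 6 x 6 input
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])                 # simple vertical-edge filter
print(conv2d_valid(image, kernel).shape)           # (4, 4)
```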
3.3 PARAMETER SHARING

GQ. What is parameter sharing ? Explain its use.

(1) Benefits of parameter sharing
• Convolution Neural Networks use a couple of techniques known as parameter sharing and parameter tying. Parameter sharing is the method of sharing weights by all neurons in a particular feature map. It helps to reduce the number of parameters in the whole system, and it makes it computationally cheap.
• Parameter sharing is used in all convolution layers in the network. Parameter sharing reduces the training time, and that directly reduces the number of weight updates during back-propagation.
• Recurrent Neural Networks also use shared parameters. The shared-weights perspective comes from thinking about RNNs as feed-forward networks : if the weights were different at each moment, this would be a feed-forward network.

(2) Importance of parameter sharing
• The main purpose of parameter sharing is a reduction of the parameters that the model has to learn.
• This is also the purpose of using an RNN : if one learns a different network for each time step and feeds the output of the first model to the second, and so on, one ends up with a regular feed-forward network.

(3) Weight sharing in CNN
• A typical application of weight sharing is to share the same weights across all positions of a filter.
• In this context weight sharing has the following effect : it reduces the number of weights that must be learned, and that reduces model learning time and cost.

(4) Parameters of CNN
• In a CNN, each layer has two kinds of parameters : weights and biases.
• The total number of parameters is just the sum of all weights and biases.
• Number of parameters of a convolutional layer = number of weights of the convolutional layer + number of biases of the convolutional layer.

(5) Parameters in a neural network
• Parameters are the coefficients of the model, and they are chosen by the model itself.
• It means that the algorithm, while learning, optimises these coefficients and returns an array of parameters which minimises the error.

(6) Calculation of CNN parameters
• Number of parameters of a CONV layer : this is where a CNN learns, hence we have weight matrices. To calculate the learnable parameters, all we have to do is multiply the kernel width m, the kernel height n and the previous layer's number of filters d, and account for all such filters k in the current layer (a short numeric check is given in the sketch below).

(7) Parameters in deep learning
• Model parameters are properties of the training data that the model will learn during the learning process.
• In the case of deep learning, the weights and biases are the parameters. A metric, by contrast, is used as a measure of how well a model is performing.

(8) Parameters in a model
• A model parameter is a configuration variable; it is internal to the model and its value can be estimated from data.
• Parameters are required by the model when making predictions. Their values define the skill of the model on the problem, and they are estimated from data.

(9) Difference between a variable and a parameter
• There is a clear difference between variables and parameters. A variable represents a model state, and may change during simulation.
• A parameter is commonly used to describe an object statistically. A parameter is normally a constant in a single simulation, and is changed only when you need to adjust your model behaviour.
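As a quick numeric check of the counting rule in point (6) above, here is a minimal sketch; the layer sizes are made up for illustration, and the cross-check assumes PyTorch is installed.

```python
import torch.nn as nn

# Hypothetical layer: 3x3 kernels, d = 16 input channels, k = 32 filters.
m, n, d, k = 3, 3, 16, 32
weights = m * n * d * k          # 4608 shared weights
biases = k                       # one bias per filter
print(weights + biases)          # 4640 learnable parameters

# Cross-check against PyTorch's Conv2d.
conv = nn.Conv2d(in_channels=d, out_channels=k, kernel_size=(m, n))
print(sum(p.numel() for p in conv.parameters()))   # 4640
```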
(10) Parameter sharing in deep learning
• The two main approaches for performing MTL (Multi-Task Learning) with neural networks are hard and soft parameter sharing. Here we wish to learn shared or mutual hidden representations for the different tasks.
• In order to impose these similarities between tasks, the model learns simultaneously for all tasks, with some constraints or regularisation on the relationship between related parameters.

(11) Hard parameter sharing
• In hard parameter sharing we learn a common space representation for all tasks (i.e. we completely share weights between tasks). This shared feature space is then used to model the different tasks, usually with additional task-specific layers.
• Hard parameter sharing acts as regularisation and reduces the risk of overfitting, as the model learns a representation that will generalise well for all tasks.

Fig. 3.3.1 : Hard parameter sharing architecture (task-specific layers on top of shared layers); example of hard parameter sharing

(12) Soft parameter sharing
• Instead of sharing exactly the same values of the parameters, in soft parameter sharing we add a constraint to encourage similarities among related parameters.
• We learn a model for each task and penalise the distance between the different models' parameters. This approach gives more flexibility for the tasks by only loosely coupling the shared space representations.
• We exhibit an example of a soft parameter sharing architecture. Let us suppose that we are interested in learning two tasks A and B, and denote the layer-l parameters for task A by W_A^(l) (and similarly W_B^(l) for task B). One possible approach to impose similarities between corresponding parameters is to augment our loss function with an additional loss term :

    L = Σ_t L_t + Σ_l λ_l · || W_A^(l) − W_B^(l) ||_F²

where L_t is the original loss function for each task t (e.g. a sum of squares) and || · ||_F² is the squared Frobenius norm.
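The loss term above can be sketched in a few lines of PyTorch. The two task layers and the λ value below are hypothetical; this only shows how the squared Frobenius penalty loosely couples the corresponding parameters.

```python
import torch
import torch.nn as nn

# Two hypothetical task-specific models with identically shaped first layers.
task_a = nn.Linear(10, 5)
task_b = nn.Linear(10, 5)
lam = 0.1                                  # coupling strength for soft sharing

def soft_sharing_penalty(layer_a, layer_b, lam):
    # Squared Frobenius norm of the difference between corresponding weights.
    return lam * torch.sum((layer_a.weight - layer_b.weight) ** 2)

x = torch.randn(4, 10)
task_loss_a = task_a(x).pow(2).mean()      # placeholder per-task losses
task_loss_b = task_b(x).pow(2).mean()
total_loss = task_loss_a + task_loss_b + soft_sharing_penalty(task_a, task_b, lam)
total_loss.backward()                      # gradients flow into both models
```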
3.4 PADDING

GQ. Discuss padding or Write a short note on Padding.

• In Convolutional Neural Networks (CNN), padding refers to the number of pixels added to an image when it is being processed by the kernel of a CNN.
• For example, with zero padding, every pixel value that is added is of value zero.
• Padding is simply a process of adding layers of zeros to our input images, so as to avoid the problems discussed below.
• CNNs are used extensively for tackling problems occurring in image processing and in predictive modelling or classification tasks. The main application of a CNN is to analyse image data.
• Now, every image in any dataset is a matrix of its pixel values. When working with a simple CNN on an image, we get an output reduced in size, and that is a loss of data; this hinders obtaining a proper result according to our requirements.
• When we do not want the shape of our outputs to reduce in size, the addition of extra layers around the data can help to obtain a proper result, and that addition can be done by padding.
• Here we study padding, its importance and how to use it with CNN models. We also see different methods of padding and how they can be implemented.

3.4.1 Problem with Simple Convolution Layer

• A simple convolution of an (n × n) input with an (f × f) filter/kernel gives an output image of size (n − f + 1) × (n − f + 1).
• For example, in any convolution operation with an (8 × 8) image and a (3 × 3) filter, the output image size will be (6 × 6). Thus the output of the layer is shrunk in comparison to the input. Again, the filters we are using may not focus on the corners every time they move over the pixels.

Fig. 3.4.1 : CNN structure (corner pixel versus edge pixel under a 3 × 3 filter)

• The above figure is an example of the movement of a filter of size (3 × 3) on an image of size (6 × 6); corner pixel A comes under the filter in only one movement. This shows that pixel A is under-represented. This causes loss of the information available in the corners, and also the output from the layer is reduced; this reduced information may create confusion for the next layer. This problem of the model can be solved by a padding layer.
• The convolution layers reduce the size of the output. So when we want to save the information present in the corners, we can use padding layers, where padding helps by adding extra rows and columns on the outer dimension of the images. So the size of the output data will remain similar to the input data.
• Padding basically extends the area of the image which the convolutional neural network processes. The kernel/filter which moves across the image scans each pixel and converts the image into a smaller image.
• Padding is added to the outer frame of the image to allow more space for the filter to cover in the image. This creates a more accurate analysis of images.

3.4.2 Types of Padding

GQ. What are types of Padding ?

We come across three types of padding :
(1) Same padding (2) Valid padding (3) Causal padding

(1) Same padding : In this type of padding, the padding layers have zero values in the outer frame of the images or data, so the filters we are using can cover the edges of the matrix and carry the required inference.
(2) Valid padding : This type of padding is also called no padding. Here we don't apply any padding, but the input gets fully covered by the filter, and so every pixel of the image is valid. Valid padding is to be used with max-pooling layers. Here we use every pixel or point value while learning the model, as valid padding works on the validity of a pixel and not on the size of the input.
(3) Causal padding : This type of padding works with one-dimensional convolution layers. We can use it mainly in time-series analysis. Since a time series is sequential data, it helps by adding zeros at the start of the data, and it helps in predicting the values of early time steps.

• We consider an example of a simplified image to understand the idea of edge detection. Here, a 6 × 6 matrix convolved with a 3 × 3 matrix gives a 4 × 4 output matrix. Recall that if an m × n image is convolved with an f × f kernel or filter, then the output image is of size (m − f + 1) × (n − f + 1).

Fig. 3.4.2 : A 6 × 6 image convolved with a 3 × 3 filter

• As we have already seen, there are two problems with convolution :
1. After each convolution operation, our original image becomes smaller, which we don't want to happen.
2. Corner features of any image are not used much in the output.

Fig. 3.4.3 : Padded image convolved with a 3 × 3 kernel

• Hence, if an (n × n) matrix is convolved with an (f × f) matrix with padding P, then the size of the output image becomes (n + 2P − f + 1) × (n + 2P − f + 1); here P = 1.

Remark : Zero padding is introduced to make the shapes match as required, equally on every side of the input map. 'Valid' means no padding.

Fig. 3.4.4 : Same versus valid padding with CNN
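A small sketch of the 'same' versus 'valid' behaviour, using PyTorch's string padding modes (available for stride-1 convolutions in recent PyTorch versions); the tensor sizes are illustrative.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 8, 8)                                # one 8 x 8 single-channel image

valid = nn.Conv2d(1, 1, kernel_size=3, padding='valid')    # no padding
same = nn.Conv2d(1, 1, kernel_size=3, padding='same')      # zero padding, P = 1 here

print(valid(x).shape)   # torch.Size([1, 1, 6, 6])  -> (n - f + 1)
print(same(x).shape)    # torch.Size([1, 1, 8, 8])  -> (n + 2P - f + 1) with P = 1
```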
3.5 STRIDED CONVOLUTION

GQ. What is strided convolution ?

• Definition : Basically, stride means the number of pixels by which the filter slides over the input matrix.
• A strided convolution is another type of building block of CNNs that is used in convolutional neural networks. Suppose we want to convolve a 7 × 7 image with 3 × 3 filters using a stride of 2 : the stride is the distance between two consecutive locations where the convolution kernel is applied.
• The whole concept of strided convolution is that we can stride the window over the input vector, matrix or tensor. The stride parameter indicates the length of the step, i.e. the stride. In any framework its default is always 1.
• Of course we can increase the stride length in order to save space or cut calculation time; we may lose some information while doing so. We cannot have a stride of 0; this would mean not sliding at all.
• Applying convolution means sliding a kernel over an input signal and outputting a weighted sum, where the weights are the values inside the kernel. If we use a convolution with a (2, 2) stride, the stride is 2 in both the x and y directions; this can be followed by a non-strided convolution with stride 1, step 1.
• We observe that the output of a convolution with stride 2 halves the width and height of the input, whereas the output of a convolution with stride 1 has width = input width − 2 and height = input height − 2, since the kernel is 3 × 3.

Fig. 3.5.1 : Convolution with different strides (left image : stride = 0, middle image : stride = 1, right image : stride = 2)

3.5.1 Output Dimension

For padding P, filter size (f × f), input image size (n × n) and stride s, the output image dimension will be

    [ (n + 2P − f)/s + 1 ] × [ (n + 2P − f)/s + 1 ]

(taking the floor of the division when it is not exact).
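A quick sketch of the output-dimension formula as a helper function (the example numbers are made up):

```python
def conv_output_size(n, f, p=0, s=1):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

print(conv_output_size(8, 3))             # 6 : valid convolution, stride 1
print(conv_output_size(8, 3, p=1))        # 8 : 'same' padding, stride 1
print(conv_output_size(7, 3, p=0, s=2))   # 3 : 7x7 image, 3x3 filter, stride 2
```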
3.6 RECTIFIED LINEAR UNIT

GQ. Discuss rectified linear unit and its advantages.

• Definition : In artificial neural networks, the rectifier activation function is an activation function defined as the positive part of its argument :

    f(x) = x⁺ = max(0, x)

where x is the input to a neuron. This is also known as a ramp function, and is analogous to half-wave rectification in electrical engineering.
• A node or unit that implements this activation function is referred to as a rectified linear activation unit, or ReLU.

Fig. 3.6.1 : Plot of the ReLU rectifier near x = 0, and the GELU (Gaussian Error Linear Unit) function

• In a neural network, the activation function is responsible for transforming the summed weighted input of a node into the activation of the node, or the output for that input.
• The rectified linear activation function, or ReLU, is a piecewise linear function that outputs the input directly if it is positive; otherwise the output is zero.
• It has become the default activation function for many types of neural networks, because a model that uses it is easier to train and often achieves better performance.

Here we discuss the rectified linear activation function (ReLU) for deep-learning neural networks :
(1) The sigmoid and hyperbolic tangent activation functions cannot be used in networks with many layers because of gradient problems : the gradient vanishes. The rectified linear activation function overcomes the vanishing-gradient problem, allowing models to learn faster and to perform better.
(2) The rectified linear activation function is the default activation when developing multilayer perceptrons and convolutional neural networks.
(3) For a given node, the inputs are multiplied by the weights of the node and summed together. This value is referred to as the summed activation of the node.
(4) The summed activation is then transformed via an activation function and defines the specific output, or 'activation', of the node.
(5) The linear activation function is the simplest activation function. A network consisting of linear activation functions cannot learn complex mapping functions, but it is very easy to train. Linear activation functions predict well in regression problems, hence they are used in the output layer of such networks.
(6) On the other hand, nonlinear activation functions learn more complex structures in the data. The sigmoid and hyperbolic tangent activation functions are used as nonlinear activation functions.
(7) The sigmoid activation function is also called the logistic function in neural networks. The input to the function is transformed into a value between 0.0 and 1.0. Inputs much larger than zero are mapped close to 1.0; similarly, values much smaller than zero are mapped close to 0.0. The shape of the function is an S-shape from zero, up through 0.5, to 1.0.
(8) The hyperbolic tangent function is a similarly shaped nonlinear activation function, and its output values lie between −1.0 and 1.0. It tends to have better predictive performance and to be easier to train; thus the hyperbolic tangent's performance is better than that of the logistic sigmoid.
(9) The sigmoid and tanh functions saturate : the tanh function saturates at −1.0 and 1.0 for large-magnitude inputs, and the sigmoid function saturates at 0.0 and 1.0.

3.6.1 Advantages of Rectified Linear Unit

(1) Gradient propagation is better : there are very few vanishing-gradient problems compared to sigmoidal activation functions (which, moreover, saturate in both directions). In a randomly initialised network, only about 50% of hidden units have a non-zero output.
(2) Computation is fast and easy : it involves only addition, multiplication and comparison.
(3) It is scale invariant, i.e. max(0, ax) = a · max(0, x) for a ≥ 0.
(4) Rectifying activation functions were used to separate specific excitation and unspecific inhibition in neural networks.
(5) The use of the rectifier as a non-linearity enables deep supervised training without requiring unsupervised pre-training. Compared to the sigmoid function or other similar activation functions, rectified linear units allow effective training of deep neural architectures on complex databases.

3.6.2 Disadvantages of Rectified Linear Unit

(1) The function is differentiable except at zero, and the value of the derivative at 0 is arbitrarily chosen as 0 or 1.
(2) It is not bounded.
(3) It is not zero-centred.
(4) Rectified linear unit neurons sometimes become inactive for essentially all inputs. In this state, no gradient flows backward through the neuron, so the neuron remains in an inactive state and 'dies'. This is called the 'dying ReLU' problem.
(5) In some cases, model capacity gets decreased because a large number of neurons in a network get stuck in dead states. This problem happens especially when the learning rate is too high. By using leaky ReLUs, this problem is reduced.

3.6.3 Variants : Linear Variants

GQ. Explain different types of variants.

(i) Leaky ReLU : When the unit is not active, leaky ReLUs allow a small, positive gradient :

    f(x) = x         if x > 0
    f(x) = 0.01 x    otherwise

(ii) Parametric ReLU : This is an advanced variant of the leaky ReLU. Here the coefficient of leakage is made into a parameter that is learned along with the other neural network parameters :

    f(x) = x      if x > 0
    f(x) = a x    otherwise
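A minimal numeric sketch of ReLU and leaky ReLU as defined above (the sample inputs are arbitrary):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))               # [0.  0.  0.  0.5 2. ]
print(leaky_relu(x))         # [-0.02  -0.005  0.     0.5    2.   ]
print(np.allclose(relu(3 * x), 3 * relu(x)))   # True: scale invariance for a >= 0
```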
3.6.4 Non-Linear Variants

GQ. Explain different types of variants.

(1) Softplus : A smooth approximation to the rectifier is the analytic function

    f(x) = ln(1 + e^x)

called the 'softplus' or 'SmoothReLU' function. If we include a sharpness parameter k, then

    f(x) = ln(1 + e^(kx)) / k

The derivative of softplus is

    f'(x) = e^x / (1 + e^x) = 1 / (1 + e^(−x))

(dividing numerator and denominator by e^x). Thus the logistic sigmoid function is a smooth approximation of the derivative of the rectifier.
• The multivariable generalisation of single-variable softplus is the LogSumExp function with the first argument set to zero :

    LSE₀⁺(x_1, ..., x_n) = LSE(0, x_1, ..., x_n) = ln(1 + e^(x_1) + ... + e^(x_n))

• The LogSumExp function itself is

    LSE(x_1, ..., x_n) = ln(e^(x_1) + ... + e^(x_n))

and its gradient is the softmax.

(2) Gaussian Error Linear Unit (GELU) : It is a smooth approximation to the rectifier. It has a bump for x < 0 and is non-monotonic. It serves as the default activation for some standard models. It is defined as

    f(x) = x · Φ(x)

where Φ(x) is the cumulative distribution function of the standard normal distribution.

(3) ELU : ELU is the exponential linear unit. Using it, mean activations are made closer to zero, which speeds up learning, and we can obtain higher classification accuracy than with ReLU. It is defined as

    f(x) = x                if x > 0
    f(x) = a (e^x − 1)      otherwise

where a ≥ 0 is a hyper-parameter. The ELU can be seen as a smoothed version of a shifted ReLU, which has the form f(x) = max(−a, x), where a ≥ 0.

3.6.5 One Layer of a Convolutional Network

• Convolution with the first filter gives one 4 × 4 output, and convolving with the second filter gives a different 4 × 4 output.
• To convert this into a convolutional neural network layer, we add a scalar quantity, the 'bias'. The bias is added to every element in the 4 × 4 output, i.e. to all these 16 elements. Then we apply the ReLU activation function.
• We do the same with the output we got by applying the second 3 × 3 filter (kernel) : again we add a different bias and apply the ReLU activation function. After applying the bias and the ReLU function, the dimensions of the outputs remain the same, so we have two 4 × 4 matrices. We then repeat these steps, and we get one layer of a convolutional neural network.

3.6.6 Fully Connected Layers

• Fully connected layers multiply the input by a weight matrix and then add a bias vector.
• The convolutional layers are followed by one or more fully connected layers, and all neurons in a fully connected layer connect to all neurons in the previous layer.

3.6.7 Regularisation

To prevent overfitting in the training phase, there are different ways of controlling the training of CNNs. In particular, they are L2/L1 regularisation and max-norm constraints :
(1) L2 regularisation : This regularisation is implemented by directly penalising the squared magnitude of all parameters in the objective. Using the gradient-descent parameter update, every weight is decayed linearly towards zero by the L2 regularisation method.
(2) L1 regularisation : Here, in this method, we add the term λ|w| for each weight w to the objective. It is also possible to combine L1 regularisation with L2 regularisation as λ₁|w| + λ₂w², which is known as elastic-net regularisation.
(3) Max-norm constraint : Here we enforce an absolute upper bound on the magnitude of the weight vector of every neuron and use projected gradient descent to enforce the constraint. This is altogether a different form of regularisation.
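A small sketch of how the L2 penalty and the max-norm constraint from 3.6.7 can be applied in PyTorch; the layer, λ and the max-norm bound c are made-up illustrations.

```python
import torch
import torch.nn as nn

layer = nn.Linear(20, 10)
lam, c = 1e-4, 3.0               # hypothetical L2 strength and max-norm bound

# (1) L2 regularisation: add the squared magnitude of the weights to the loss.
x = torch.randn(8, 20)
data_loss = layer(x).pow(2).mean()               # placeholder data loss
loss = data_loss + lam * layer.weight.pow(2).sum()
loss.backward()

# (3) Max-norm constraint: after the parameter update (omitted here), project
# each neuron's incoming weight vector back onto the ball of radius c.
with torch.no_grad():
    norms = layer.weight.norm(dim=1, keepdim=True)            # one norm per neuron
    layer.weight.mul_(torch.clamp(c / (norms + 1e-12), max=1.0))
```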
3.7 POOLING LAYERS

GQ. Explain pooling layers and its different types.

(1) Pooling layers are used to reduce the dimensions of the feature maps. They reduce the number of parameters to learn and the amount of computation performed in the network.
(2) The pooling layer summarises the features present in a region of the feature map generated by a convolutional layer.
(3) A pooling layer is another building block of a CNN. When processing multi-channel input data, the pooling layer pools each input channel separately.
(4) The pooling layer reduces the dimensions of the data by combining the outputs of neuron clusters. The pooling layer is used to reduce the spatial dimensions, but not the depth, of a convolutional neural network.
(5) The pooling layer operates on each feature map independently.
(6) Pooling is basically 'downscaling' the image obtained from the previous layers. It can be compared to shrinking an image to reduce its pixel density.

3.7.1 Average Pooling

There are two types of widely used pooling in CNN layers : 1. Max pooling  2. Average pooling

1. Max pooling
• The popular kind of pooling is max pooling. Suppose we want to pool by a ratio of 2 : it implies that the height and width of the image will become half of their original values.
• So we have to compress every 4 pixels (a 2 × 2 matrix) and map them into a new single pixel without losing the important data of the discarded pixels.
• Max pooling is done by taking the largest value of these 4 pixels. Thus one new pixel represents the old pixels by using the largest of these 4 values.
• This is repeated for every group of 4 pixels throughout the whole image.
• Here the quality of an image is reduced to avoid computational load on the system. By reducing the quality of the image, we can increase the 'depth' of the layer, to have more features in the reduced image.
• Reducing the image size helps the convolution layer after the pooling layer to look for 'higher-level features'. It means that the convolution layer looks at the picture more as a whole.

2. Average pooling
• Average pooling is different from max pooling. Average pooling retains the 'less important' information about the elements of a block, or pool.
• Max pooling chooses the maximum value and treats the other values as less important (as shown in Fig. 3.7.1).
• But 'less important' information is also sometimes useful in a variety of situations.

Fig. 3.7.1 : Max pooling (2 × 2 pooling of a feature map; e.g. the average of the block {4, 3, 5, 9} is 5.25, while max pooling keeps 9)
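A minimal sketch of 2 × 2 max and average pooling on a toy 4 × 4 feature map (the values are chosen arbitrarily):

```python
import numpy as np

fmap = np.array([[4., 3., 1., 0.],
                 [5., 9., 2., 6.],
                 [7., 1., 8., 3.],
                 [2., 0., 4., 4.]])

# Group the 4 x 4 map into non-overlapping 2 x 2 blocks.
blocks = fmap.reshape(2, 2, 2, 2).swapaxes(1, 2)   # shape (2, 2, 2, 2)

print(blocks.max(axis=(2, 3)))    # max pooling     -> [[9. 6.] [7. 8.]]
print(blocks.mean(axis=(2, 3)))   # average pooling -> [[5.25 2.25] [2.5  4.75]]
```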
3.8 VARIANTS OF THE BASIC CONVOLUTION FUNCTION

First we define a few parameters that define a convolutional layer :
(1) Kernel size : The kernel size defines the field of view of the convolution. A common choice for 2-D is 3, that is a 3 × 3 kernel.
(2) Stride : The stride defines the step size of the kernel when traversing the image. While its default is usually 1, we can use a stride of 2 for down-sampling an image, similar to max pooling.
(3) Padding : The padding defines how the border of a sample is handled. A (half-)padded convolution will keep the spatial output dimensions equal to the input, whereas an unpadded convolution will crop away some of the borders if the kernel is larger than 1.
(4) Input and output channels : A convolutional layer takes a certain number of input channels (I) and calculates a specific number of output channels (O). The needed parameters for such a layer can be calculated by I · O · K, where K equals the number of values in the kernel.

3.8.1 Dilated Convolutions

• Dilated convolutions introduce another parameter to convolutional layers, called the dilation rate. This gives a spacing between the values in a kernel.
• A 3 × 3 kernel with a dilation rate of 2 will have the same field of view as a 5 × 5 kernel, while using only 9 parameters.
• This delivers a wider field of view at the same computational cost.
• Dilated convolutions are particularly popular in the field of real-time segmentation.

3.8.2 Transposed Convolutions

• We note that a transposed convolution is not a deconvolution. Deconvolutions do exist, but they are not common in the field of deep learning. A transposed convolution is somewhat similar because it produces the same spatial resolution a hypothetical deconvolutional layer would, but the actual mathematical operation performed on the values is different.
• A transposed convolutional layer carries out a regular convolution but reverts its spatial transformation. We consider an example :
  - An image of 5 × 5 is fed into a convolutional layer. The stride is set to 2, the padding is deactivated and the kernel is 3 × 3. This results in a 2 × 2 image.
  - To reverse the process, a transposed convolution produces an output of a 5 × 5 image. It performs a normal convolution operation.
  - It reconstructs the spatial resolution from before and then performs a convolution. (It is actually not a mathematical inverse, but for encoder-decoder architectures it is still very useful.)
• This way we can combine the upscaling of an image with a convolution.

3.8.3 Separable Convolutions

• In a separable convolution, we can split the kernel operation into multiple steps. We express a convolution as y = convolution(x, k), where y is the output image, x is the input image and k is the kernel.
• Let us assume that k can be calculated as the outer product k = k₁ · k₂. This makes it a separable convolution, because instead of doing a 2-D convolution with k, we could get the same result by doing two 1-D convolutions with k₁ and k₂.
• In image processing we use the Sobel kernels (X and Y filters) :

    X filter :  [ +1  0  −1 ]          Y filter :  [ +1  +2  +1 ]
                [ +2  0  −2 ]                      [  0   0   0 ]
                [ +1  0  −1 ]                      [ −1  −2  −1 ]

• We can get the X kernel by multiplying the vectors [1, 2, 1]ᵀ and [1, 0, −1]. This requires 6 instead of 9 parameters while doing the same operation.
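A quick numeric check that the Sobel X kernel separates into two 1-D filters (NumPy; the sign convention follows the text above):

```python
import numpy as np

col = np.array([[1.], [2.], [1.]])      # 1-D smoothing filter (3 parameters)
row = np.array([[1., 0., -1.]])         # 1-D differencing filter (3 parameters)

sobel_x = col @ row                     # outer product -> full 3 x 3 kernel
print(sobel_x)
# [[ 1.  0. -1.]
#  [ 2.  0. -2.]
#  [ 1.  0. -1.]]
# 3 + 3 = 6 parameters instead of 9, and two 1-D convolutions with `col`
# and `row` give the same result as one 2-D convolution with `sobel_x`.
```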
3.9 LOCALLY CONNECTED LAYER

• In some cases, we do not actually want to use convolution, but rather locally connected layers :
(1) The adjacency matrix in the graph of our MLP is the same, but every connection has its own weight, specified by a 6-D tensor W.
(2) The indices into W are respectively :
    (a) i, the output channel,
    (b) j, the output row,
    (c) k, the output column,
    (d) l, the input channel,
    (e) m, the row offset within the input, and
    (f) n, the column offset within the input.
• The linear part of a locally connected layer is then given by

    Z_{i,j,k} = Σ_{l,m,n}  V_{l, j+m−1, k+n−1} · W_{i,j,k,l,m,n}

Fig. 3.9.1 : Locally connected layers (also called unshared convolution) : local connections, convolution, full connections

• Use of locally connected layers : Locally connected layers are useful when we know that each feature should be a function of a small part of space, but there is no reason to think that the same feature should occur across all of space.
• E.g. if we want to tell whether an image is a picture of a face, we only need to look for the mouth in the bottom half of the image.

• Constraining outputs
  - Constrain each output channel i to be a function of only a subset of the input channels.
  - Make the first m output channels connect to only the first m input channels, the second m output channels connect to only the second m input channels, and so on.
  - Modelling interactions between few channels allows fewer parameters, which (1) reduces memory, increases statistical efficiency and reduces the computation for forward/back-propagation, and (2) accomplishes these goals without reducing the number of hidden units.

Fig. 3.9.2 : Input and output channels in a network with further restricted connectivity (channel coordinates)

• Tiled convolution
  - Tiled convolution is a compromise between a convolutional layer and a locally connected layer.
  - Rather than learning a separate set of weights at every spatial location, we learn a set of kernels that we rotate through as we move through space. This means that immediately neighbouring locations will have different filters, as in a locally connected layer, but the memory requirement for storing the parameters increases only by a factor of the size of this set of kernels, rather than by the size of the entire output feature map.

Fig. 3.9.3 : Comparison of locally connected layers, tiled convolution and standard convolution

• Defining tiled convolution algebraically : Let K be a 6-D tensor, where two of the dimensions correspond to different locations in the output map. Rather than having a separate index for each location in the output map, output locations cycle through a set of t different choices of kernel stack in each direction :

    Z_{i,j,k} = Σ_{l,m,n}  V_{l, j+m−1, k+n−1} · K_{i, l, m, n, j%t+1, k%t+1}

where % is the modulo operation, with t % t = 0, (t + 1) % t = 1, and so on. If t is equal to the output width, this is the same as a locally connected layer.

• Operations to implement convolutional nets
  - Besides convolution, other operations are necessary to implement a convolutional network.
  - To perform learning, we need to compute the gradient with respect to the kernel, given the gradient with respect to the outputs. In some simple cases, this operation can be performed using the convolution operation, but when the stride is greater than 1, we do not have this property.

• Implementation of convolution
  - Convolution is a linear operation and can thus be described as a matrix multiplication, if we first reshape the input tensor into a flat vector.
  - The matrix involved is a function of the convolution kernel.
  - The matrix is sparse, and each element of the kernel is copied to several elements of the matrix.
  - This view helps us to derive some of the other operations needed to implement a convolutional network.
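To illustrate the 'convolution as a sparse matrix multiplication' view in one dimension, here is a minimal sketch (a 1-D valid correlation of a length-6 signal with a length-3 kernel; the numbers are arbitrary):

```python
import numpy as np

x = np.array([1., 2., 0., -1., 3., 2.])   # flattened input vector (length 6)
k = np.array([1., 0., -1.])               # kernel (length 3)

# Direct sliding-window computation: output length 6 - 3 + 1 = 4.
direct = np.array([np.dot(k, x[i:i + 3]) for i in range(4)])

# Same operation as a sparse (Toeplitz-like) matrix built from the kernel.
M = np.zeros((4, 6))
for i in range(4):
    M[i, i:i + 3] = k         # each kernel value is copied to several entries
matrix_version = M @ x

print(direct)                 # [ 1.  3. -3. -3.]
print(matrix_version)         # identical result
```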
3.10 CONVOLUTION NEURAL NETWORK (CNN) ARCHITECTURE

GQ. Explain CNN architecture.

• A Convolution Neural Network (CNN) is a feed-forward neural network (FNN). So far, CNNs have established extraordinary performance in image search services, voice recognition and natural language processing (NLP).
• We have already seen that a regular multilayer perceptron gives good results for small images, but it breaks down for larger images. For example, if the first layer has 1000 neurons, then there will be 10 million connections.
• CNNs solve this problem using partially connected layers. A CNN has fewer parameters than a deep neural network (DNN), hence it requires less training data.
• In addition, a CNN can detect any particular feature anywhere on the image, but a DNN can detect it only in one particular location. Since images generally have repetitive features, CNNs can generalise much better than DNNs for image-processing tasks such as classification.
• A CNN's architecture has prior knowledge of how pixels are organised.
• Lower layers identify features in small areas of the images, while higher layers combine features of the lower layers into larger features; in the case of DNNs this doesn't work. Thus, in short, a CNN is a class of neural networks that specialises in processing data that has a grid-like topology, such as an image.
• Each neuron works in its own receptive field and is connected to other neurons in such a way that together they cover the entire visual field.
• A CNN design begins with feature extraction and finishes with classification.

Fig. 3.10.1 : DNN neural network (input layer, hidden layers, output layer); on the right, a convolution net arranges its neurons in 3 dimensions, as seen in one of the layers

• Definition : Convolution is a mathematical operation. In a 'convolution neural network', convolution is employed in the network.

3.10.1 Architecture

• A conventional neural network consists of an input layer, hidden layers and an output layer (as shown in the figure). The middle layers are called hidden layers because their inputs and outputs are governed by the activation function and convolution.
• In a CNN, the input is a tensor with shape : (number of inputs) × (input height) × (input width) × (input channels).
• After passing through a convolutional layer, the image becomes a feature map, also called an activation map, with shape : (number of inputs) × (feature map height) × (feature map width) × (feature map channels).
• The input is convolved in convolutional layers and the result is forwarded to the next layer. Each convolutional neuron processes data only for its receptive field.
• A fully connected feed-forward neural network architecture is impractical for large inputs : it requires a very high number of neurons. For example, a fully connected layer for an image of size 100 × 100 has 10,000 weights for each neuron in the second layer.
• Instead, using convolution with 5 × 5 tiling regions, only 25 learnable parameters are required. Also, convolutional neural networks are ideal for data with a grid-like topology, since spatially local features are taken into account during convolution.

3.10.2 Functions of Hidden Layers

GQ. Explain function of hidden layers.

1. The first hidden layer performs convolution. Each feature map in the hidden layer consists of neurons, and each neuron is assigned a receptive field.
2. The second hidden layer performs averaging and subsampling. The layer consists of feature maps, and the neurons of each feature map have a receptive field. It has a trainable bias, a trainable coefficient and a sigmoid function; these control the operating point of the neuron.
3. The next hidden layer performs a second convolution. Again each feature map in this hidden layer consists of neurons, and each neuron has connections with the previous hidden layer.
4. The next hidden layer performs averaging and subsampling.
5. The output layer performs the final stage of convolution. Each neuron is assigned a receptive field and is assigned to one of the possible characters (classes).

The layers in the network alternate between convolution and subsampling, and we get a 'bipyramidal' effect : at each layer, i.e. either a subsampling or a convolutional layer, the number of feature maps is increased while the spatial resolution is reduced. Weight sharing reduces the number of free parameters in the network compared to the synaptic connections in a multilayer perceptron. Also, by the use of weight sharing, implementation of the convolutional network in parallel form is possible.
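The alternating convolution/subsampling structure described in 3.10.2 can be sketched in PyTorch as follows; the channel counts, the 28 × 28 input size and the 10 output classes are illustrative assumptions, not the textbook's exact network.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # first hidden layer: convolution
    nn.ReLU(),
    nn.AvgPool2d(2),                             # averaging and subsampling
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # second convolution
    nn.ReLU(),
    nn.AvgPool2d(2),                             # averaging and subsampling
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # fully connected output stage
)

x = torch.randn(1, 1, 28, 28)                    # one 28 x 28 single-channel image
print(model(x).shape)                            # torch.Size([1, 10])
```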
3.10.3 Design of Multilayer Perceptron

GQ. Explain in detail Multilayer Perceptron.

• Using a convolutional network, a multilayer perceptron of manageable size can learn a complex, high-dimensional, nonlinear mapping.
• Through the training set, the synaptic weights and bias levels can be learned by the simple back-propagation algorithm.
• The back-propagation algorithm is the keystone algorithm for multilayer perceptrons. The partial derivatives of the cost function with respect to the free parameters of the network are determined by back-propagating the error signals of the output neurons through the network; hence the algorithm is called the back-propagation algorithm.
• Updating the synaptic weights and biases of the multilayer perceptron, and deriving all partial derivatives of the cost function with respect to the free parameters, are the basic factors for evaluating the power of the algorithm.
• Details involved in the design of a multilayer perceptron are as follows :
(1) All the neurons in the network are nonlinear, and the nonlinearity is achieved by using a sigmoid function; the common choices are (a) the non-symmetric logistic function, and (b) the antisymmetric hyperbolic tangent function.
(2) Each neuron has its own hyperplane in decision space, and the combination of hyperplanes formed by all the neurons in the network is iteratively adjusted by a supervised learning process, so that patterns from different classes are separated with the fewest classification errors.
(3) For pattern classification, the random (stochastic) back-propagation algorithm is used to perform the training.
(4) In nonlinear regression, the output range of the multilayer perceptron should be sufficiently large.

3.10.4 Performance Measure

The algorithm is based on minimising a cost function. But minimising the cost function may lead to optimising only an intermediate quantity, and this may not be the aim of the system. Hence a 'reward-to-volatility' ratio, as a performance measure of risk-adjusted return, is more appreciable than the average error E_av alone.

3.10.5 Input Layer

The input layer passes the data directly to the first hidden layer. Here the data is multiplied by the first hidden layer's weights. Then the input layer passes the data through the activation function, and then it passes on.

3.11 POPULAR CNN ARCHITECTURE - ALEXNET

Before we proceed with popular CNN architectures, we define some terminology :
(i) A wider network means more feature maps (filters) in the convolutional layers.
(ii) A deeper network means more convolutional layers.
(iii) A network with higher resolution means that it processes input images with larger width and height (spatial resolutions). That way the produced feature maps will have higher spatial dimensions.

• AlexNet
AlexNet is made up of 5 convolutional layers, starting from an (11 × 11) kernel.
• It was the first architecture that employed max-pool layers and ReLU activation functions, and used dropout for its 3 enormous linear (fully connected) layers.
• The network was used for image classification with 1000 possible classes; nowadays we can implement it in about 35 lines of PyTorch code.
• It was the first convolutional model that was successfully trained on ImageNet, and at that time it was much more difficult to implement such a model in CUDA. To avoid overfitting, dropout is heavily used in the enormous linear transformations.

• InceptionNet / GoogLeNet (2014)
• The core building block, called the inception module, processes the input in parallel branches (1 × 1, 3 × 3 and 5 × 5 convolutions together with max pooling) and concatenates their outputs.
• We need the 1 × 1 convolutional layer to 'project' the feature maps into fewer channels, which saves computational power; and with these extra resources the network can be made wider and deeper.

Fig. 3.11.1 : Inception module

• The whole architecture is called GoogLeNet or InceptionNet. In essence, they try to approximate a sparse connectivity pattern with normal dense layers. Note that only a small number of neurons are effective; they satisfy the Hebbian principle : 'Neurons that fire together, wire together.'
• In general, a large kernel is preferred for information that resides globally, and a smaller kernel is preferred for information that is distributed locally.
• Besides, 1 × 1 convolutions are used to compute reductions before the computationally expensive convolutions (3 × 3 and 5 × 5).
• The InceptionNet / GoogLeNet architecture consists of 9 inception modules stacked together, with max-pooling layers between them (to halve the spatial dimensions). It consists of 22 layers (27 with the pooling layers). It uses global average pooling after the last inception module.
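A hedged PyTorch sketch of an inception-style module with the four parallel branches described above; the channel counts are arbitrary and the layout is simplified relative to the original GoogLeNet paper.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # 1x1 convolutions 'project' the input to fewer channels before the
        # expensive 3x3 and 5x5 convolutions.
        self.branch1 = nn.Conv2d(in_ch, 16, kernel_size=1)
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, 16, kernel_size=1),
                                     nn.Conv2d(16, 24, kernel_size=3, padding=1))
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, 8, kernel_size=1),
                                     nn.Conv2d(8, 12, kernel_size=5, padding=2))
        self.branch_pool = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                         nn.Conv2d(in_ch, 12, kernel_size=1))

    def forward(self, x):
        # All branches preserve the spatial size, so we concatenate on channels.
        return torch.cat([self.branch1(x), self.branch3(x),
                          self.branch5(x), self.branch_pool(x)], dim=1)

x = torch.randn(1, 32, 28, 28)
print(InceptionModule(32)(x).shape)    # torch.Size([1, 64, 28, 28])
```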
3.12 THE INTERLEAVING BETWEEN LAYERS

• In the standard fully supervised convolutional net models, the top few layers of the network are fully connected networks and the final layer is a softmax classifier.
• Such a model is also described as multiple interleaved layers of convolutions, nonlinear activations, local response normalisations and max-pooling layers.
• It means that we create a CNN by 'alternating between' these layer types (e.g. Conv → Pool → Conv → Pool → ...).
• Or, with the wording of the definition, we 'place one layer' of each type 'between the other existing layers' (i.e. start with a couple of convolution layers, then place a Pool layer between two convolutional layers, and so on).
• The real architecture would look something like this :
    Conv → ReLU → LRN → Pool → Conv → ReLU → LRN → Pool → Conv → ...
• So, this 'interleaved' CNN is a network where the different layer types are applied in 'series', layer by layer.
• As a comparison, in an 'Inception'-type CNN, one can apply the different layer types in 'parallel' (the whole parallel group of branches counts as one layer).

Fig. 3.12.1 : Parallel layers

3.13 LOCAL RESPONSE NORMALISATION (LRN)

• Normalisation is used in deep neural networks to compensate for the unbounded nature of activation functions like ReLU, ELU and others.
• The outputs of these activation functions are not limited to a finite range (like [−1, 1] for tanh), but can rise as far as the training allows.
• To stop unbounded activations from blowing up the output layer values, normalisation is applied, typically just before the activation function.
• Local Response Normalisation (LRN) was first utilised in the AlexNet architecture, with ReLU serving as the activation function rather than the more common tanh and sigmoid. In addition, LRN was used to encourage lateral inhibition.
• Local Response Normalisation is a normalisation layer that makes use of the idea of lateral inhibition.
• Lateral inhibition is a neurobiological concept that outlines how stimulated neurons repress nearby neurons. It intensifies the sensory experience and produces maxima, in the form of spikes, that create contrast in the area.
• One can normalise within or between channels when using LRN to normalise a convolutional neural network.
• The LRN primitive conducts forward and backward normalisation of local responses.

3.13.1 Inter-Channel LRN

Inter-channel LRN, as used in AlexNet, normalises an activation a^i_{x,y} (channel i at spatial position (x, y)) across its n neighbouring channels :

    b^i_{x,y} = a^i_{x,y} / ( k + α · Σ_{j = max(0, i−n/2)}^{min(N−1, i+n/2)} (a^j_{x,y})² )^β

where N is the total number of channels, and k, n, α, β are hyper-parameters.
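A minimal numeric sketch of the inter-channel LRN formula above; the tensor and the hyper-parameter values are arbitrary illustrations.

```python
import torch

n, k, alpha, beta = 5, 2.0, 1e-4, 0.75        # AlexNet-style hyper-parameters
x = torch.randn(1, 8, 4, 4)                    # (batch, N = 8 channels, height, width)

N = x.shape[1]
out = torch.empty_like(x)
for i in range(N):
    lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
    # Sum of squared activations over the neighbouring channels at each (x, y).
    denom = (k + alpha * (x[:, lo:hi + 1] ** 2).sum(dim=1)) ** beta
    out[:, i] = x[:, i] / denom

print(out.shape)    # torch.Size([1, 8, 4, 4]) : same shape, rescaled channel-wise
```

PyTorch also provides torch.nn.LocalResponseNorm(size, alpha, beta, k); note that, per its documentation, it divides α by the window size internally, so its α corresponds to n·α in the formula written above.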