SCT Unit 2
Q. Implement the ANDNOT function using a McCulloch-Pitts neuron. Consider binary data, with the excitatory weight as 1 and the inhibitory weight as -1.

McCulloch-Pitts neural net: McCulloch-Pitts neurons are connected by directed, weighted paths. The activation of an M-P neuron is binary, that is, at any time step the neuron may fire or may not fire. The weights associated with the communication links may be excitatory (weight is positive) or inhibitory (weight is negative). There is a fixed threshold for each neuron, and if the net input to the neuron is greater than the threshold, then the neuron fires. Also, any nonzero inhibitory input prevents the neuron from firing. M-P neurons are most widely used for logic functions.

ANDNOT Function:

Truth table:

x1  x2  y
0   0   0
0   1   0
1   0   1
1   1   0

In the case of the ANDNOT function, the response is true if the first input is true and the second input is false. For all other combinations, the response is false. The given function gives an output only when x1 = 1 and x2 = 0. The weights can be decided only after this analysis.

Case 1: Assume that both weights w1 and w2 are excitatory, i.e., w1 = w2 = 1. Then for the four inputs calculate the net input using yin = x1*w1 + x2*w2:

For inputs (1, 1), yin = 1*1 + 1*1 = 2
           (1, 0), yin = 1*1 + 0*1 = 1
           (0, 1), yin = 0*1 + 1*1 = 1
           (0, 0), yin = 0*1 + 0*1 = 0

From the calculated net inputs, it is not possible to fire the neuron for input (1, 0) only, since (0, 1) produces the same net input. Hence, these weights are not suitable.

Case 2: Assume one weight as excitatory and the other as inhibitory, i.e., w1 = 1, w2 = -1. Now calculate the net input:

For inputs (1, 1), yin = 1*1 + 1*(-1) = 0
           (1, 0), yin = 1*1 + 0*(-1) = 1
           (0, 1), yin = 0*1 + 1*(-1) = -1
           (0, 0), yin = 0*1 + 0*(-1) = 0

From the calculated net inputs, it is now possible to fire the neuron for input (1, 0) only, by fixing a threshold of 1, i.e., θ ≥ 1 for the Y unit. Thus w1 = 1, w2 = -1, θ ≥ 1.

Note: The value of θ is calculated using the following:

θ ≥ n*w - p
θ ≥ 2*1 - 1
θ ≥ 1

Thus, the output of neuron Y can be written as

y = f(yin) = 1, if yin ≥ 1
             0, if yin < 1

Q. Write the training algorithm/flowchart of McCulloch-Pitts neuron.

The first computational model of a neuron was proposed by Warren McCulloch (neuroscientist) and Walter Pitts (logician) in 1943. The M-P neuron is not trained in the usual sense: the weights and the threshold are fixed by analysis rather than learned, as in the ANDNOT analysis given in the previous answer.
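A minimal Python sketch (not part of the original notes) of the Case 2 solution; the weights w1 = 1, w2 = -1 and threshold θ = 1 are taken from the analysis above:

```python
def mp_andnot(x1, x2, w1=1, w2=-1, theta=1):
    """McCulloch-Pitts neuron: fires (outputs 1) when the net input
    yin = x1*w1 + x2*w2 reaches the threshold theta."""
    y_in = x1 * w1 + x2 * w2
    return 1 if y_in >= theta else 0

# Verify the ANDNOT truth table: output is 1 only for (x1, x2) = (1, 0).
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mp_andnot(x1, x2))
```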
Q. List and explain all the activation functions used in ANN.

3.2.3 Activation Function

An activation function f is applied over the net input to calculate the output of an ANN. The choice of activation function depends on the type of problem to be solved by the network. The most common functions are:

1. Identity function: It is a linear function, defined as f(x) = x for all x.

2. Binary step function: The function can be defined as

f(x) = 1, if x ≥ θ
       0, if x < θ

Here θ represents the threshold value.

3. Bipolar step function: The function can be defined as

f(x) = 1, if x ≥ θ
      -1, if x < θ

Here θ represents the threshold value.

4. Sigmoidal functions: These functions are used in back-propagation nets. They are of two types:

Binary sigmoid function: It is also known as the unipolar sigmoid function. It is defined by the equation

f(x) = 1 / (1 + e^(-λx))

Here λ is the steepness parameter. The range of this sigmoid function is from 0 to 1.

Bipolar sigmoid function: This function is defined as

f(x) = (1 - e^(-λx)) / (1 + e^(-λx))

Here λ is the steepness parameter. The range of this sigmoid function is from -1 to +1.

5. Ramp function: The ramp function is defined as

f(x) = 1, if x > 1
       x, if 0 ≤ x ≤ 1
       0, if x < 0

The graphical representation of each activation function is shown in the accompanying figures; the notes below summarize them.

Identity (linear) activation function: The linear activation function, also known as "no activation" or the "identity function" (multiplied by 1.0), is one where the activation is proportional to the input. The function does nothing to the weighted sum of the input; it simply returns the value it was given. However, a linear activation function has two major problems:
- It is not possible to use backpropagation, as the derivative of the function is a constant and has no relation to the input x.
- All layers of the neural network will collapse into one if a linear activation function is used. No matter the number of layers in the neural network, the last layer will still be a linear function of the first layer, so a linear activation function essentially turns the neural network into a single layer.

Binary step function: The binary step function depends on a threshold value that decides whether a neuron should be activated or not. The input fed to the activation function is compared to the threshold; if the input is greater than it, the neuron is activated, else it is deactivated, meaning that its output is not passed on to the next hidden layer. Some of its limitations:
- It cannot provide multi-value outputs; for example, it cannot be used for multi-class classification problems.
- The gradient of the step function is zero, which causes a hindrance in the backpropagation process.

Bipolar step function: The bipolar activation function is used to convert the activation level of a unit (neuron) into an output signal. It is also known as a transfer function or squashing function, due to its capability to squeeze the amplitude range of the output signal to some finite value [13].
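A compact Python sketch of the functions listed above (my own illustration; the defaults θ = 0 and λ = 1 are assumed values):

```python
import math

def identity(x):
    return x                                  # f(x) = x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0             # output in {0, 1}

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1            # output in {-1, 1}

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))   # range (0, 1)

def bipolar_sigmoid(x, lam=1.0):
    e = math.exp(-lam * x)
    return (1.0 - e) / (1.0 + e)              # range (-1, 1)

def ramp(x):
    return 1.0 if x > 1 else (x if x >= 0 else 0.0)

print(binary_sigmoid(0.5), bipolar_sigmoid(0.5), ramp(0.5))
```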
Q. Give the details on perceptron networks.

3.7 Perceptron Networks

Perceptron networks are single-layer feed-forward networks; they are the simplest perceptrons. A perceptron consists of three units: the input unit (sensory unit), the hidden unit (associator unit) and the output unit (response unit). The input units are connected to the hidden units with fixed weights having values 1, 0 or -1, assigned at random. The binary activation function is used in the input and hidden units. The response unit has an activation of 1, 0 or -1. The output signals sent from the hidden unit to the output unit are binary. The output of the perceptron network is given by y = f(yin), where f is the activation function and yin is the net input.

Fig 3.5: Perceptron model (single-layer perceptron with inputs x1, x2, ..., xn, weights wij, and output y)

Perceptron Learning Algorithm

The training of a perceptron is supervised learning. The algorithm can be used for either bipolar or binary input vectors, with a fixed threshold and a variable bias. The output is obtained by applying the activation function over the calculated net input. The weights are adjusted to minimize the error whenever the output does not match the desired output.

Step 0: Initialize the weights and the bias (for easy calculation they can be set to zero). Also initialize the learning rate α (0 < α ≤ 1); for simplicity, α may be set to 1.
Step 1: Perform Steps 2-6 until the final stopping condition is false.
Step 2: Perform Steps 3-5 for each training pair indicated by s : t.
Step 3: The input layer containing input units is applied with identity activation functions: xi = si.
Step 4: Calculate the output of the network. To do so, first obtain the net input:

yin = b + Σ xi*wi  (sum over i = 1 to n)

where n is the number of input neurons in the input layer. Then apply the activation over the calculated net input to obtain the output:

y = f(yin) =  1, if yin > θ
              0, if -θ ≤ yin ≤ θ
             -1, if yin < -θ

Step 5: Weight and bias adjustment: Compare the value of the actual (calculated) output and the desired (target) output.
If y ≠ t, then
    wi(new) = wi(old) + α*t*xi
    b(new) = b(old) + α*t
else
    wi(new) = wi(old)
    b(new) = b(old)
Step 6: Train the network until there is no weight change. This is the stopping condition for the network. If this condition is not met, then start again from Step 2.
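A minimal Python sketch of this training loop (my own illustration, not from the notes), shown learning the AND function with bipolar inputs and targets; α = 1 and θ = 0.2 are assumed values:

```python
def activation(y_in, theta=0.2):
    """Step 4 activation: 1 above theta, -1 below -theta, else 0."""
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def train_perceptron(samples, alpha=1.0, theta=0.2):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0                # Step 0: zero weights and bias
    changed = True
    while changed:                       # Step 6: stop when no weight changes
        changed = False
        for x, t in samples:             # Steps 2-5: loop over training pairs
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))
            y = activation(y_in, theta)
            if y != t:                   # Step 5: adjust weights and bias
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
    return w, b

# AND function with bipolar inputs and targets.
samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
print(train_perceptron(samples))         # prints ([1.0, 1.0], -1.0)
```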
Q. Explain in detail about bidirectional associative memory.

4.10 Bidirectional Associative Memory (BAM)

Several versions of the heteroassociative recurrent neural network, or bidirectional associative memory (BAM), were developed by Kosko (1988). BAM is a type of recurrent neural network with two layers, input and output, and information can flow in both directions: from input to output and back from output to input. BAM is hetero-associative, meaning that given a pattern it can return another pattern, potentially of a different size. It is similar to the Hopfield network in that both are forms of associative memory.

1. The BAM network performs forward and backward associative searches for stored stimulus responses.
2. It is a type of recurrent heteroassociative pattern-matching network that encodes patterns using the Hebbian learning rule.
3. BAM neural nets can respond either way, from the input layer or from the output layer.
4. It consists of two layers of neurons connected by directed weighted paths. The network dynamics involve the two layers interacting until all the neurons reach equilibrium.

Fig 4.8: Bidirectional associative memory net

Q. Write in detail about the tree neural networks.

4.5 Tree Neural Networks

Definition:
- The decision tree algorithm belongs to the family of supervised learning algorithms. Unlike other supervised learning algorithms, the decision tree algorithm can be used for solving both regression and classification problems.
- The general motive of using a decision tree is to create a training model that can predict the class or value of target variables by learning decision rules inferred from prior data (training data).

These networks are basically used for pattern recognition problems. A tree neural network uses a multilayer neural network at each decision-making node of a binary classification tree to extract a non-linear feature. The decision nodes are circular nodes and the terminal nodes are square nodes. The splitting rule decides whether the pattern moves to the right or to the left.

The algorithm consists of two phases:
1. The growing phase: a large tree is grown by recursively finding the rules for splitting until all the terminal nodes have nearly pure membership or cannot be split further.
2. The tree pruning phase: to avoid overfitting of data, a smaller tree is selected, or the tree is pruned.

Example: tree neural networks can be used for the waveform recognition problem.

Fig 4.4: Binary classification tree

Q. What is the function used by Radial basis function network? Draw and explain its architecture.

4.2 Radial Basis Function Network

The radial basis function network is a classification and functional approximation neural network. It uses non-linear activation functions such as sigmoidal and Gaussian functions. Since radial basis function networks have only one hidden layer, the convergence of optimization is much faster.

Radial basis function architecture: input layer, hidden (RBF) layer, output layer.

- The architecture consists of two layers whose output nodes form a linear combination of the kernel (or basis) functions computed by the RBF nodes, i.e., the hidden layer nodes.
- The basis function (nonlinearity) in the hidden layer produces a significant nonzero response to the input stimulus it has received only when the input falls within a small localized region of the input space.
- This network can also be called a localized receptive field network.

3.5 Concept of Linear Separability

Concept: Sets of points in 2-D space are linearly separable if the points can be separated by a straight line.

In ANN, linear separability is the concept wherein the separation is based on the network response being positive or negative. A decision line is drawn to separate the positive and negative responses; this decision line is called the linearly separable line.

Fig 3.3: Linearly separable patterns (sets S1 and S2)

The linear separability of the network is based on the decision-boundary line. If there exist weights for which all training data with the correct response +1 (positive) lie on one side of the decision boundary line and all other data lie on the other side, then the problem is linearly separable.
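To make the idea concrete, here is a small sketch (my own illustration, not from the notes) that brute-forces a decision line w1*x1 + w2*x2 + b = 0 over a small integer grid, which is sufficient for two-input boolean patterns. The ANDNOT function from the first answer is linearly separable; XOR, the classic counterexample (not covered above), is not:

```python
import itertools

def find_separating_line(pos, neg, grid=range(-2, 3)):
    """Search a small integer grid for (w1, w2, b) such that
    w1*x1 + w2*x2 + b > 0 on pos points and <= 0 on neg points."""
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all(w1 * x1 + w2 * x2 + b > 0 for x1, x2 in pos) and \
           all(w1 * x1 + w2 * x2 + b <= 0 for x1, x2 in neg):
            return w1, w2, b
    return None  # no decision line exists: not linearly separable

# ANDNOT (positive response only for (1, 0)) is linearly separable:
print(find_separating_line([(1, 0)], [(0, 0), (0, 1), (1, 1)]))
# XOR has no separating line:
print(find_separating_line([(0, 1), (1, 0)], [(0, 0), (1, 1)]))
```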
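Returning to the BAM of section 4.10 above: a minimal sketch of Hebbian (outer-product) encoding and bidirectional recall, assuming bipolar patterns and the standard outer-product rule; the pattern pairs are made up for illustration:

```python
import numpy as np

def bam_train(pairs):
    """Hebbian encoding: W is the sum of outer products of (x, y) pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def sign(v):
    return np.where(v >= 0, 1, -1)    # bipolar threshold (ties go to +1)

def bam_recall(W, x):
    """Alternate forward (x -> y) and backward (y -> x) passes
    until the two layers reach equilibrium."""
    x = np.array(x)
    while True:
        y = sign(x @ W)               # forward pass
        x_new = sign(y @ W.T)         # backward pass
        if np.array_equal(x_new, x):  # equilibrium: no further change
            return x, y
        x = x_new

# Two made-up bipolar pairs of different sizes (hetero-association).
pairs = [((1, 1, -1, -1), (1, -1)), ((-1, -1, 1, 1), (-1, 1))]
W = bam_train(pairs)
print(bam_recall(W, (1, 1, -1, 1)))   # a noisy input recalls the first pair
```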