SCT Unit2
Q. Implement the AND NOT function using a McCulloch-Pitts neuron. Consider binary data, an excitatory weight of 1 and an inhibitory weight of -1.

McCulloch-Pitts neural net: McCulloch-Pitts neurons are connected by directed weighted paths. The activation of an M-P neuron is binary, that is, at any time step the neuron may fire or may not fire. The weights associated with the communication links may be excitatory (positive weight) or inhibitory (negative weight). There is a fixed threshold for each neuron, and if the net input to the neuron is greater than the threshold, the neuron fires. Note that any nonzero inhibitory input prevents the neuron from firing. M-P neurons are most widely used for logic functions.

ANDNOT function truth table:

x1  x2  y
0   0   0
0   1   0
1   0   1
1   1   0

In the case of the ANDNOT function, the response is true if the first input is true and the second input is false; for all other combinations the response is false. The function therefore gives an output only when x1 = 1 and x2 = 0. The weights can be decided only after the following analysis.

Case 1: Assume both weights are excitatory, i.e., w1 = w2 = 1. For the four inputs, calculate the net input using yin = x1w1 + x2w2:

(1, 1): yin = 1×1 + 1×1 = 2
(1, 0): yin = 1×1 + 0×1 = 1
(0, 1): yin = 0×1 + 1×1 = 1
(0, 0): yin = 0×1 + 0×1 = 0

From the calculated net inputs, it is not possible to fire the neuron for input (1, 0) only, since (0, 1) produces the same net input. Hence these weights are not suitable.

Case 2: Assume one weight is excitatory and the other inhibitory, i.e., w1 = 1, w2 = -1. Now calculate the net input:

(1, 1): yin = 1×1 + 1×(-1) = 0
(1, 0): yin = 1×1 + 0×(-1) = 1
(0, 1): yin = 0×1 + 1×(-1) = -1
(0, 0): yin = 0×1 + 0×(-1) = 0

From the calculated net inputs, it is now possible to fire the neuron for input (1, 0) only, by fixing a threshold of 1, i.e., θ ≥ 1 for the Y unit. Thus w1 = 1, w2 = -1, θ ≥ 1.

Note: The value of θ is calculated using θ ≥ n·w − p (n = number of inputs, w = excitatory weight, p = magnitude of the inhibitory weight), so θ ≥ 2×1 − 1, i.e., θ ≥ 1.

Thus the output of neuron Y can be written as

y = f(yin) = 1 if yin ≥ 1, 0 if yin < 1

Q. Write the training algorithm/flowchart of the McCulloch-Pitts neuron.

The first computational model of a neuron was proposed by Warren McCulloch (a neuroscientist) and Walter Pitts (a logician) in 1943. An M-P neuron is not trained iteratively: the weights (excitatory +1, inhibitory -1) and the threshold θ are fixed by analysis, as illustrated in the ANDNOT example above. The procedure is to choose candidate weights, compute the net input for every input pattern, and fix θ so that the neuron fires only for the desired patterns.
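Below is a minimal Python sketch of the worked example above; the function name mp_neuron and its defaults are illustrative, not from the original text.

```python
# McCulloch-Pitts neuron for the ANDNOT function.
# Weights and threshold are fixed by analysis (Case 2 above):
# w1 = 1 (excitatory), w2 = -1 (inhibitory), threshold theta = 1.

def mp_neuron(x1, x2, w1=1, w2=-1, theta=1):
    """Fire (return 1) when the net input reaches the threshold."""
    y_in = x1 * w1 + x2 * w2          # net input yin = x1*w1 + x2*w2
    return 1 if y_in >= theta else 0

# Verify against the ANDNOT truth table.
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, mp_neuron(x1, x2))  # fires only for (1, 0)
```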
Q. List and explain all the activation functions used in ANN.

3.2.3 Activation Function

An activation function f is applied over the net input to calculate the output of an ANN. The choice of activation function depends on the type of problem to be solved by the network. The most common functions are:

1. Identity function: a linear function, defined as f(x) = x for all x. The linear activation function, also known as "no activation" or the "identity function", is one where the activation is proportional to the input: it does nothing to the weighted sum of the input and simply passes on the value it was given. A linear activation function has two major problems: backpropagation cannot be used, as the derivative of the function is a constant with no relation to the input x; and all layers of the neural network collapse into one, since no matter how many layers there are, the last layer is still a linear function of the first. A linear activation function essentially turns the neural network into a single layer.

2. Binary step function: defined as f(x) = 1 if x ≥ θ, 0 if x < θ, where θ represents the threshold value. The input fed to the activation function is compared to the threshold: if the input is greater, the neuron is activated; otherwise it is deactivated, and its output is not passed on to the next hidden layer. The binary step function has two main limitations: it cannot provide multi-valued outputs, so for example it cannot be used for multi-class classification problems; and its gradient is zero, which hinders the backpropagation process.

3. Bipolar step function: defined as f(x) = 1 if x ≥ θ, -1 if x < θ, where θ represents the threshold value. The bipolar activation function converts the activation level of a unit (neuron) into a ±1 output signal. It is also known as a transfer function or squashing function, due to its capability to squeeze the amplitude range of the output signal to a finite value [13].

4. Sigmoidal functions: these are used in back-propagation nets and are of two types.
Binary sigmoid (unipolar sigmoid) function: defined by f(x) = 1 / (1 + e^(-λx)), where λ is the steepness parameter. The range of this function is from 0 to 1.
Bipolar sigmoid function: defined by f(x) = (1 - e^(-λx)) / (1 + e^(-λx)), where λ is the steepness parameter. The range of this function is from -1 to +1.

5. Ramp function: defined as f(x) = 1 if x > 1, x if 0 ≤ x ≤ 1, 0 if x < 0.
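A compact Python sketch of these five activation functions; the function names and the steepness default lam=1.0 are illustrative.

```python
import math

def identity(x):
    return x                                   # f(x) = x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0              # outputs 0 or 1

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1             # outputs -1 or +1

def binary_sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))    # range (0, 1)

def bipolar_sigmoid(x, lam=1.0):
    e = math.exp(-lam * x)
    return (1.0 - e) / (1.0 + e)               # range (-1, +1)

def ramp(x):
    return max(0.0, min(1.0, x))               # 0 below 0, linear on [0, 1], 1 above 1
```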
Q. Give the details on perceptron networks.

3.7 Perceptron Networks

Perceptron networks are single-layer feed-forward networks; they are the simplest perceptrons. A perceptron consists of three units: an input unit (sensory unit), a hidden unit (associator unit) and an output unit (response unit). The input units are connected to the hidden units by fixed weights having values 1, 0 or -1, assigned at random. A binary activation function is used in the input and hidden units, and the response unit has an activation of 1, 0 or -1. The output signals sent from the hidden unit to the output unit are binary. The output of the perceptron network is given by y = f(yin), where yin is the net input and f is the activation function.

Fig 3.5: Perceptron model (single-layer perceptron with inputs x1 ... xn, weights wi and output y)

Perceptron learning algorithm

The training of a perceptron is a supervised learning algorithm. The algorithm can be used for either bipolar or binary input vectors, with a fixed threshold and variable bias. The output is obtained by applying the activation function over the calculated net input; the weights are adjusted to minimize the error whenever the output does not match the desired output.

Step 0: Initialize the weights and the bias (for easy calculation they can be set to zero). Also initialize the learning rate α (0 < α ≤ 1); for simplicity, α is set to 1.
Step 1: Perform Steps 2-6 until the final stopping condition is false.
Step 2: Perform Steps 3-5 for each training pair indicated by s : t.
Step 3: The input layer containing input units is applied with identity activation functions: xi = si.
Step 4: Calculate the output of the network as yin = b + Σ xi wi, where n is the number of input neurons in the input layer. Then apply the activation over the net input to obtain the output:

y = f(yin) = 1 if yin > θ, 0 if -θ ≤ yin ≤ θ, -1 if yin < -θ

Step 5: Weight and bias adjustment. Compare the value of the actual (calculated) output and the desired (target) output. If y ≠ t, then
wi(new) = wi(old) + α·t·xi
b(new) = b(old) + α·t
else
wi(new) = wi(old)
b(new) = b(old)
Step 6: Train the network until there is no weight change. This is the stopping condition for the network. If this condition is not met, start again from Step 2.
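A minimal Python sketch of this training loop, assuming bipolar targets; the dataset (the AND function) and all names are illustrative.

```python
def perceptron_train(samples, targets, alpha=1.0, theta=0.2):
    """Train a single-layer perceptron (Steps 0-6 above)."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0                     # Step 0: zero weights and bias

    changed = True
    while changed:                            # Steps 1/6: repeat until no weight change
        changed = False
        for x, t in zip(samples, targets):    # Step 2: each training pair s : t
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))   # Step 4: net input
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
            if y != t:                        # Step 5: adjust on mismatch
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
    return w, b

# Example: learn AND with bipolar inputs and targets.
X = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
T = [1, -1, -1, -1]
print(perceptron_train(X, T))                 # converges to a separating w, b
```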
Q. Explain in detail about bidirectional associative memory.

4.10 Bidirectional Associative Memory (BAM)

Several versions of the heteroassociative recurrent neural network, or bidirectional associative memory (BAM), were developed by Kosko (1988). BAM is a type of recurrent neural network. It has two layers, input and output, and information can flow in both directions: from input to output and back from output to input. BAM is hetero-associative, meaning that given a pattern it can return another pattern, potentially of a different size. It is similar to the Hopfield network in that both are forms of associative memory.

1. The BAM network performs forward and backward associative searches for stored stimulus responses.
2. It is a type of recurrent heteroassociative pattern-matching network that encodes patterns using the Hebbian learning rule.
3. BAM neural nets can respond in either direction, from the input or the output layer.
4. It consists of two layers of neurons connected by directed weighted path connections.
5. The network dynamics involve the two layers interacting until all the neurons reach equilibrium.

Fig 4.8: Bidirectional associative memory net
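A small Python sketch of BAM encoding and recall, assuming bipolar patterns and the Hebbian outer-product rule W = Σ sᵀt; all names are illustrative, and for simplicity a zero net input is mapped to +1.

```python
import numpy as np

def bam_train(pairs):
    """Hebbian encoding: W is the sum of outer products s^T t over stored pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for s, t in pairs:
        W += np.outer(s, t)
    return W

def bam_recall(W, x, steps=10):
    """Bidirectional recall: bounce activations between layers until equilibrium."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    y = sign(x @ W)                   # forward pass: input layer -> output layer
    for _ in range(steps):
        x_new = sign(W @ y)           # backward pass: output layer -> input layer
        y_new = sign(x_new @ W)
        if np.array_equal(y_new, y) and np.array_equal(x_new, x):
            break                     # all neurons stable: equilibrium reached
        x, y = x_new, y_new
    return x, y

# Store one bipolar pattern pair and recall it from the input side.
s = np.array([1, -1, 1, -1])
t = np.array([1, 1, -1])
W = bam_train([(s, t)])
print(bam_recall(W, s))               # recovers (s, t)
```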
Q. Write in detail about the tree neural networks.

4.5 Tree Neural Networks

Definition:
The decision tree algorithm belongs to the family of supervised learning algorithms. Unlike many other supervised learning algorithms, the decision tree algorithm can be used for solving both regression and classification problems. The general motive of using a decision tree is to create a training model which can be used to predict the class or value of target variables by learning decision rules inferred from prior data (training data).

Tree neural networks are basically used for pattern recognition problems. A multilayer neural network is used at each decision-making node of a binary classification tree to extract a non-linear feature. The decision nodes are circular nodes and the terminal nodes are square nodes. The splitting rule decides whether a pattern moves to the right or the left. The algorithm consists of two phases:

1. The growing phase: a large tree is grown in this phase by recursively finding splitting rules until all the terminal nodes have nearly pure membership or cannot be split further.
2. The tree pruning phase: to avoid overfitting the data, a smaller tree is selected or the large tree is pruned.

Example: tree neural networks can be used for the waveform recognition problem.

Fig 4.4: Binary classification tree

Q. What is the function used by the radial basis function network? Draw and explain its architecture.

4.2 Radial Basis Function Network

The radial basis function network is a classification and functional approximation neural network. It uses non-linear activation functions such as sigmoidal and Gaussian functions. Since radial basis function networks have only one hidden layer, the convergence of optimization is much faster.

Radial basis function architecture: input layer, hidden (RBF) layer, output layer.

The architecture consists of two layers whose output nodes form a linear combination of the kernel (or basis) functions computed by the RBF nodes, i.e., the hidden-layer nodes. The basis function (nonlinearity) in the hidden layer produces a significant nonzero response to an input stimulus only when the input falls within a small localized region of the input space. This network can therefore also be called a localized receptive field network.
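A minimal Python sketch of an RBF network: Gaussian basis functions in the hidden layer and a linear output layer fitted by least squares. The centres, the width sigma and the toy data are illustrative.

```python
import numpy as np

def rbf_design(X, centres, sigma=1.0):
    """Hidden layer: Gaussian response, significant only near each centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy 1-D function approximation problem.
X = np.linspace(0, 1, 20).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])

centres = np.linspace(0, 1, 5).reshape(-1, 1)   # fixed RBF centres
H = rbf_design(X, centres, sigma=0.2)           # hidden-layer outputs

# Output nodes form a linear combination of the basis functions:
# solve H @ w ~= y for the output weights by least squares.
w, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.abs(H @ w - y).max())                  # small approximation error
```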
3.5 Concept of Linear Separability

Concept: sets of points in 2-D space are linearly separable if the points can be separated by a straight line. In ANN, linear separability is the concept wherein the separation is based on the network response being positive or negative. A decision line is drawn to separate the positive and negative responses; this decision line is called the linear-separable line.

Fig 3.3: Linearly separable patterns (classes S1 and S2)

The linear separability of the network is based on the decision-boundary line. If there exist weights for which all training data with the correct response +1 (positive) lie on one side of the decision boundary and all other data lie on the other side, the problem is linearly separable. For example, the AND function is linearly separable, while XOR is not.
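A small Python sketch that brute-forces a separating line over a coarse grid of candidate weights: it finds a line for AND but none for XOR. The grid values and names are illustrative.

```python
from itertools import product

def find_separating_line(points, labels):
    """Search a coarse grid of (w1, w2, b) for a decision line
    w1*x1 + w2*x2 + b = 0 whose sign matches every label."""
    grid = [-2, -1, -0.5, 0.5, 1, 2]
    for w1, w2, b in product(grid, grid, grid):
        if all((w1 * x1 + w2 * x2 + b > 0) == (t > 0)
               for (x1, x2), t in zip(points, labels)):
            return w1, w2, b
    return None                                  # no separating line on this grid

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(find_separating_line(pts, [-1, -1, -1, 1]))  # AND: a line exists
print(find_separating_line(pts, [-1, 1, 1, -1]))   # XOR: returns None
```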