Deep Learning Notes
1. Input Layer:
• Nodes in this layer receive the raw input features and pass them forward into the network.
2. Hidden Layer(s):
• Intermediate layers between the input and output layers. Each node in a hidden layer receives input from all nodes in the previous layer.
3. Output Layer:
• Nodes in this layer produce the final output. The number of nodes in the output layer depends on the type of task.
4. Connections (Weights):
• Each connection between nodes carries a weight that determines the strength of the signal passed along it. In addition, each node has an associated bias, allowing the model to capture non-linear relationships.
Types of Learning:
1. Supervised Learning:
• The network learns from labeled examples, adjusting its weights to reduce the difference between its predictions and the known targets.
2. Unsupervised Learning:
• The network discovers patterns or structure in data without explicit labels.
3. Reinforcement Learning:
• The network learns by interacting with an environment, improving its behaviour based on rewards and penalties.
Activation Functions:
Activation functions introduce non-linearity into ANNs, allowing them to model
complex relationships between inputs and outputs. Here are some commonly used
activation functions:
• Sigmoid Function: Outputs a value between 0 and 1, often used for binary
classification problems.
• Tanh Function: Similar to the sigmoid function, but with an output range of -1 to 1.
• ReLU (Rectified Linear Unit): Simpler and computationally efficient,
outputs the input directly if it's positive, and 0 otherwise.
• Leaky ReLU: A variant of ReLU that addresses the "dying ReLU" problem
by allowing a small positive gradient for negative inputs.
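A minimal NumPy sketch of these four activation functions (the 0.01 slope used for Leaky ReLU below is an illustrative choice; other small values are common):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Like sigmoid but centered at 0, with output in (-1, 1).
    return np.tanh(x)

def relu(x):
    # Passes positive inputs through unchanged, zeroes out negatives.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Keeps a small slope (alpha) for negative inputs so the gradient
    # never becomes exactly zero (the "dying ReLU" fix).
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(sigmoid(x), tanh(x), relu(x), leaky_relu(x), sep="\n")
```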
2. Learning Rate:
• The learning rate controls how much the weights and biases are adjusted at each training step during gradient descent.
3. Bias:
• Bias is an additional parameter in each neuron that allows the model to capture non-linear relationships. It shifts the activation function and affects the output of the neuron. Similar to weights, biases are adjusted during training to improve the model's performance.
4. Threshold:
• A threshold is a value that a neuron's weighted input sum must exceed for the neuron to activate (fire).
1. One-way Flow of Information: Signals move only forward, from the input layer toward the output layer, with no feedback connections.
2. Layered Structure: Neurons are organized into layers: input layer, hidden layers, and output layer.
3. Acyclic Graph: The network structure forms an acyclic graph, ensuring that there are no loops in the connectivity.
4. Pattern Recognition: Effective in tasks involving pattern recognition, where the input data is transformed into meaningful output representations.
4. List out various applications of Artificial Neural Networks with illustrations.
1. Image Recognition: Identifying objects or patterns within images. ANNs can be
trained to recognize and classify objects in images, such as identifying animals or
objects in photos.
5. Game Playing: ANNs can be used to create intelligent game agents that learn from playing the game, adapting their strategies over time.
• Weights: Weights determine the influence of each input on the neuron's output. If a weight is large, the corresponding input has a significant impact on the output.
• Threshold: A fixed value that the summed input must exceed for the neuron to fire
(output a 1).
• Output: A binary value (0 or 1), indicating whether the neuron is activated (fired) or
not.
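A small sketch of such a threshold neuron; the weights, inputs, and threshold below are made-up illustrative values chosen so the neuron behaves like an AND gate:

```python
import numpy as np

def threshold_neuron(inputs, weights, threshold):
    # Weighted sum of the inputs; the neuron fires (outputs 1)
    # only if the sum exceeds the fixed threshold.
    total = np.dot(inputs, weights)
    return 1 if total > threshold else 0

# Hypothetical example: two inputs, equal weights, threshold 1.5 -> AND gate.
weights = np.array([1.0, 1.0])
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron(np.array([a, b]), weights, 1.5))
```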
1. Activation:
• If Neuron A and Neuron B are activated simultaneously (both fire), the Hebbian learning model suggests that the connection strength between Neuron A and Neuron B should be strengthened.
2. Strengthening Connection:
• The connection strength is increased, making it more likely that the activation
of Neuron A will lead to the activation of Neuron B and vice versa.
3. Weakening Connection:
• If Neuron A fires while Neuron B is inactive or vice versa, the connection
strength should weaken.
4. Helpful in Neural Network Problems:
• Hebbian learning is beneficial in unsupervised learning scenarios where the
network learns patterns or associations in the absence of explicit labels.
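A minimal sketch of the Hebbian weight update, Δw = η · x · y; the learning rate η and the sample activations below are illustrative:

```python
def hebbian_update(w, x, y, eta=0.1):
    # Hebb's rule: the change in a weight is proportional to the product
    # of pre-synaptic activity x and post-synaptic activity y.
    # Co-active pairs (same sign) strengthen the weight;
    # mismatched activity weakens it.
    return w + eta * x * y

w = 0.0
for x, y in [(1, 1), (1, 1), (1, -1)]:   # two co-activations, one mismatch
    w = hebbian_update(w, x, y)
print(w)  # 0.1 + 0.1 - 0.1 = 0.1
```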
Limitations:
• Hebbian learning alone cannot solve complex real-world problems.
12. What is BPN? With a neat diagram, explain the Back Propagation algorithm used to reduce errors while training a feed-forward network.
Backpropagation is a fundamental and widely used supervised learning algorithm employed
in artificial neural networks (ANNs). It addresses the issue of adjusting the network's weights
and biases to minimize the error between the actual output and the desired target during
training.
1. Forward Pass:
In the forward pass, the input data is propagated through the network to produce the predicted
output.
• Inputs: Represent the features of the input data.
3. Compute Error:
Calculate the error (E) between the predicted output and the target output, typically as the squared error:
E = ½ Σk (tk − ok)²
where tk is the target and ok the predicted output of output unit k.
4. Backward Pass:
In the backward pass, the error is propagated backward through the network, and weights and
biases are adjusted to minimize the error.
5. Update Weights and Biases:
The weights (w_ij) and biases (b_j) are updated using the gradient descent optimization algorithm.
6. Calculate Gradients:
Compute the gradients with respect to the weights and biases using the chain rule.
7. Update Weights and Biases:
Use the computed gradients to update the weights and biases.
8. Repeat:
Repeat the forward and backward pass iteratively until the error converges to a satisfactory
level or a predetermined number of epochs is reached.
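A compact NumPy sketch of this loop for a small 2-2-1 sigmoid network attempting to learn XOR; the layer sizes, learning rate, and epoch count are illustrative choices, not values from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)       # input -> hidden
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)       # hidden -> output
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(5000):
    # Forward pass: propagate inputs through the network.
    h = sigmoid(X @ W1 + b1)
    o = sigmoid(h @ W2 + b2)
    # Error: E = 1/2 * sum((t - o)^2).
    E = 0.5 * np.sum((T - o) ** 2)
    # Backward pass: gradients via the chain rule.
    delta_o = (o - T) * o * (1 - o)            # dE/d(net at output)
    delta_h = (delta_o @ W2.T) * h * (1 - h)   # dE/d(net at hidden)
    # Gradient-descent updates of weights and biases.
    W2 -= lr * h.T @ delta_o
    b2 -= lr * delta_o.sum(axis=0)
    W1 -= lr * X.T @ delta_h
    b1 -= lr * delta_h.sum(axis=0)

print(np.round(o, 2), "final error:", round(E, 4))
```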
1. Input Layer: Passes the input features on to the hidden layer without transformation.
2. Hidden Layer: Neurons apply radial basis functions to transform input data into a higher-dimensional space. Each neuron calculates its activation based on the distance between the input data and its center.
3. Linear Combination: The output layer combines the activations of the hidden layer neurons linearly to produce the final output.
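A small sketch of the Gaussian radial basis activation commonly used by each hidden neuron, φ(x) = exp(−‖x − c‖² / (2σ²)); the centers, width, and output weights below are illustrative:

```python
import numpy as np

def rbf_activation(x, center, sigma=1.0):
    # Activation depends only on the distance between the input and
    # the neuron's center: 1 at the center, decaying outward.
    return np.exp(-np.sum((x - center) ** 2) / (2 * sigma ** 2))

x = np.array([1.0, 2.0])
centers = [np.array([1.0, 2.0]), np.array([4.0, 0.0])]
hidden = [rbf_activation(x, c) for c in centers]

# Output layer: linear combination of the hidden activations.
out_weights = [0.5, -0.3]                    # illustrative weights
y = sum(w * h for w, h in zip(out_weights, hidden))
print(hidden, y)
```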
UNIT 2
14. Explain the concept of Associative memory networks. Explain various types of associative memory networks. Explain the concept of the Auto Associative memory network.
Associative memory is defined as the ability to learn and remember the relationship between unrelated items; for example, remembering the name of someone or the aroma of a particular perfume. Associative memory deals specifically with the relationship between different objects or concepts.
E.g.: Imagine a bookshelf where books are not just organized alphabetically but also by theme and author. Associative memory networks function similarly, storing information in a way that allows retrieval based on related concepts or patterns.
Auto-associative memory:
Associative memory recovers a previously stored pattern that most closely
relates to the current pattern. It is also known as an auto-associative
correlator.
BAM (Bidirectional Associative Memory):
• Weights (W): The connections between the input and output layers are represented by a weight matrix W. The weights are updated during learning and determine the strength of associations between the patterns.
Features of BAM:
1. Bidirectional Associations: BAM can establish and recall bidirectional associations between input and output patterns.
3. Binary Neurons: Neurons in both the input and output layers are binary, simplifying the implementation and making the network suitable for binary data.
4. Fully Connected: The network is fully connected, allowing each neuron to potentially influence every other neuron in both layers.
16. Train and test for BAM network convergence. Use product rule to find the weight
matrix to store the following input signal pattern
To demonstrate the training and testing process for a Bidirectional Associative Memory
(BAM) network convergence, let's consider a simple example. We'll use the product rule to
find the weight matrix to store the following input signal pattern: X1 = [1, −1, 1].
The corresponding output pattern will be Y1. The BAM network will be trained to associate
X1 with Y1. We can then test the convergence of the network by presenting X1 and checking
if the network recalls the associated Y1.
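The notes give only X1 = [1, −1, 1]; the output pattern Y1 is not specified, so the sketch below assumes Y1 = [1, −1] purely for illustration. The weight matrix is the outer product W = X1ᵀ·Y1, and convergence is tested by passing patterns back and forth until they stop changing:

```python
import numpy as np

X1 = np.array([1, -1, 1])
Y1 = np.array([1, -1])            # assumed output pattern (not given in the notes)

W = np.outer(X1, Y1)              # product (outer-product) rule, shape (3, 2)

def sgn(v):
    # Bipolar threshold; ties broken toward +1 for simplicity.
    return np.where(v >= 0, 1, -1)

# Forward recall X -> Y, then backward recall Y -> X.
y = sgn(X1 @ W)
x = sgn(y @ W.T)
print("recalled Y:", y)           # equals Y1
print("recalled X:", x)           # equals X1, so the network has converged
```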
17. A Hetero Associative net is trained by Hebb's outer product rule for input row vectors s = (s1, s2, s3, s4) to output row vectors t = (t1, t2). Find the weight matrix.
S1 = [1 1 0 0] t1 = [1 0]
S2 = [0 1 0 0] t2 = [1 0]
S3 = [0 0 1 1] t3 = [0 1]
S4 = [0 0 1 0] t4 = [0 1]
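A sketch of the weight computation for this problem: with Hebb's outer-product rule the weight matrix is the sum of outer products, W = Σᵢ SᵢᵀTᵢ. The same approach applies to the bipolar vectors of question 19:

```python
import numpy as np

S = np.array([[1, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 0]])
T = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1]])

# Hebb's outer-product rule: accumulate s_i^T t_i over all training pairs.
W = sum(np.outer(s, t) for s, t in zip(S, T))
print(W)   # 4x2 weight matrix: [[1,0],[2,0],[0,2],[0,1]]
```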
18. Use Hebb rule to store the vector [1 1 -1 -1] in an Auto Associative network.
• Find the weight matrix
• Test the input vector x = [1 1 -1 -1]
• Test the network with one mistake in the input vector
• Test the network with one missing entry in the input vector
• Test the network with two mistakes in the input vector
• Test the network with two missing entries in the input vector
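A sketch of this exercise: store [1, 1, −1, −1] with the Hebb rule (W = sᵀs, here with the diagonal zeroed, a common convention so that no unit feeds itself), then recall with clean, mistaken, and missing-entry inputs. "Missing" entries are represented as 0, another common convention, and recall uses one synchronous update:

```python
import numpy as np

s = np.array([1, 1, -1, -1])
W = np.outer(s, s)                 # Hebb rule: W = s^T s (4x4)
np.fill_diagonal(W, 0)             # no self-connections

def recall(x):
    # One synchronous update; sgn with ties resolved to +1.
    return np.where(x @ W >= 0, 1, -1)

tests = {
    "stored vector": np.array([ 1,  1, -1, -1]),
    "one mistake":   np.array([-1,  1, -1, -1]),
    "one missing":   np.array([ 0,  1, -1, -1]),
    "two mistakes":  np.array([-1, -1, -1, -1]),
    "two missing":   np.array([ 0,  0, -1, -1]),
}
for name, x in tests.items():
    print(f"{name:13s} -> {recall(x)}")
```

Running this shows the stored vector is recovered from one mistake or one or two missing entries, while two mistakes exceed the network's correction capacity.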
19. Train a hetero-associative network using the Hebb rule for the following input vectors s = (s1, s2, s3, s4) and output vectors t = (t1, t2):
S1 = [ 1 1 -1 -1 ]
S2 = [-1 1 -1 -1 ]
S3 = [-1 -1 -1 1 ]
S4 = [-1 -1 1 1 ]
T1 = [-1 -1 1 1 ]
T2 = [ 1 1 -1 -1 ]
20. For the given input patterns, the target output for E is [-1, 1] and for F is [1, 1]. The pattern designs for E and F are given with matrix dimension 5x3:
E (5x3):
###
#..
###
#..
###

F (5x3):
###
#..
###
#..
#..
21. Train and test for BAM network convergence. Use the product rule to find the weight matrix to store the following input signal patterns. For the given input patterns, the target output for A is [-1, 1] and for C is [1, 1]. The pattern designs for A and C are given with matrix dimension 5x3:
A (5x3):
.#.
#.#
###
#.#
#.#

C (5x3):
.##
#..
#..
#..
.##
22. Design a Hopfield Network for 4-bit patterns. The training patterns are
S1 = [1 1 -1 -1]
S2 = [1 -1 -1 -1]
S3 = [1 1 1 1]
S4 = [-1 -1 -1 -1]
23. Find the weight matrix and the energy for the four input samples
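A sketch for question 22 (and, assuming the "four input samples" of question 23 refer to the same patterns, for the energy computation too): the Hopfield weight matrix is the sum of outer products of the training patterns with the diagonal zeroed, and the energy of a state x is E = −½ xᵀWx:

```python
import numpy as np

patterns = np.array([[ 1,  1, -1, -1],
                     [ 1, -1, -1, -1],
                     [ 1,  1,  1,  1],
                     [-1, -1, -1, -1]])

# Hebbian storage: W = sum_i s_i^T s_i, with zero diagonal
# (a Hopfield unit has no self-connection).
W = sum(np.outer(s, s) for s in patterns)
np.fill_diagonal(W, 0)
print("W =\n", W)

def energy(x, W):
    # Hopfield energy function: E = -1/2 * x^T W x.
    return -0.5 * x @ W @ x

for s in patterns:
    print(s, "energy:", energy(s, W))
```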
UNIT 3
1. What are Crisp sets? Explain various Features and properties of Crisp sets with
suitable examples
Ans:
Crisp sets, also known as classical sets, are the building blocks of set theory. They
deal with well-defined collections of objects where membership is absolute: an
element either belongs to a set or it does not, with no room for ambiguity. These
sets define clear boundaries for membership.
Features:
Binary Membership: The core principle of crisp sets is that an element can either be
a member of the set or not. There's no in-between.
Sharp Boundaries: The criteria for membership are well-defined, leaving no room
for vagueness.
Well-defined Operations: Operations like union, intersection, difference, and
complement have clear rules.
Properties:
Closure: Performing operations on crisp sets always results in another crisp set.
Commutativity: The order doesn't affect operations like union and intersection.
Associativity: Grouping doesn't change the outcome: (A ∪ B) ∪ C is the same as A ∪ (B ∪ C).
2. Why do Crisp sets play a crucial role in Systems? Explain various Functions on Crisp Sets.
Ans:
Reasons why Crisp sets play a crucial role in Systems:
• Crisp sets facilitate well-defined operations on data sets.
• Crisp sets provide a fundamental way to represent and organize data within a
system.
• The criteria for membership are well-defined, leaving no room for vagueness.
• Operations like union, intersection, difference, and complement have clear
rules.
• Crisp sets, with their binary membership, provide a foundation for building
Boolean logic
Functions on Crisp Sets:
Here are some essential functions performed on crisp sets within systems:
• Union (U): This function combines elements from two sets (A and B) to create a
new set containing all unique elements from both.
Example: A = {1, 3, 5} and B = {2, 4, 5}. A U B = {1, 2, 3, 4, 5}.
• Intersection (∩): This function identifies elements that are common to both sets
(A and B), resulting in a new set containing only those shared elements.
Example: A = {apple, banana, orange} and B = {mango, banana, grapes}. A ∩
B = {banana}.
• Difference (\): This function finds elements that are present in set A but not in set B, creating a new set with those unique elements.
Example: A = {red, blue, green} and B = {blue, yellow}. A \ B = {red, green}.
• Complement (~): This function creates a new set containing all elements in the
universe set (U) that are not members of set A.
Example: U = {all colors} and A = {primary colors}. ~A = {secondary colors,
tertiary colors, etc.}
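These operations map directly onto Python's built-in set type; a quick sketch using the numeric example above (the universe set U here is an illustrative choice):

```python
A = {1, 3, 5}
B = {2, 4, 5}
U = {1, 2, 3, 4, 5, 6}        # illustrative universe set

print(A | B)                  # union: {1, 2, 3, 4, 5}
print(A & B)                  # intersection: {5}
print(A - B)                  # difference: {1, 3}
print(U - A)                  # complement of A within U: {2, 4, 6}
```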
3. Explain the concept of Fuzzy sets. Explain various Operations on Fuzzy sets with examples.
Ans:
Unlike crisp sets with their clear-cut boundaries, fuzzy sets deal with the ambiguity of
the real world. They allow elements to have varying degrees of membership, ranging
from 0 (completely not a member) to 1 (completely a member).
Operations on Fuzzy Sets:
Here are some essential operations performed on fuzzy sets:
• Union (∪): The union of two fuzzy sets A and B assigns each element the maximum of its two membership degrees: µA∪B(x) = max(µA(x), µB(x)).
Example: µA(x) = 0.6 and µB(x) = 0.4 gives µA∪B(x) = 0.6.
• Intersection (∩): The intersection assigns each element the minimum of its two membership degrees: µA∩B(x) = min(µA(x), µB(x)).
Example: µA(x) = 0.6 and µB(x) = 0.4 gives µA∩B(x) = 0.4.
• Complement (~): The complement of a fuzzy set A assigns each element the degree 1 − µA(x).
Example: µA(x) = 0.7 gives µ~A(x) = 0.3.
• Algebraic Product: The algebraic product of two fuzzy sets A and B, defined as µA(x) · µB(x), represents the interaction or "AND" operation between them.
• Algebraic Sum: The algebraic sum of two fuzzy sets A and B, defined as µA(x) + µB(x) − µA(x) · µB(x), represents a combined effect, similar to an "OR" operation; the correction term keeps the result within [0, 1].
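A small sketch of these fuzzy operations on membership degrees (the sample values are illustrative):

```python
def f_union(a, b):        return max(a, b)          # fuzzy OR
def f_intersection(a, b): return min(a, b)          # fuzzy AND
def f_complement(a):      return 1.0 - a            # fuzzy NOT
def alg_product(a, b):    return a * b              # algebraic product
def alg_sum(a, b):        return a + b - a * b      # algebraic sum, stays in [0, 1]

mu_A, mu_B = 0.6, 0.4                               # illustrative degrees
print(f_union(mu_A, mu_B))         # 0.6
print(f_intersection(mu_A, mu_B))  # 0.4
print(f_complement(mu_A))          # 0.4
print(alg_product(mu_A, mu_B))     # 0.24
print(alg_sum(mu_A, mu_B))         # 0.76
```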
4. Elaborate in detail on the Membership function of Fuzzy sets. List out the differences between Crisp and Fuzzy Membership functions.
Ans:
The heart of a fuzzy set lies in its membership function, denoted by µ (mu). This
function defines the degree of membership for each element. In fuzzy logic, it
represents the degree of truth as an extension of valuation. A membership function
(MF) is a curve that defines how each point in the input space is mapped to a
membership value (or degree of membership) between 0 and 1. The input space is
often referred to as the universe of discourse.
One of the most commonly used examples of a fuzzy set is the set of tall people.
Imagine a fuzzy set representing "tall people." The membership function (µ_tall(x))
would map a person's height (x) to a degree of membership between 0 and 1.
Someone very tall might have a membership degree close to 1, while someone shorter
might have a lower degree. The key difference between the two kinds of membership
function is that a crisp membership function maps each element to exactly 0 or 1,
while a fuzzy membership function can map it to any value in [0, 1].
5. Discuss the triangular membership function in fuzzy controllers.
Ans:
The triangular membership function is one of the simplest and most widely used membership functions in fuzzy controllers. It is defined by three parameters: a lower bound a, a peak b, and an upper bound c. Membership rises linearly from 0 at x = a to 1 at x = b, then falls linearly back to 0 at x = c:
µ(x) = 0 for x ≤ a; (x − a)/(b − a) for a ≤ x ≤ b; (c − x)/(c − b) for b ≤ x ≤ c; 0 for x ≥ c.
Its computational simplicity makes it well suited to real-time fuzzy controllers, as shown in the sketch below.
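A sketch of the triangular membership function with illustrative parameters a = 20, b = 25, c = 30 (say, a "comfortable temperature" fuzzy set):

```python
def triangular_mf(x, a, b, c):
    # Rises linearly from 0 at a to 1 at the peak b,
    # then falls linearly back to 0 at c.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

for t in (18, 22, 25, 28, 32):
    print(t, "->", round(triangular_mf(t, 20, 25, 30), 2))
```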
6. What are Propositions? How are propositional logic operations implemented in fuzzy systems? Illustrate with suitable examples.
Ans:
A proposition is a declarative statement that has a truth value of either "true" or
"false". Propositions are the fundamental units of logical reasoning, forming the
foundation for more complex arguments. Whether a given proposition is true or
false is determined by its content in a specific context.
Examples of Propositions:
The sun is shining. (True)
The Earth is flat. (False)
Water boils at 100°C at sea level. (True)
Propositional Logic Operations:
Propositional logic, also called Boolean logic, deals with how propositions can be
combined to form new propositions with defined truth values based on the truth
values of the original propositions. In fuzzy systems, these operations are extended
from {0, 1} to degrees of truth in [0, 1]: AND is typically implemented as the minimum
of the two truth values, OR as the maximum, and NOT as 1 minus the truth value.
For example, if "the room is warm" has truth 0.7 and "the room is humid" has truth
0.4, then "warm AND humid" has truth min(0.7, 0.4) = 0.4.
7. What do you mean by Fuzzy Inference? Explain the Fuzzy Inference using
Propositions
Ans:
Fuzzy inference refers to the process of reasoning within a fuzzy system. Unlike
classical logic with its crisp true/false values, fuzzy inference deals with degrees of
truth, allowing for more nuanced decision-making in situations with uncertainty or
ambiguity.
9. Discuss Fuzzy inference system. Explain the steps involved in fuzzy inference
process
Ans:
Fuzzy inference refers to the process of reasoning within a fuzzy system. Unlike
classical logic with its crisp true/false values, fuzzy inference deals with degrees of
truth, allowing for more nuanced decision-making in situations with uncertainty or
ambiguity.
The process of fuzzy inference involves all the pieces that are described in
Membership Functions, Logical Operations, and If-Then Rules.
1. Knowledge Base:
The knowledge base stores the membership functions and the if-then rules that the system uses.
2. Fuzzification:
Fuzzification is the process of converting a crisp input value to a fuzzy value, performed using the information in the knowledge base.
3. Rule Evaluation:
The fuzzified inputs are applied to the if-then rules, and fuzzy logical operations determine the degree to which each rule fires.
4. Fuzzy Inference:
Fuzzy inference refers to the process of reasoning within a fuzzy system, combining the conclusions of the fired rules into a fuzzy output.
5. Defuzzification:
Defuzzification is a crucial step in fuzzy logic systems. It takes the fuzzy outputs
generated by fuzzy inference and converts them into a single, crisp numerical value
suitable for controlling the system.
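A toy end-to-end sketch of these steps for a single-input, single-output system; the "fan speed" rules, membership functions, and sample input are all invented for illustration. It fuzzifies a crisp temperature, evaluates two rules, aggregates their clipped outputs, and defuzzifies with the centroid method:

```python
def tri(x, a, b, c):
    # Triangular membership function (see the earlier sketch).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

temp = 27.0                                    # crisp input

# Steps 1-2: knowledge base + fuzzification.
mu_warm = tri(temp, 20, 25, 30)
mu_hot  = tri(temp, 25, 35, 45)

# Steps 3-4: rule evaluation and inference (each output set is clipped
# at its rule's firing strength, then the sets are aggregated with max):
#   IF warm THEN fan is medium; IF hot THEN fan is fast.
speeds = [s / 10 for s in range(0, 101)]       # fan-speed universe 0..10
aggregated = [max(min(mu_warm, tri(s, 2, 5, 8)),
                  min(mu_hot,  tri(s, 6, 10, 14))) for s in speeds]

# Step 5: defuzzification via the centroid (center of gravity).
crisp_out = sum(s * m for s, m in zip(speeds, aggregated)) / sum(aggregated)
print("fan speed:", round(crisp_out, 2))
```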
1. Discuss the evolution of deep learning. Explain the concept of deep learning.
Ans:
Deep learning is a branch of machine learning which is based on artificial neural
networks. It is capable of learning complex patterns and relationships within data.
• Deep Learning is a subfield of Machine Learning that involves the use of neural
networks to model and solve complex problems.
• The key characteristic of Deep Learning is the use of deep neural networks, which
have multiple layers of interconnected nodes.
• Deep Learning has achieved significant success in various fields, including image
recognition, natural language processing, speech recognition, and
recommendation systems.
• Training deep neural networks typically requires a large amount of data and
computational resources
• Norms guide the learning process by quantifying errors and driving weight updates.
Convolution Layer
This is the first step in the process of extracting valuable features from an image. A
convolution layer has several filters that perform the convolution operation. Every image
is considered as a matrix of pixel values.
ReLU layer:
ReLU stands for the rectified linear unit. Once the feature maps are extracted, the next
step is to move them to a ReLU layer. ReLU performs an element-wise operation and sets
all the negative pixels to 0. It introduces non-linearity to the network, and the generated
output is a rectified feature map.
Pooling Layer
Pooling is a down-sampling operation that reduces the dimensionality of the feature map.
The rectified feature map now goes through a pooling layer to generate a pooled feature
map
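A small NumPy sketch of the ReLU and 2x2 max-pooling steps on a toy 4x4 feature map (the values are made up):

```python
import numpy as np

feature_map = np.array([[ 1, -2,  3, -4],
                        [-1,  5, -6,  7],
                        [ 2, -3,  4, -5],
                        [-2,  6, -7,  8]], dtype=float)

# ReLU layer: set every negative value to 0 (rectified feature map).
rectified = np.maximum(feature_map, 0)

# Pooling layer: 2x2 max pooling with stride 2 halves each dimension.
pooled = rectified.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)     # 2x2 pooled feature map: [[5, 7], [6, 8]]
```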
5.Elaborate on the convolution operations in convolutional layer for a CNN model.
Ans:
Convolution is a mathematical operation that allows the merging of two sets of
information.
• In the case of CNN, convolution is applied to the input data to filter the
information and produce a feature map.
• In CNNs, the feature map is the output of one filter applied to the previous layer.
It is called feature map because it is a mapping of where a certain kind of feature
is found in the image.
• A convolution converts all the pixels in its receptive field into a single value.
• To perform convolution, the kernel slides over the input image; at each position it multiplies the pixels in its receptive field element-wise by the kernel values and sums the results. The result for each receptive field is written down in the feature map, as sketched below.
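A sketch of this sliding-window operation: a 3x3 kernel moves over a 5x5 input, and each position yields one value of the feature map. The input and kernel values are illustrative, and (as in most CNN libraries) the kernel is not flipped, so strictly speaking this computes cross-correlation:

```python
import numpy as np

image  = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.array([[ 1, 0, -1],
                   [ 1, 0, -1],
                   [ 1, 0, -1]], dtype=float)       # vertical edge detector

kh, kw = kernel.shape
out_h = image.shape[0] - kh + 1
out_w = image.shape[1] - kw + 1
feature_map = np.zeros((out_h, out_w))

for i in range(out_h):
    for j in range(out_w):
        # Element-wise multiply the receptive field by the kernel,
        # then sum: one number per position in the feature map.
        feature_map[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)

print(feature_map)     # 3x3 feature map (valid convolution, stride 1)
```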