ML Exam Prep
1) Artificial Neural Networks (ANN)
Ans:
Artificial Neural Networks (ANNs) are one of the most important techniques in the field of
artificial intelligence and machine learning. Inspired by the structure and functioning of the
human brain, ANNs are used to solve complex problems such as pattern recognition,
prediction, classification, and decision-making. They are at the core of modern deep learning
systems.
2. Definition
An Artificial Neural Network (ANN) is a computational model made up of interconnected
processing units (neurons) arranged in layers, which learns to map inputs to outputs by
adjusting the weights of its connections.
3. Biological Inspiration
ANNs are inspired by the biological neural networks in the human brain: artificial neurons
correspond to biological neurons, connection weights act like synapses, and the activation
function models whether a neuron fires.
4. Architecture of ANN
An ANN is organized into three kinds of layers:
1. Input Layer
2. Hidden Layer(s)
3. Output Layer
• Each connection between neurons has a weight, indicating the importance of the
input.
• A bias is added to shift the activation function.
5. Activation Function
An activation function decides whether a neuron fires by transforming its weighted input
into an output; common choices include the step, sigmoid, tanh, and ReLU functions.
6. Working of an ANN
1. Forward Propagation:
o Input data is passed through the layers.
o Each neuron applies a weighted sum and activation function.
o The output layer provides the prediction or decision.
2. Loss Calculation:
o The difference between predicted and actual values is calculated using a loss
function.
3. Backpropagation:
o Error is propagated backward to adjust weights and reduce the error.
o Optimization algorithms like Gradient Descent are used to update weights.
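Steps 1 and 2 above can be sketched for a single neuron (a minimal illustration; the inputs, weights, and function names here are our own assumptions):

```python
import math

# Forward propagation for one neuron: weighted sum + sigmoid activation.
def forward(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# Loss calculation: squared error between prediction and target.
def loss(predicted, actual):
    return 0.5 * (actual - predicted) ** 2

inputs, weights, bias = [1.0, 0.5], [0.4, -0.2], 0.1
y_pred = forward(inputs, weights, bias)  # forward propagation
error = loss(y_pred, 1.0)                # loss calculation
print(round(y_pred, 3), round(error, 3))  # → 0.599 0.081
```

Backpropagation would then use the gradient of this loss to adjust the weights and bias.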
7. Applications of ANN
• Handwriting and character recognition
• Image and speech recognition
• Medical diagnosis
• Stock market prediction
• Autonomous vehicles
• Natural language processing (NLP)
• Fraud detection
8. Advantages
9. Limitations
10. Conclusion
2) Perceptron Neural Network
Ans:
The perceptron is the most basic type of artificial neural network, originally proposed by
Frank Rosenblatt in 1958. It simulates the way a biological neuron works and is widely
regarded as the foundational building block of more advanced neural network models.
Perceptrons are used in supervised learning tasks for binary classification.
2. Definition
A Perceptron is a type of artificial neuron that takes multiple input values, multiplies them
with assigned weights, sums the result, and applies an activation function to produce a binary
output (0 or 1). It is used to classify data that is linearly separable.
3. Architecture of a Perceptron
• Inputs (x₁, x₂, ..., xₙ): Feature values from the dataset.
• Weights (w₁, w₂, ..., wₙ): Each input has an associated weight indicating its
importance.
• Bias (b): A constant that shifts the decision boundary.
• Summation Function: Computes the weighted sum of inputs.
• Activation Function: Applies a threshold to decide the final output (usually a step
function).
4. Working of Perceptron
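As a sketch of this working, assuming the classic perceptron learning rule (each weight is nudged by the error times its input), a perceptron can learn the linearly separable AND function:

```python
# Perceptron: weighted sum + step activation, trained with the
# classic perceptron learning rule on the AND function.
def predict(x, w, b):
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0

def train(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x, w, b)  # 0 if correct
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(and_data)
print([predict(x, w, b) for x, _ in and_data])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds a separating line.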
6. Types of Perceptron
• Single-Layer Perceptron: one layer of weights; can solve only linearly separable
problems.
• Multi-Layer Perceptron: one or more hidden layers; can solve non-linearly separable
problems.
7. Applications of Perceptron
8. Limitations
A single-layer perceptron can only classify data that is linearly separable; it fails on
problems such as XOR. These limitations are overcome by using Multi-Layer Perceptrons
(MLPs) and deep neural networks.
9. Conclusion
The perceptron neural network is a fundamental concept in the field of artificial intelligence
and machine learning. Although simple, it laid the foundation for modern deep learning
models. Understanding perceptrons helps in grasping how neural networks process and
classify data in real-world applications.
3) Multilayer Perceptron Learning
Ans:
Multilayer Perceptron (MLP) is an advanced type of artificial neural network (ANN) that
consists of multiple layers of neurons. Unlike the single-layer perceptron, MLP can solve
complex and non-linearly separable problems. It is widely used in modern machine learning
and deep learning applications due to its ability to learn intricate patterns from data.
2. Definition
A Multilayer Perceptron (MLP) is a feedforward neural network that consists of at least three
layers: an input layer, one or more hidden layers, and an output layer. Each layer contains
nodes (neurons) that are fully connected to the next layer. The network learns by adjusting
the weights through a process called backpropagation.
3. Architecture of MLP
• Input Layer: receives the feature values.
• Hidden Layer(s): one or more layers that transform the data through weighted
connections and activation functions.
• Output Layer: produces the final prediction.
4. Working of MLP
Input signals flow forward through the layers (forward propagation); the error at the
output is then propagated backward and the weights are adjusted (backpropagation).
5. Activation Functions
MLPs use activation functions to introduce non-linearity into the network. Common
activation functions include:
• Sigmoid: squashes the input to the range (0, 1).
• Tanh: squashes the input to the range (-1, 1).
• ReLU: outputs the input if positive, otherwise 0.
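As an illustrative sketch, common non-linear activations such as sigmoid, tanh, and ReLU can be written directly in code (the function names are ours):

```python
import math

# Common non-linear activations used in MLP hidden layers.
def sigmoid(z):
    return 1 / (1 + math.exp(-z))  # squashes to (0, 1)

def tanh(z):
    return math.tanh(z)            # squashes to (-1, 1)

def relu(z):
    return max(0.0, z)             # zero for negative inputs

print(sigmoid(0.0), tanh(0.0), relu(-2.0))  # → 0.5 0.0 0.0
```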
6. Example
Consider a digit recognition system where the goal is to identify handwritten digits (0–9). An
MLP can take pixel values as input, process them through hidden layers, and produce the
correct digit as output. This kind of problem involves complex patterns that a single-layer
perceptron cannot solve, but an MLP can.
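To make the contrast concrete, a tiny 2-2-1 network with step activations and hand-picked weights (our own illustrative choice) computes XOR, which a single-layer perceptron cannot:

```python
# A minimal 2-2-1 MLP with step activations and hand-picked weights
# that computes XOR: hidden unit h1 fires for OR, h2 fires for AND,
# and the output fires when OR is true but AND is not.
def step(z):
    return 1 if z >= 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # OR(x1, x2)
    h2 = step(x1 + x2 - 1.5)        # AND(x1, x2)
    return step(h1 - 2 * h2 - 0.5)  # h1 AND NOT h2

print([xor_mlp(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 0]
```

In practice the weights are learned by backpropagation rather than chosen by hand; the point here is only that a hidden layer makes the non-linearly separable XOR representable.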
7. Applications of MLP
9. Limitations
10. Conclusion
The Multilayer Perceptron is a powerful and widely-used neural network architecture capable
of solving both linear and non-linear problems. It forms the basis of more complex deep
learning models and is essential in various real-world AI applications. Understanding MLP is
crucial for anyone pursuing a career in artificial intelligence and data science.
4) What is the Backpropagation Algorithm? Explain its working.
Ans:
2. Definition
Backpropagation is a supervised learning algorithm that trains a neural network by
computing the error at the output, propagating it backward through the layers, and
updating the weights to reduce that error.
3. Objective of Backpropagation
The main goal of the backpropagation algorithm is to minimize the error between the
predicted output and the actual output by iteratively adjusting the network's weights and
biases. The algorithm proceeds in the following steps:
1. Forward Propagation
• The input is passed through the network layer by layer to produce the predicted
output.
2. Error Calculation
• The error is calculated using a loss function (e.g., Mean Squared Error or Cross-
Entropy), such as:
E = (1/2) (y_actual − y_predicted)²
3. Backward Propagation
• The error is propagated backward through the network to compute the gradient of
the loss with respect to each weight.
4. Weight Update
• Each weight is updated by gradient descent:
w_new = w_old − η (∂E/∂w)
where:
• η is the learning rate
• ∂E/∂w is the gradient of the error with respect to the weight w
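A single-weight version of this update can be sketched as follows (the linear neuron, learning rate, and starting value are our own illustrative choices):

```python
# One gradient-descent weight update for a single linear neuron with
# squared-error loss E = 0.5 * (y_actual - y_pred)**2.
def update_weight(w, x, y_actual, lr):
    y_pred = w * x                   # forward pass of a linear neuron
    grad = -(y_actual - y_pred) * x  # dE/dw for the loss above
    return w - lr * grad             # gradient descent step

w = 0.0
for _ in range(50):                  # repeated updates shrink the error
    w = update_weight(w, x=2.0, y_actual=1.0, lr=0.1)
print(round(w, 3))  # → 0.5 (the optimum, since 0.5 * 2.0 = 1.0)
```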
8. Advantages
10. Conclusion
The Backpropagation Algorithm is the backbone of neural network training. It allows the
model to learn from its mistakes by propagating the error backward and updating weights
accordingly. Its efficiency and accuracy make it an essential component in modern machine
learning and deep learning systems.
5) Explain different types of activation functions.
Ans:
In artificial neural networks, activation functions play a crucial role in determining whether a
neuron should be activated or not. They add non-linearity to the model, enabling it to learn
complex patterns and relationships in data. Activation functions help networks understand
intricate data structures and are key components in both shallow and deep learning models.
There are several types of activation functions, categorized into linear and non-linear
functions. In this assignment, we will focus on several of them, including the Binary,
Bipolar, and Ramp activation functions.
Binary Activation Function
➤ Definition:
The Binary Activation Function outputs either a 0 or 1, depending on whether the input
meets a specific threshold.
➤ Use Case:
Used in simple binary classification problems, where outputs are clearly divided into two
classes.
➤ Graph:
A step: the output is 0 for inputs below the threshold and jumps to 1 at the threshold.
Bipolar Activation Function
➤ Definition:
The Bipolar Activation Function is similar to the binary function but outputs -1 or +1, making
it suitable for data that includes negative values.
➤ Use Case:
➤ Graph:
A step between −1 and +1: the output is −1 below the threshold and jumps to +1 at the
threshold.
Ramp Activation Function
➤ Definition:
The Ramp Activation Function is a piecewise linear function. It increases linearly with input
until a certain point and then becomes constant.
➤ Use Case:
Used when a limited output range is required but still allowing for gradual change.
➤ Graph:
A straight diagonal line rising from 0 to 1, flat before and after that range.
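The activation functions described above can be sketched in code (a threshold of 0 and a [0, 1] ramp range are our own assumptions, matching the graphs):

```python
# Binary step: 0 below the threshold, 1 at or above it.
def binary_step(z, threshold=0.0):
    return 1 if z >= threshold else 0

# Bipolar step: -1 below the threshold, +1 at or above it.
def bipolar_step(z, threshold=0.0):
    return 1 if z >= threshold else -1

# Ramp: rises linearly from 0 to 1, flat outside that range.
def ramp(z):
    return min(1.0, max(0.0, z))

print(binary_step(0.3), bipolar_step(-0.3), ramp(0.4), ramp(5.0))
# → 1 -1 0.4 1.0
```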