
ASSIGNMENT 1

Name- Adarsh Maurya


Reg No-2024EN01
Environmental Engg Mtech 1st Yr
Soft Computing Methods in Engineering Problem Solving

Ques 1: Define an Artificial Neural Network and explain its basic components.
Ans: An Artificial Neural Network (ANN) is a computational model inspired by the human brain. It processes
data through layers of interconnected nodes (neurons) and learns from data by adjusting weights.
Key Components:
• Input Layer: Receives input data and passes it to the hidden layer.
• Hidden Layer(s): Extracts patterns and processes information through weighted connections.
• Output Layer: Produces the final prediction or classification.
• Weights and Biases: Adjustable parameters optimized during training to minimize prediction error.
• Activation Functions: Introduce non-linearity, enabling the network to model complex patterns.
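To make these components concrete, here is a minimal Python/NumPy sketch of a forward pass through one hidden layer; the layer sizes, random weights, and sigmoid activation are illustrative choices, not part of the question:

import numpy as np

def sigmoid(z):
    # Activation function: squashes values into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Input layer: a single example with 3 features.
x = np.array([0.5, -1.2, 0.3])

# Hidden layer: weights (3 inputs -> 4 neurons) and biases.
W1 = np.random.randn(4, 3)
b1 = np.zeros(4)
h = sigmoid(W1 @ x + b1)          # hidden activations

# Output layer: weights (4 hidden -> 1 output) and bias.
W2 = np.random.randn(1, 4)
b2 = np.zeros(1)
y_hat = sigmoid(W2 @ h + b2)      # final prediction
print(y_hat)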

Ques 2: How does an artificial neural network model the brain?


Ans: An artificial neural network (ANN) models the brain by mimicking the way neurons communicate. It
consists of layers of interconnected nodes (neurons), where each node processes input, applies a weight, and
passes the result through an activation function. Just like neurons in the brain, these nodes work together to
learn patterns and make decisions based on input data, adjusting their connections (weights) during training
to improve accuracy.

Ques 3: What are the limitations of using a perceptron as a model of biological neurons?
Why is the perceptron only capable of learning linearly separable functions?
Ans: The perceptron, while inspired by biological neurons, has several limitations as a model. It is a simple network with no hidden layers that applies a hard threshold (step function) to a single weighted sum of its inputs, so its decision boundary is linear. This limits its ability to capture the behaviour of biological neurons, which process information non-linearly, exhibit rich temporal dynamics, and interact across many layers. Because the perceptron can only compute one weighted sum and apply a threshold, it cannot handle the more intricate tasks that multi-layered networks can.
The perceptron is only capable of learning linearly separable functions because it uses a linear decision
boundary to classify data. This means it can only distinguish between data that can be separated by a straight
line (or hyperplane in higher dimensions). For problems where the data classes overlap in a non-linear way
(e.g., XOR problem), the perceptron fails to classify them correctly, as it lacks the capacity to learn non-linear
relationships.
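The XOR limitation can be checked directly by running the perceptron learning rule on the four XOR points. In this illustrative Python sketch (learning rate and epoch count chosen arbitrarily), the error never reaches zero because no straight line separates the two classes:

import numpy as np

# XOR truth table: no straight line separates the 0s from the 1s.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 1, 1, 0])

w = np.zeros(2)
b = 0.0
eta = 0.1

for epoch in range(100):
    errors = 0
    for xi, ti in zip(X, t):
        y = 1 if (w @ xi + b) >= 0 else 0   # step activation
        w += eta * (ti - y) * xi            # perceptron learning rule
        b += eta * (ti - y)
        errors += int(y != ti)
    if errors == 0:
        break

print("misclassified points after training:", errors)  # stays above 0 for XOR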

Ques 4: What is the difference between autoassociative and heteroassociative memory?


Ans: Autoassociative and heteroassociative memory are types of associative memory in neural networks, but
they differ in how they relate inputs and outputs:
Autoassociative Memory: This type of memory stores and retrieves patterns based on partial or noisy versions
of the same input. The input and output patterns are the same (i.e., it "associates" the input with itself). An
example is the Hopfield network, where the network recalls an entire stored pattern when given part of it.
Heteroassociative Memory: In contrast, heteroassociative memory stores different input-output pairs. It learns
to associate one pattern with a different pattern (i.e., the input and output are distinct). Given a specific input,
it retrieves a corresponding but different output.
In summary, autoassociative memory relates patterns to themselves, while heteroassociative memory links
different input and output patterns.
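A minimal Python/NumPy sketch of heteroassociative recall using a Hebbian outer-product weight matrix; the bipolar patterns below are made up for illustration:

import numpy as np

# Bipolar (+1/-1) pattern pairs: input X[i] is associated with a *different* output Y[i].
X = np.array([[ 1, -1,  1, -1],
              [-1, -1,  1,  1]])
Y = np.array([[ 1,  1, -1],
              [-1,  1,  1]])

# Hebbian outer-product storage: W = sum_i y_i x_i^T
W = Y.T @ X

# Recall: present an input pattern and threshold the response.
recalled = np.sign(W @ X[0])
print(recalled)   # reproduces Y[0] -> heteroassociative recall

# For autoassociative memory the output equals the input: W_auto = X.T @ X,
# and sign(W_auto @ noisy_x) recovers the stored pattern itself.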

Ques 5: What are the key differences between Hebbian learning and competitive
learning?
Ans: Hebbian learning and competitive learning are both unsupervised weight-update rules, but they differ in what drives the update. In Hebbian learning, a connection is strengthened whenever the neurons on both of its ends are active together ("neurons that fire together wire together"): the weight change is proportional to the product of the input and output activations (Δw = η · x · y), and every connection may be updated at each step, so the network learns correlations in the data. In competitive learning, the neurons in a layer compete to respond to each input; only the winning neuron (the one whose weight vector best matches the input) updates its weights, moving them toward that input (Δw_winner = η · (x − w)). Hebbian learning therefore captures correlations, while competitive learning partitions the input space into clusters, as in self-organizing maps and vector quantization.
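A minimal Python/NumPy sketch of the two update rules (learning rate, patterns, and layer sizes are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(0)
eta = 0.1
x = rng.random(4)                      # one input pattern

# Hebbian rule: every weight grows with the product of input and output activity.
w_hebb = rng.random(4)
y = w_hebb @ x                         # post-synaptic activity
w_hebb += eta * y * x                  # delta_w = eta * x * y

# Competitive rule: only the winning unit moves its weights toward the input.
W_comp = rng.random((3, 4))            # 3 competing units
winner = np.argmax(W_comp @ x)         # unit with the strongest response wins
W_comp[winner] += eta * (x - W_comp[winner])
print(w_hebb, winner)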

Ques 6: Explain the two major classes of learning paradigms: supervised learning and
unsupervised (selforganized) learning. What are the key differences that distinguish
these two learning paradigms?
Ans: Supervised learning and unsupervised (self-organized) learning are two major paradigms in machine
learning, each characterized by how the model learns from data. In supervised learning, the model is trained
on labeled data, meaning each input is paired with a known output (or label). The objective is for the model
to learn the mapping between inputs and outputs, and its accuracy is determined by how well it predicts these
labels during testing. Essentially, the model is "supervised" by being provided with the correct answers, which
helps guide its learning process. Common tasks include classification and regression, where the model aims
to generalize from the provided examples.
In contrast, unsupervised learning deals with unlabeled data, meaning the model has no pre-existing
knowledge of the correct output. The goal here is for the model to discover patterns, relationships, or structures
within the data on its own. There are no explicit correct answers, so the model "self-organizes" as it learns.
Common tasks in this paradigm include clustering, where the model groups similar data points, and
dimensionality reduction, which simplifies complex data while retaining key features. The primary difference
between the two is that supervised learning uses labeled data with guidance, whereas unsupervised learning
operates independently, working to uncover hidden patterns without external direction.

Ques 7: Explain the structure of a single-layer perceptron and how it makes decisions.
Include a discussion on the activation function.
Ans: A single-layer perceptron is a basic neural network model that consists of a single layer of output
neurons connected to input features through weighted connections. It is a simple yet foundational model for
binary classification tasks.
1. Structure:

Inputs: The perceptron takes in multiple input features (e.g., x1, x2, x3), where each input represents a
different characteristic of the data.
Weights: Each input is associated with a weight (w1, w2, w3, etc.) that determines the importance of the
input in making the decision. These weights are learned during the training process.
Bias: A bias term (b) is added to the weighted sum to adjust the threshold at which the neuron activates.
It helps shift the decision boundary.
Weighted Sum: The perceptron computes a weighted sum of the inputs, which can be represented
mathematically as:
z = w1 * x1 + w2 * x2 + w3 * x3 + ... + b
This weighted sum determines how "strong" the input is, based on the learned weights.
2. Decision-Making:
After computing the weighted sum, the perceptron applies an activation function to make a decision. In
a single-layer perceptron, the most commonly used activation function is the step function (also called
the Heaviside function).
Step Function (Activation Function): The step function outputs either 0 or 1 based on the weighted sum
z. If z exceeds a certain threshold (usually 0), the perceptron outputs 1 (indicating a positive class);
otherwise, it outputs 0 (indicating a negative class). Mathematically:
Output = 1 if z >= 0
0 if z < 0
The perceptron, therefore, classifies data by drawing a linear boundary between two classes based on
the learned weights and bias. If the data points are linearly separable, the perceptron can successfully
learn a decision boundary that separates them.
3. Limitations:
The single-layer perceptron can only learn linearly separable functions, meaning it can only classify
data that can be separated by a straight line or hyperplane. It struggles with more complex, non-linear
problems, which require multi-layer perceptrons or other advanced architectures.
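The decision rule described above takes only a few lines of Python; the weights, bias, and input below are placeholder values:

import numpy as np

def perceptron_predict(x, w, b):
    # Single-layer perceptron: weighted sum followed by a step (Heaviside) activation.
    z = np.dot(w, x) + b          # z = w1*x1 + w2*x2 + ... + b
    return 1 if z >= 0 else 0     # 1 for the positive class, 0 otherwise

w = np.array([0.4, -0.7, 0.2])    # learned weights (placeholders)
b = 0.1                           # bias shifts the decision boundary
x = np.array([1.0, 0.5, -0.3])    # one input example

print(perceptron_predict(x, w, b))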

Ques 8: Describe the backpropagation algorithm used for training ANNs. How does it
minimize the error in predictions?
Ans: The backpropagation algorithm is a method for training artificial neural networks (ANNs) that
minimizes prediction errors by adjusting the network's weights.
• Forward Pass: Input data is fed through the network to compute the output, which involves calculating weighted sums and applying activation functions.
• Error Calculation: The difference between the predicted output and the actual target output is quantified using a loss function.
• Backward Pass: The algorithm computes the gradient of the loss with respect to the output, indicating how much the output needs to change to reduce the error. This error is then propagated backward through the network, calculating the gradients for each weight using the chain rule.
• Weight Update: Weights are updated based on the calculated gradients, typically using a learning rate, which controls the size of the updates.
• Iteration: The process repeats for multiple epochs until the error converges to an acceptable level.
By systematically adjusting the weights based on the error, backpropagation effectively reduces prediction
errors, leading to improved performance of the ANN.
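A single-neuron Python sketch of these steps, using one training example and the 1/2 squared error convention also used in Ques 16; all numbers are placeholders:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, target = np.array([0.5, 0.2]), 1.0
w, b, eta = np.array([0.1, -0.3]), 0.0, 0.5

for epoch in range(3):
    # Forward pass
    y = sigmoid(w @ x + b)
    # Error calculation (1/2 squared error)
    loss = 0.5 * (target - y) ** 2
    # Backward pass: chain rule gives dL/dw = (y - target) * y * (1 - y) * x
    delta = (y - target) * y * (1 - y)
    grad_w, grad_b = delta * x, delta
    # Weight update: w_new = w_old - eta * dL/dw
    w -= eta * grad_w
    b -= eta * grad_b
    print(epoch, round(loss, 4))   # loss shrinks as the weights are adjusted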

Ques 9: Discuss the significance of a loss function in ANN training. Provide examples of
commonly used loss functions.
Ans: A loss function quantifies the difference between the predicted output of the neural network and the
actual target values. It serves as a guide for the optimization process during training, indicating how well the
model is performing. The goal of training is to minimize the loss function, which helps improve the model’s
accuracy.
Commonly Used Loss Functions:

• Mean Squared Error (MSE): Used for regression tasks; it calculates the average squared difference between predicted and actual values.
• Cross-Entropy Loss: Used for classification tasks; it measures the dissimilarity between the predicted probability distribution and the true distribution.
• Hinge Loss: Used for "maximum-margin" classification, mainly in Support Vector Machines (SVMs).
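Illustrative Python/NumPy computations of these losses on made-up predictions (the hinge-loss inputs are converted to ±1 labels purely for demonstration):

import numpy as np

y_true = np.array([1.0, 0.0, 1.0, 1.0])        # actual targets
y_pred = np.array([0.9, 0.2, 0.7, 0.4])        # model outputs (probabilities)

# Mean Squared Error: average squared difference (regression).
mse = np.mean((y_true - y_pred) ** 2)

# Binary cross-entropy: dissimilarity between predicted and true distributions.
eps = 1e-12                                     # avoid log(0)
bce = -np.mean(y_true * np.log(y_pred + eps) +
               (1 - y_true) * np.log(1 - y_pred + eps))

# Hinge loss: margin-based loss with labels in {-1, +1}, as in SVMs.
t = 2 * y_true - 1                              # map {0,1} -> {-1,+1}
scores = 2 * y_pred - 1                         # treat these as raw margins for illustration
hinge = np.mean(np.maximum(0.0, 1.0 - t * scores))

print(mse, bce, hinge)
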
Ques 10: Explain the role of activation functions in ANNs. Provide examples of common
activation functions and their characteristics.
Ans: Activation functions introduce non-linearity into the model, enabling it to learn complex patterns. They
determine the output of neurons based on their input and are crucial for deep learning.
Common Activation Functions:
• Sigmoid: Outputs values between 0 and 1, useful for binary classification; however, it can suffer from vanishing gradients.
• ReLU (Rectified Linear Unit): Outputs the input directly if positive; otherwise, it outputs zero. It helps mitigate the vanishing gradient problem and is widely used in hidden layers.
• Tanh: Outputs values between -1 and 1; it is zero-centered but can still face vanishing gradient issues.
• Softmax: Used in the output layer for multi-class classification; it converts raw scores into probabilities.
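These functions are simple to write down in Python/NumPy; the test vector below is arbitrary:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))        # output in (0, 1)

def relu(z):
    return np.maximum(0.0, z)              # 0 for negatives, identity otherwise

def tanh(z):
    return np.tanh(z)                      # output in (-1, 1), zero-centered

def softmax(z):
    e = np.exp(z - np.max(z))              # shift for numerical stability
    return e / e.sum()                     # probabilities that sum to 1

z = np.array([-2.0, 0.0, 3.0])
for f in (sigmoid, relu, tanh, softmax):
    print(f.__name__, f(z))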

Ques 11: Discuss strategies to prevent overfitting in ANNs, including regularization


techniques.
Ans: Overfitting occurs when a model learns the training data too well, resulting in poor generalization to
unseen data. Strategies to prevent overfitting include:
Regularization Techniques:
• L1 and L2 Regularization: Add a penalty to the loss function based on the size of the weights, discouraging overly complex models.
• Dropout: Randomly drops a fraction of neurons during training, preventing co-adaptation of neurons and promoting redundancy.
• Early Stopping: Monitor validation loss during training and stop when it starts to increase, indicating overfitting.
• Data Augmentation: Increase the diversity of training data through transformations (e.g., rotations, translations) to help the model generalize better.
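An illustrative sketch combining L2 regularization, dropout, and early stopping, assuming TensorFlow/Keras is installed; the layer sizes, penalty strength, dropout rate, and dummy data are arbitrary:

import numpy as np
from tensorflow import keras

# L2 weight penalty, dropout, and early stopping combined in one small model.
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu",
                       kernel_regularizer=keras.regularizers.l2(1e-3)),
    keras.layers.Dropout(0.5),          # randomly silence half the neurons during training
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)

# Dummy data just to make the example runnable end to end.
X = np.random.rand(500, 20)
y = (X.sum(axis=1) > 10).astype("float32")
model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[early_stop], verbose=0)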

Ques 12: Discuss how ANNs can be applied to image recognition tasks, including a brief
overview of the steps for image processing.
Ans: Artificial Neural Networks, especially Convolutional Neural Networks (CNNs), are widely used for
image recognition tasks. The general steps for image processing include:
1. Preprocessing: Resize images, normalize pixel values, and apply data augmentation.
2. Feature Extraction: Use convolutional layers to automatically learn hierarchical features from
images.
3. Pooling: Reduce dimensionality while retaining important features.
4. Classification: Use fully connected layers to classify the processed features into categories.

5. Post-processing: Apply techniques like thresholding to finalize predictions.
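A minimal Python/NumPy sketch of the preprocessing and augmentation steps on a dummy image (the image size and brightness shift are arbitrary):

import numpy as np

# Suppose img is an 8-bit grayscale image loaded as a NumPy array.
img = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)

# Preprocessing: scale pixel values to [0, 1].
x = img.astype(np.float32) / 255.0

# Simple data augmentation: horizontal flip and a small brightness shift.
x_flipped = np.fliplr(x)
x_brighter = np.clip(x + 0.1, 0.0, 1.0)

# Add the channel dimension expected by convolutional layers: (height, width, channels).
batch = np.stack([x, x_flipped, x_brighter])[..., np.newaxis]
print(batch.shape)   # (3, 32, 32, 1)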


Ques 13: Describe the architecture of CNNs and discuss their applications, particularly
in image processing and computer vision.
Ans: Architecture of Convolutional Neural Networks (CNNs)
Convolutional Neural Networks (CNNs) are designed for processing image data and typically consist of the
following layers:
1. Input Layer: Accepts the input image as a 3D tensor (height, width, channels).
2. Convolutional Layers: Apply filters (kernels) to extract features from the input image by
performing convolution operations, resulting in feature maps.
3. Activation Function: Non-linear functions like ReLU are applied to the feature maps to introduce
non-linearity.
4. Pooling Layers: Reduce the spatial dimensions of feature maps (commonly using max or average
pooling) to decrease computational load and retain important features.
5. Fully Connected Layers: Flatten the pooled feature maps and connect them to one or more fully
connected layers for classification.
6. Output Layer: Produces the final class probabilities using a softmax activation function for multi-
class classification tasks.
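An illustrative Keras sketch of this layer ordering, assuming TensorFlow/Keras is installed; the filter counts and the 28×28×1 input shape are placeholders (e.g., MNIST-sized grayscale images):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),                 # input layer: 3D tensor
    keras.layers.Conv2D(32, (3, 3), activation="relu"),    # convolution + ReLU
    keras.layers.MaxPooling2D((2, 2)),                     # pooling
    keras.layers.Conv2D(64, (3, 3), activation="relu"),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),                                 # flatten feature maps
    keras.layers.Dense(64, activation="relu"),              # fully connected layer
    keras.layers.Dense(10, activation="softmax"),           # class probabilities
])
model.summary()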
Applications of CNNs in Image Processing and Computer Vision
CNNs are widely used in various applications due to their effectiveness in feature extraction:
1. Image Classification: Classifying images into categories (e.g., identifying objects).
2. Object Detection: Detecting and localizing multiple objects within an image (e.g., YOLO, Faster R-
CNN).
3. Image Segmentation: Classifying each pixel in an image for tasks like autonomous driving (e.g., U-
Net, Mask R-CNN).
4. Facial Recognition: Identifying and verifying individuals from images.
5. Medical Image Analysis: Assisting in diagnosing diseases through analysis of X-rays and MRIs.
6. Style Transfer and Image Generation: Applying artistic styles to images and generating realistic
images using GANs.
In summary, CNNs have a specialized architecture for efficiently processing images and are applied across
numerous domains in computer vision, leading to significant advancements in the field.

Ques 14: Explain the structure and unique properties of RNNs. How do they differ
from traditional feedforward neural networks?
Ans: Structure of Recurrent Neural Networks (RNNs)
Recurrent Neural Networks (RNNs) are designed to process sequential data and consist of:
1. Input Layer: Accepts sequences of data, such as time series or sentences.
2. Hidden Layer: Contains recurrent connections that maintain a hidden state, allowing the network to
remember previous inputs in the sequence.
3. Output Layer: Generates output based on the processed sequence, either at each time step or after
the entire sequence.
Unique Properties of RNNs
• Temporal Dependency: RNNs can capture dependencies in sequential data, making them suitable for tasks like natural language processing.
• Memory Mechanism: The recurrent connections allow RNNs to store information about past inputs.
• Variable Input Length: RNNs can handle input sequences of varying lengths.
Differences from Traditional Feedforward Neural Networks
1. Architecture:
o Feedforward Networks: Have a linear flow of information without loops and lack memory.
o RNNs: Feature recurrent connections that allow for feedback and memory over time.
2. Handling Sequential Data:
o Feedforward Networks: Process data independently, unsuitable for sequential tasks.
o RNNs: Specifically designed for sequences, enabling them to model time-dependent patterns.
3. Training Complexity:
o Feedforward Networks: Simpler to train using standard gradient descent.
o RNNs: More complex due to issues like vanishing gradients, often requiring advanced
architectures like LSTMs or GRUs to manage long-range dependencies.
In summary, RNNs are tailored for sequential data processing, offering memory and temporal modeling
capabilities that traditional feedforward networks do not possess.
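A minimal Python/NumPy sketch of one vanilla RNN cell unrolled over a short sequence; the sizes and random weights are illustrative:

import numpy as np

# One step of a vanilla RNN cell: the hidden state carries memory of past inputs.
#   h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b_h)
input_size, hidden_size = 3, 4
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(hidden_size, input_size))
W_hh = rng.normal(size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

sequence = rng.normal(size=(5, input_size))   # 5 time steps
h = np.zeros(hidden_size)                     # initial hidden state

for x_t in sequence:                          # the same weights are reused at every step
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)

print(h)   # final hidden state summarizing the whole sequence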
Ques 15: Provide examples of popular neural network simulators and discuss their key features and
applications.
Ans: Popular neural network simulators, along with their key features and applications:
1. TensorFlow
• Key Features:
o Open-source library developed by Google.
o Supports both CPU and GPU computations.
o Provides high-level APIs (like Keras) for easy model building and training.
o Extensive support for deep learning, including CNNs, RNNs, and reinforcement learning.
o Offers TensorFlow Serving for deploying models in production.
• Applications:
o Widely used for image recognition, natural language processing, and time series forecasting.
o Supports research in AI and machine learning, enabling rapid prototyping and
experimentation.
2. Keras
• Key Features:
o High-level neural network API, often used as an interface for TensorFlow.
o User-friendly and modular, making it easy to build and experiment with neural networks.
o Supports various backends (TensorFlow, Theano, CNTK).
o Built-in support for common layers, optimizers, and loss functions.
• Applications:
o Popular for rapid prototyping and experimentation in research and industry.
o Used in applications like image classification, text processing, and speech recognition.
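A minimal end-to-end Keras sketch (assuming TensorFlow/Keras is installed; the data are random placeholders) showing how little code the high-level API requires:

import numpy as np
from tensorflow import keras

# Build, compile, and train a tiny classifier with the high-level Keras API.
X = np.random.rand(200, 8)                       # dummy data for demonstration
y = (X[:, 0] > 0.5).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

loss, acc = model.evaluate(X, y, verbose=0)
print("accuracy:", acc)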

Ques 16: Consider a neural network with a single hidden layer, as described below:
Input Layer: 2 neurons (x1, x2)
Hidden Layer: 2 neurons (h1, h2) with a sigmoid activation function
Output Layer: 1 neuron (o1) with a sigmoid activation function
Loss Function: Mean Squared Error
The initial weights and biases are given as follows: w1 (from x1 to h1): 0.15, w2 (from
x2 to h1): 0.20,
w3 (from x1 to h2): 0.25, w4 (from x2 to h2): 0.30, w5 (from h1 to o1): 0.40, w6 (from h2
to o1): 0.50,
b1 (bias to h1): 0.35, b2 (bias to h2): 0.35, b3 (bias to o1): 0.60. Given an input (x1, x2) =
(0.05, 0.10)
and a target output of 0.01.
Perform two iterations of backpropagation to update the weights. Use a learning rate
(η) of 0.5.
Ans: Problem Description
The problem involves performing two iterations of backpropagation on a neural network with the following
characteristics:

Input Layer: 2 neurons (x1, x2)


Hidden Layer: 2 neurons (h1, h2) with sigmoid activation functions
Output Layer: 1 neuron (o1) with a sigmoid activation function
Loss Function: Mean Squared Error (MSE)
Learning Rate (η): 0.5
Initial Weights and Biases:
- w1 (from x1 to h1): 0.15
- w2 (from x2 to h1): 0.20
- w3 (from x1 to h2): 0.25
- w4 (from x2 to h2): 0.30
- w5 (from h1 to o1): 0.40
- w6 (from h2 to o1): 0.50
- Biases:
- b1 (to h1): 0.35
- b2 (to h2): 0.35
- b3 (to o1): 0.60

Input Values: (x1, x2) = (0.05, 0.10)


Target Output: 0.01

Step-by-Step Solution (First Iteration)
Step 1: Forward Pass
1. Calculate the weighted sum for each neuron in the hidden layer:
For h1:
z_h1 = (x1 * w1) + (x2 * w2) + b1
= (0.05 * 0.15) + (0.10 * 0.20) + 0.35
= 0.0075 + 0.02 + 0.35
= 0.3775
For h2:
z_h2 = (x1 * w3) + (x2 * w4) + b2
= (0.05 * 0.25) + (0.10 * 0.30) + 0.35
= 0.0125 + 0.03 + 0.35
= 0.3925

2. Apply the sigmoid activation function to the weighted sums:


Sigmoid function: σ(z) = 1 / (1 + e^(-z))
For h1:
a_h1 = σ(0.3775) = 1 / (1 + e^(-0.3775)) ≈ 0.593
For h2:
a_h2 = σ(0.3925) = 1 / (1 + e^(-0.3925)) ≈ 0.596

Step 2: Output Neuron Calculation


1. Calculate the weighted sum for the output neuron:
z_o1 = (a_h1 * w5) + (a_h2 * w6) + b3
= (0.593 * 0.40) + (0.596 * 0.50) + 0.60
= 0.2372 + 0.298 + 0.60
= 1.1352
2. Apply the sigmoid activation function:
a_o1 = σ(1.1352) = 1 / (1 + e^(-1.1352)) ≈ 0.757
3. Calculate the error (using Mean Squared Error):
Error = 0.5 * (target - a_o1)^2
= 0.5 * (0.01 - 0.757)^2 ≈ 0.278

Step 3: Backpropagation (Output Layer)


1. Calculate the output error (delta) for the output neuron o1:
δ_o1 = (a_o1 - target) * a_o1 * (1 - a_o1)
= (0.757 - 0.01) * 0.757 * (1 - 0.757)
= 0.747 * 0.757 * 0.243
≈ 0.136

2. Update the weights from the hidden layer to the output layer (w5, w6):
Δw5 = -η * δ_o1 * a_h1
= -0.5 * 0.136 * 0.593
≈ -0.0404
Updated w5 = 0.40 + Δw5 ≈ 0.3596

Δw6 = -η * δ_o1 * a_h2


= -0.5 * 0.136 * 0.596
≈ -0.0405
Updated w6 = 0.50 + Δw6 ≈ 0.4595

3. Update the bias for the output layer (b3):


Δb3 = -η * δ_o1
= -0.5 * 0.136
≈ -0.068
Updated b3 = 0.60 + Δb3 ≈ 0.532

Step 4: Backpropagation (Hidden Layer)


1. Calculate the error (delta) for each hidden neuron:
For h1:
δ_h1 = δ_o1 * w5 * a_h1 * (1 - a_h1)
= 0.136 * 0.40 * 0.593 * (1 - 0.593)
≈ 0.0138

For h2:
δ_h2 = δ_o1 * w6 * a_h2 * (1 - a_h2)
= 0.136 * 0.50 * 0.596 * (1 - 0.596)
≈ 0.0164

2. Update the weights from the input layer to the hidden layer (w1, w2, w3, w4):
Δw1 = -η * δ_h1 * x1
= -0.5 * 0.0138 * 0.05
≈ -0.000345
Updated w1 = 0.15 + Δw1 ≈ 0.1497

Δw2 = -η * δ_h1 * x2
= -0.5 * 0.0138 * 0.10
≈ -0.00069
Updated w2 = 0.20 + Δw2 ≈ 0.1993

Δw3 = -η * δ_h2 * x1
= -0.5 * 0.0164 * 0.05
≈ -0.00041
Updated w3 = 0.25 + Δw3 ≈ 0.2496

Δw4 = -η * δ_h2 * x2
= -0.5 * 0.0164 * 0.10
≈ -0.00082
Updated w4 = 0.30 + Δw4 ≈ 0.2992

Step 5: Update Bias for Hidden Layer


1. Update the biases for the hidden neurons:
Δb1 = -η * δ_h1
= -0.5 * 0.0138
≈ -0.0069
Updated b1 = 0.35 + Δb1 ≈ 0.3431

Δb2 = -η * δ_h2
= -0.5 * 0.0164
≈ -0.0082
Updated b2 = 0.35 + Δb2 ≈ 0.3418

Step 6: Repeat for Second Iteration


After completing the first iteration, the same process is repeated for the second iteration, using the updated
weights and biases from the first iteration.

Second Iteration
Step 1: Forward Pass (Second Iteration)
1. Calculate the weighted sum for each neuron in the hidden layer using updated weights and biases from the
first iteration:
For h1:
z_h1 = (x1 * w1) + (x2 * w2) + b1
= (0.05 * 0.1497) + (0.10 * 0.1993) + 0.3431
= 0.007485 + 0.01993 + 0.3431
= 0.370515
For h2:
z_h2 = (x1 * w3) + (x2 * w4) + b2
= (0.05 * 0.2496) + (0.10 * 0.2992) + 0.3418
= 0.01248 + 0.02992 + 0.3418
= 0.3842

2. Apply the sigmoid activation function to the weighted sums:


For h1:
a_h1 = σ(0.370515) = 1 / (1 + e^(-0.370515)) ≈ 0.5915
For h2:
a_h2 = σ(0.3842) = 1 / (1 + e^(-0.3842)) ≈ 0.5948

Step 2: Output Neuron Calculation (Second Iteration)


1. Calculate the weighted sum for the output neuron using updated weights:
z_o1 = (a_h1 * w5) + (a_h2 * w6) + b3
= (0.5915 * 0.3596) + (0.5948 * 0.4595) + 0.532
= 0.2126 + 0.2733 + 0.532
= 1.0179

2. Apply the sigmoid activation function:


a_o1 = σ(1.0179) = 1 / (1 + e^(-1.0179)) ≈ 0.7346

3. Calculate the error (using Mean Squared Error):


Error = 0.5 * (target - a_o1)^2
= 0.5 * (0.01 - 0.7346)^2 ≈ 0.261

Step 3: Backpropagation (Output Layer - Second Iteration)


1. Calculate the output error (delta) for the output neuron o1:
δ_o1 = (a_o1 - target) * a_o1 * (1 - a_o1)
= (0.7346 - 0.01) * 0.7346 * (1 - 0.7346)
= 0.7246 * 0.7346 * 0.2654
≈ 0.1412

2. Update the weights from the hidden layer to the output layer (w5, w6):
Δw5 = -η * δ_o1 * a_h1
= -0.5 * 0.1412 * 0.5915
≈ -0.0417
Updated w5 = 0.3596 + Δw5 ≈ 0.3179

Δw6 = -η * δ_o1 * a_h2


= -0.5 * 0.1412 * 0.5948
≈ -0.0420
Updated w6 = 0.4595 + Δw6 ≈ 0.4175

3. Update the bias for the output layer (b3):


Δb3 = -η * δ_o1
= -0.5 * 0.1412
≈ -0.0706
Updated b3 = 0.532 + Δb3 ≈ 0.4614

Step 4: Backpropagation (Hidden Layer - Second Iteration)


1. Calculate the error (delta) for each hidden neuron:
For h1:
δ_h1 = δ_o1 * w5 * a_h1 * (1 - a_h1)
= 0.1412 * 0.3179 * 0.5915 * (1 - 0.5915)
≈ 0.0153

For h2:
δ_h2 = δ_o1 * w6 * a_h2 * (1 - a_h2)
= 0.1412 * 0.4175 * 0.5948 * (1 - 0.5948)
≈ 0.0170

2. Update the weights from the input layer to the hidden layer (w1, w2, w3, w4):
Δw1 = -η * δ_h1 * x1
= -0.5 * 0.0153 * 0.05
≈ -0.000383
Updated w1 = 0.1497 + Δw1 ≈ 0.1493

Δw2 = -η * δ_h1 * x2
= -0.5 * 0.0153 * 0.10
≈ -0.000765
Updated w2 = 0.1993 + Δw2 ≈ 0.1985

Δw3 = -η * δ_h2 * x1
= -0.5 * 0.0170 * 0.05
≈ -0.000425
Updated w3 = 0.2496 + Δw3 ≈ 0.2492

Δw4 = -η * δ_h2 * x2
= -0.5 * 0.0170 * 0.10
≈ -0.000850
Updated w4 = 0.2992 + Δw4 ≈ 0.2984

Step 5: Update Bias for Hidden Layer (Second Iteration)


1. Update the biases for the hidden neurons:
Δb1 = -η * δ_h1
= -0.5 * 0.0153
≈ -0.00765
Updated b1 = 0.3431 + Δb1 ≈ 0.3354

Δb2 = -η * δ_h2
= -0.5 * 0.0170
≈ -0.0085
Updated b2 = 0.3418 + Δb2 ≈ 0.3333
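The two iterations above can be reproduced with a short Python script; because the hand calculation rounds intermediate values (and orders the hidden-layer delta computation slightly differently in the second iteration), the printed numbers may differ a little in the last decimal places:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Network parameters from the problem statement.
x1, x2, target, eta = 0.05, 0.10, 0.01, 0.5
w1, w2, w3, w4, w5, w6 = 0.15, 0.20, 0.25, 0.30, 0.40, 0.50
b1, b2, b3 = 0.35, 0.35, 0.60

for iteration in (1, 2):
    # Forward pass
    a_h1 = sigmoid(w1 * x1 + w2 * x2 + b1)
    a_h2 = sigmoid(w3 * x1 + w4 * x2 + b2)
    a_o1 = sigmoid(w5 * a_h1 + w6 * a_h2 + b3)
    error = 0.5 * (target - a_o1) ** 2

    # Backward pass (deltas use the weights from this iteration's forward pass)
    d_o1 = (a_o1 - target) * a_o1 * (1 - a_o1)
    d_h1 = d_o1 * w5 * a_h1 * (1 - a_h1)
    d_h2 = d_o1 * w6 * a_h2 * (1 - a_h2)

    # Weight and bias updates with learning rate eta = 0.5
    w5 -= eta * d_o1 * a_h1
    w6 -= eta * d_o1 * a_h2
    b3 -= eta * d_o1
    w1 -= eta * d_h1 * x1
    w2 -= eta * d_h1 * x2
    b1 -= eta * d_h1
    w3 -= eta * d_h2 * x1
    w4 -= eta * d_h2 * x2
    b2 -= eta * d_h2

    print(f"iteration {iteration}: output={a_o1:.4f}, error={error:.4f}")
    print(f"  w1={w1:.4f} w2={w2:.4f} w3={w3:.4f} w4={w4:.4f} w5={w5:.4f} w6={w6:.4f}")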
