Name of Student: Yashika Asrani    PRN No.: 20070124041
Experiment 5
Title: Backpropagation Training Algorithm and Kolmogorov's Theorem
DoP: 25 September & 5 October 2023 DoS: 06 October 2023
Aim:
a) To design and simulate the backpropagation training algorithm for a multi-layer continuous perceptron.
b) To study Kolmogorov's theorem.
Learning Outcome:
To implement and understand the backpropagation training algorithm for a multi-layer continuous perceptron, and to study Kolmogorov's theorem.
Hardware/Software:
Hardware                  Software
M1 MacBook (2022)         macOS (optimized for the M1 architecture)
M1 Chip                   IDE: DataSpell
Ample RAM and storage     Programming in Python
Theory:
Neural networks are computational models inspired by the human brain's structure and functioning.
Backpropagation is a supervised learning algorithm commonly used to train neural networks. This
experiment involves implementing a neural network with backpropagation in Python using the NumPy
library.
Neural Network Architecture: The neural network architecture comprises an input layer, a hidden
layer, and an output layer. The number of neurons in each layer can be customized based on the problem
requirements. In this experiment, the input layer has three neurons, the hidden layer has three neurons,
and the output layer has one neuron.
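A quick way to see this 3-3-1 layout is to trace the array shapes through one forward pass. The snippet below is a minimal sketch: the random weights and variable names are placeholders, not the trained values used later in the program.
import numpy as np

x = np.array([0.6, 0.8, 0.0])                # 3 input neurons (pattern from part A)
W_ih = np.random.rand(3, 3)                  # input -> hidden weights, shape (3, 3)
W_ho = np.random.rand(3, 1)                  # hidden -> output weights, shape (3, 1)

hidden = 1 / (1 + np.exp(-(x @ W_ih)))       # shape (3,): three hidden activations
output = 1 / (1 + np.exp(-(hidden @ W_ho)))  # shape (1,): single output neuron
print(hidden.shape, output.shape)            # (3,) (1,)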
Activation Function: The sigmoid activation function is employed in this experiment. The sigmoid
function squashes the output between 0 and 1, making it suitable for binary classification problems.
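As a small numerical check of this squashing behaviour (a minimal sketch; the sample inputs are arbitrary), the sigmoid and the derivative form used later in the program can be evaluated directly:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.array([-5.0, 0.0, 5.0])
s = sigmoid(x)
print(s)            # approx [0.0067 0.5 0.9933] -- every value lies in (0, 1)

# Written in terms of the sigmoid output s, the derivative is s * (1 - s);
# this is exactly the form sigmoid_derivative() in the program below expects.
print(s * (1 - s))  # approx [0.0066 0.25 0.0066]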
Training Process:
1. Initialization:
• Weights and biases are initialized randomly. The initial weights play a crucial role in
the convergence and performance of the network.
• Learning rate (α) is set to control the step size during weight updates.
2. Feedforward:
• Input data is fed into the network, and the output is calculated using the current weights.
• Hidden and output layer activations are computed using the sigmoid activation function.
3. Backpropagation:
• The error between the predicted output and the target output is computed.
• Using the error, gradients are calculated for adjusting the weights.
• Weights are updated using the backpropagation algorithm and the learning rate (a single update step is worked through in the sketch after this list).
4. Iterations:
• The feedforward and backpropagation steps are repeated for a specified number of
epochs.
• The network gradually adjusts its weights to minimize the error between predicted and
target outputs.
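For concreteness, one output-weight update can be worked through by hand. This is a minimal sketch using the learning rate α = 0.3 and the target 0.9 from the problem statement in part A; the activation and weight values are illustrative only.
# One gradient-descent step for a hidden -> output weight, following the
# delta rule the program below applies: w_new = w_old + alpha * delta * h
alpha = 0.3                    # learning rate (problem statement, part A)
t, y = 0.9, 0.6                # target (part A) and an illustrative network output
h, w = 0.7, 0.5                # illustrative hidden activation and current weight

delta = (t - y) * y * (1 - y)  # error times sigmoid derivative = 0.072
w_new = w + alpha * delta * h  # 0.5 + 0.3 * 0.072 * 0.7 = 0.51512
print(delta, w_new)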
Results: After training, the network is tested using the provided input patterns, and the predicted
outputs are compared with the target outputs. The weights of the neural network are adjusted during
training to improve its accuracy in predicting the target outputs.
Conclusion: This experiment demonstrates the implementation of a neural network trained with backpropagation for supervised learning. The choice of hyperparameters, such as the number of hidden neurons and the learning rate, strongly influences performance. The backpropagation algorithm efficiently adjusts the weights, enabling the network to learn and make accurate predictions.
A) Problem Statement: Write a program (implemented here in Python) to find the new weights for the input pattern [0.6 0.8 0] with target output 0.9, using learning rate α = 0.3 and the sigmoid activation function.
Program:
import numpy as np

# Sigmoid activation: squashes any real input into the range (0, 1)
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, written in terms of the sigmoid output x
def sigmoid_derivative(x):
    return x * (1 - x)

# Forward pass: input layer -> hidden layer -> output layer
def feed_forward(inputs, weights_input_hidden, weights_hidden_output):
    hidden_inputs = np.dot(inputs, weights_input_hidden)
    hidden_outputs = sigmoid(hidden_inputs)
    final_inputs = np.dot(hidden_outputs, weights_hidden_output)
    final_outputs = sigmoid(final_inputs)
    return hidden_outputs, final_outputs
# Backward pass: compute error terms and update both weight matrices
def backpropagation(inputs, target, hidden_outputs, final_outputs,
                    weights_hidden_output, weights_input_hidden, learning_rate=0.3):
    # Learning rate alpha = 0.3 as specified in the problem statement
    error = target - final_outputs
    output_error_term = error * sigmoid_derivative(final_outputs)
    hidden_error = np.dot(output_error_term, weights_hidden_output.T)
    hidden_error_term = hidden_error * sigmoid_derivative(hidden_outputs)

    # Update weights
    weights_hidden_output += learning_rate * np.outer(hidden_outputs, output_error_term)
    weights_input_hidden += learning_rate * np.outer(inputs, hidden_error_term)
    return weights_input_hidden, weights_hidden_output

def train_neural_network(inputs, targets, epochs=10000):
    input_size = len(inputs[0])
    hidden_size = 3  # You can adjust the number of hidden neurons
    output_size = 1

    # Initialize weights randomly
    weights_input_hidden = np.random.rand(input_size, hidden_size)
    weights_hidden_output = np.random.rand(hidden_size, output_size)

    for epoch in range(epochs):
        for i in range(len(inputs)):
            input_data = inputs[i]
            target = targets[i]
            # Feedforward
            hidden_outputs, final_outputs = feed_forward(
                input_data, weights_input_hidden, weights_hidden_output)
            # Backpropagation
            weights_input_hidden, weights_hidden_output = backpropagation(
                input_data, target, hidden_outputs, final_outputs,
                weights_hidden_output, weights_input_hidden)
    return weights_input_hidden, weights_hidden_output

# Inputs and targets
inputs = np.array([[0.6, 0.8, 0], [0.5, 0.3, 0], [0.2, 0.1, 0]])
targets = np.array([[0.03], [0.02], [0.01]])

# Training
weights_input_hidden, weights_hidden_output = train_neural_network(inputs, targets)

# Test the neural network
for i in range(len(inputs)):
    _, prediction = feed_forward(inputs[i], weights_input_hidden, weights_hidden_output)
    print(f"Input: {inputs[i]}, Target: {targets[i]}, Prediction: {prediction}")
Output:
B) To study Kolmogorov's Theorem
Theory:
Kolmogorov's Theorem:
Kolmogorov complexity, formulated by Andrey Kolmogorov in the 1960s, is a fundamental concept
in algorithmic information theory. The theorem provides insights into the inherent complexity of
individual objects and the limits of compressibility. It formalizes the intuitive notion of the complexity
of an object as the size of the shortest possible description of that object.
Theoretical Foundation:
1. Algorithmic Complexity:
• Kolmogorov's theorem is concerned with the concept of algorithmic complexity, which
measures the amount of information needed to specify an object or sequence.
2. Definition of Kolmogorov Complexity:
• The Kolmogorov complexity of an object is defined as the length of the shortest possible description (in some fixed programming language) of that object; a small illustration follows this list.
3. Universal Turing Machines:
• The choice of a specific programming language is arbitrary. Kolmogorov considered the
use of a universal Turing machine that can simulate any other Turing machine. This choice
ensures that the complexity is language-independent.
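As a small illustration of "shortest possible description" (the program below is merely one short description written in Python, not a claim about the truly minimal one): a string of one million zeros is very long, yet it admits a description of only 19 characters.
# A one-million-character string ...
s = "0" * 1_000_000

# ... described by a program far shorter than the string itself.
program = 's = "0" * 1_000_000'
print(len(s), len(program))   # 1000000 versus 19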
Main Theorem:
• For any object x (a binary string or data sequence) and any universal Turing machine U, the length of the shortest binary program that, when executed by U, produces x as output equals the Kolmogorov complexity of x up to an additive constant that does not depend on x (the invariance theorem).
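In symbols (the standard invariance theorem, stated here for completeness): for any two universal Turing machines U and V there is a constant c, depending only on U and V and not on x, such that
|K_U(x) - K_V(x)| <= c   for every string x,
where K_U(x) denotes the length of the shortest program that makes U output x.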
Implications and Significance:
1. Uncomputability:
• Kolmogorov complexity is incomputable. There is no algorithm to determine the shortest
program for an arbitrary object.
2. Independence of Choice of Universal Turing Machine:
• The theorem holds for any choice of a universal Turing machine. The complexity is not
affected by the specific details of the simulation machinery.
3. Relation to Information Theory:
• Kolmogorov complexity is closely related to concepts in information theory, such as
entropy and compression. Short descriptions correspond to compressible data.
Limitations:
• The theorem does not provide an algorithm to find the shortest program for a given object.
It establishes the existence of such programs without providing a constructive method.
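Although the exact complexity cannot be computed, the length of any lossless compression of a string is a computable upper bound on its description length. The following is a minimal Python sketch (zlib is just one illustrative compressor, and the two sample strings are arbitrary):
import os
import zlib

regular = b"ab" * 500            # 1,000 bytes with an obvious pattern
random_bytes = os.urandom(1000)  # 1,000 (pseudo)random bytes

# len(zlib.compress(s)) is a computable upper bound on a description of s;
# the true Kolmogorov complexity itself is uncomputable.
print(len(zlib.compress(regular)))       # only a few dozen bytes
print(len(zlib.compress(random_bytes)))  # roughly 1,000 bytes or slightly more
The regular string compresses dramatically while the random one does not, mirroring the point above that short descriptions correspond to compressible data.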
Applications:
1. Data Compression:
• Understanding the inherent complexity of data is crucial in data compression
algorithms. Short descriptions lead to efficient compression.
2. Algorithmic Information Theory:
• Kolmogorov complexity is a foundational concept in algorithmic information theory,
providing a theoretical basis for understanding the limits of computability and
compressibility.
Conclusion:
Kolmogorov's Theorem contributes significantly to the theoretical understanding of algorithmic
complexity, providing deep insights into the fundamental limits of compressibility and the role of
universal Turing machines in defining complexity.