AAM pr-9

The document presents a Python implementation of a simple neural network using NumPy, featuring a sigmoid activation function and its derivative. It includes a training loop for the XOR problem, in which a 2-2-1 network learns to map the four input patterns to their expected outputs. After 10,000 epochs, the final predictions are printed, showing the network's performance on the training data.
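A useful detail when reading the listing: the derivative of the sigmoid can be written in terms of the sigmoid's own output, sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)), which is why sigmoid_derivative below is applied to values that have already been passed through sigmoid. A minimal numerical check of that identity (illustrative only, not part of the practical):

    import numpy as np
    z = 0.5
    s = 1 / (1 + np.exp(-z))                              # sigmoid(z)
    analytic = s * (1 - s)                                 # the x * (1 - x) form used in the code
    numeric = (1 / (1 + np.exp(-(z + 1e-6))) - s) / 1e-6   # finite-difference estimate
    print(analytic, numeric)                               # both are approximately 0.2350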


Practical No: 9

CODE:

import numpy as np

# Sigmoid activation and its derivative
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is assumed to already be a sigmoid output, so sigmoid'(z) = x * (1 - x)
    return x * (1 - x)

# Input dataset (4 samples, 2 features)
X = np.array([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1]
])

# Expected output (4 samples, 1 output) -- the XOR truth table
y = np.array([[0], [1], [1], [0]])

# Seed for consistent results
np.random.seed(1)

# Network architecture
input_layer_neurons = 2
hidden_layer_neurons = 2
output_neurons = 1

# Weights and biases
wh = np.random.uniform(size=(input_layer_neurons, hidden_layer_neurons))  # 2x2
bh = np.random.uniform(size=(1, hidden_layer_neurons))                    # 1x2
wo = np.random.uniform(size=(hidden_layer_neurons, output_neurons))       # 2x1
bo = np.random.uniform(size=(1, output_neurons))                          # 1x1

# Training loop
for epoch in range(10000):
    # ---- Feedforward ----
    hidden_input = np.dot(X, wh) + bh
    hidden_output = sigmoid(hidden_input)

    final_input = np.dot(hidden_output, wo) + bo
    predicted_output = sigmoid(final_input)

    # ---- Backpropagation ----
    error = y - predicted_output
    d_output = error * sigmoid_derivative(predicted_output)

    error_hidden = d_output.dot(wo.T)
    d_hidden = error_hidden * sigmoid_derivative(hidden_output)

    # ---- Updating weights and biases (learning rate = 0.1) ----
    wo += hidden_output.T.dot(d_output) * 0.1
    bo += np.sum(d_output, axis=0, keepdims=True) * 0.1
    wh += X.T.dot(d_hidden) * 0.1
    bh += np.sum(d_hidden, axis=0, keepdims=True) * 0.1

# Final output after training
print("Final Output after training:")
print(predicted_output)
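Once training finishes, the learned weights (wh, wo) and biases (bh, bo) can be reused to run a forward pass on any input pattern. A minimal sketch of such a helper, assuming the variables from the listing above are still in scope (the predict function itself is an illustrative addition, not part of the practical):

    def predict(inputs):
        # Forward pass through the trained 2-2-1 network
        hidden = sigmoid(np.dot(inputs, wh) + bh)
        return sigmoid(np.dot(hidden, wo) + bo)

    print(predict(np.array([[1, 0]])))  # close to 1, since XOR(1, 0) = 1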

OUTPUT:
[[0.06368082]
[0.94085536]
[0.94108726]
[0.06402009]]
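Rounding these values to the nearest integer recovers the XOR targets 0, 1, 1, 0, which is a quick way to confirm that the network has learned the mapping. An illustrative check, assuming predicted_output still holds the values shown above:

    labels = np.round(predicted_output).astype(int)
    print(labels.ravel())  # [0 1 1 0], matching y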
