
Lab 11:

Implement a simple feedforward neural network with a single hidden layer to classify a set of
data points based on their features and labels.

Solution:

import numpy as np

# Define the training data points and their labels (the XOR truth table)
features = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
labels = np.array([[0], [1], [1], [0]])

# Define the sigmoid activation function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Derivative of the sigmoid, expressed in terms of its output a = sigmoid(x)
def sigmoid_derivative(a):
    return a * (1 - a)

# Set random seed for reproducibility
np.random.seed(42)

# Define hyperparameters
input_size = 2
hidden_size = 4
output_size = 1
learning_rate = 0.1
num_epochs = 1000

# Initialize the weights with random values and the biases with zeros
W1 = np.random.randn(input_size, hidden_size)
b1 = np.zeros((1, hidden_size))
W2 = np.random.randn(hidden_size, output_size)
b2 = np.zeros((1, output_size))

# Training loop
for epoch in range(num_epochs):
    # Forward pass
    hidden_layer = sigmoid(np.dot(features, W1) + b1)
    output_layer = sigmoid(np.dot(hidden_layer, W2) + b2)

    # Compute the mean squared error loss
    loss = np.mean((output_layer - labels) ** 2)

    # Backward pass: propagate the loss gradient through the output and hidden layers
    d_loss = 2 * (output_layer - labels) / output_layer.shape[0]
    d_output = d_loss * sigmoid_derivative(output_layer)
    d_hidden = np.dot(d_output, W2.T) * sigmoid_derivative(hidden_layer)

    # Update the weights and biases
    W2 -= learning_rate * np.dot(hidden_layer.T, d_output)
    b2 -= learning_rate * np.sum(d_output, axis=0, keepdims=True)
    W1 -= learning_rate * np.dot(features.T, d_hidden)
    b1 -= learning_rate * np.sum(d_hidden, axis=0, keepdims=True)

    # Print the loss every 100 epochs
    if (epoch + 1) % 100 == 0:
        print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss:.4f}')

# Make predictions on new data points
new_data = np.array([[0.5, 0.5], [0.8, 0.2]])
hidden_layer = sigmoid(np.dot(new_data, W1) + b1)
predictions = sigmoid(np.dot(hidden_layer, W2) + b2)
print(predictions)

In this code example, we first define the training data points (features) and their
corresponding labels (labels). The four points and their labels form the XOR truth table, which
is not linearly separable, so a hidden layer is genuinely needed.

Next, we define the sigmoid activation function and its derivative, which will be used in the
forward and backward passes of the neural network.
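As a quick sanity check of the identity sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), the analytic
derivative can be compared against a central-difference estimate. This is only a verification
sketch, not part of the lab solution; it repeats the sigmoid and sigmoid_derivative definitions
from the listing above so that it runs on its own.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(a):
    # expects the sigmoid output a = sigmoid(x)
    return a * (1 - a)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)   # central difference
analytic = sigmoid_derivative(sigmoid(x))
print(np.max(np.abs(numeric - analytic)))   # should be close to 0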

We set the random seed for reproducibility and define the hyperparameters such as the
learning rate and number of epochs.

We initialize the weights (W1, W2) with random values and the biases (b1, b2) with zeros.
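For reference, with input_size = 2, hidden_size = 4 and output_size = 1 the parameter shapes come
out as follows. This is only a sanity-check sketch against the listing above:

print(W1.shape, b1.shape)   # (2, 4) (1, 4)
print(W2.shape, b2.shape)   # (4, 1) (1, 1)
# np.dot(features, W1) has shape (4, 4); adding b1 broadcasts the bias row over all four samples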

The training loop iterates over the specified number of epochs. In each iteration, it performs a
forward pass to compute the activations of the hidden layer and the output layer. Then it
computes the loss as the mean squared error between the network output and the labels. After
that, it performs a backward pass, using the chain rule to compute the gradients of the loss
with respect to the weights and biases. Finally, it updates the weights and biases by taking a
gradient-descent step scaled by the learning rate, and prints the loss every 100 epochs.
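One way to convince yourself that the backward pass is correct is to compare an analytic gradient
with a numerical estimate. The sketch below is an optional check, not part of the lab solution;
the helper mse_loss is hypothetical, and the variables features, labels, W1, b1, W2, b2 and
sigmoid are assumed to come from the listing above.

# Recompute the loss for perturbed parameters (hypothetical helper, not in the lab code)
def mse_loss(W1, b1, W2, b2):
    hidden = sigmoid(np.dot(features, W1) + b1)
    output = sigmoid(np.dot(hidden, W2) + b2)
    return np.mean((output - labels) ** 2)

eps = 1e-5
i, j = 0, 0                                  # which entry of W2 to check
W2_plus, W2_minus = W2.copy(), W2.copy()
W2_plus[i, j] += eps
W2_minus[i, j] -= eps
numeric_grad = (mse_loss(W1, b1, W2_plus, b2) - mse_loss(W1, b1, W2_minus, b2)) / (2 * eps)

# Analytic gradient of the same entry, computed exactly as in the training loop
hidden = sigmoid(np.dot(features, W1) + b1)
output = sigmoid(np.dot(hidden, W2) + b2)
d_output = (2 * (output - labels) / output.shape[0]) * output * (1 - output)
analytic_grad = np.dot(hidden.T, d_output)[i, j]
print(numeric_grad, analytic_grad)           # the two numbers should agree closely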

Finally, we use the trained neural network to make predictions on new data points: a forward pass
with the learned weights and biases produces outputs between 0 and 1, which can be read as the
estimated probability of class 1.
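If hard class labels are wanted instead of probabilities, the sigmoid outputs can simply be
thresholded at 0.5. A minimal sketch, assuming predictions comes from the listing above:

# Convert sigmoid outputs in (0, 1) into hard 0/1 class labels
predicted_classes = (predictions >= 0.5).astype(int)
print(predicted_classes)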
