
What is Forward Propagation in Neural Networks?

Last Updated : 23 Jul, 2025

Forward propagation is the fundamental process in a neural network by which input data passes through the network's layers to generate an output. In this article, we'll learn more about forward propagation and see how it is implemented in practice.

Understanding Forward Propagation

In forward propagation, input data moves through each layer of the neural network, where each neuron computes a weighted sum of its inputs, adds a bias and passes the result through an activation function to produce its output. This process determines the output of the network for a given set of inputs and the current state of the model parameters (weights and biases), and it must run before backpropagation can update those weights. Understanding this process helps in optimizing neural networks for tasks like classification, regression and more. Below is the step-by-step working of forward propagation:

1. Input Layer

  • The input data is fed into the network through the input layer.
  • Each feature in the input dataset corresponds to a neuron in this layer.
  • The input is usually normalized or standardized to improve model performance, as in the sketch below.
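
As a quick illustration, here is a minimal sketch (assuming NumPy and a made-up feature matrix) that standardizes each input feature to zero mean and unit variance:

Python
import numpy as np

# Hypothetical raw inputs: 3 samples, 2 features (made-up values)
X_raw = np.array([[8.5, 85.0],
                  [9.2, 92.0],
                  [7.8, 78.0]])

# Standardize each feature to zero mean and unit variance
X_std = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)
print(X_std)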

2. Hidden Layers

  • The input moves through one or more hidden layers where transformations occur.
  • Each neuron in a hidden layer computes a weighted sum of its inputs and applies an activation function to introduce non-linearity.
  • Each neuron receives inputs and computes Z = WX + b, where:
    • W is the weight matrix
    • X is the input vector
    • b is the bias term
  • An activation function such as ReLU or sigmoid is then applied (see the sketch after this list).
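
To make this concrete, here is a minimal sketch of a single hidden layer, written in the row-vector convention Z = XW + b (the same convention the implementation later in this article uses); the input values and weights are made up for illustration:

Python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)  # ReLU: max(0, z) element-wise

# 1 sample with 2 features, mapped to 3 hidden neurons (made-up values)
X = np.array([[0.5, -1.2]])
W = np.array([[0.1, 0.4, -0.3],
              [0.2, -0.5, 0.6]])
b = np.zeros((1, 3))

Z = np.dot(X, W) + b  # weighted sum plus bias
A = relu(Z)           # non-linear activation
print(A)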

3. Output Layer

  • The last layer in the network generates the final prediction.
  • The activation function of this layer depends on the type of problem (illustrated in the sketch after this list):
    • Softmax (for multi-class classification)
    • Sigmoid (for binary classification)
    • Linear (for regression tasks)
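
A minimal sketch of these three output activations in NumPy (the logit values are made up for illustration):

Python
import numpy as np

def sigmoid(z):                # binary classification
    return 1 / (1 + np.exp(-z))

def softmax(z):                # multi-class classification
    e = np.exp(z - np.max(z))  # subtract max for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])  # made-up logits
print(sigmoid(z))              # element-wise probabilities
print(softmax(z))              # probabilities that sum to 1
# Linear (regression): the output is simply z itself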

4. Prediction

  • The network produces an output based on the current weights and biases.
  • The loss function evaluates the error by comparing the predicted output with the actual values, as in the sketch below.
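
For example, here is a minimal sketch comparing predictions against targets with mean squared error, a common regression loss (the values are made up):

Python
import numpy as np

y_true = np.array([10.0, 12.0, 8.0])   # actual values
y_pred = np.array([9.5, 11.0, 8.4])    # hypothetical network output

mse = np.mean((y_true - y_pred) ** 2)  # mean squared error
print("MSE:", mse)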

Mathematical Explanation of Forward Propagation

Consider a neural network with one input layer, two hidden layers and one output layer.

[Figure: architecture of a neural network with one input layer, two hidden layers and one output layer]

1. Layer 1 (First Hidden Layer)

The transformation is: A^{[1]} = \sigma(W^{[1]}X + b^{[1]}) where:

  • W^{[1]} is the weight matrix,
  • X is the input vector,
  • b^{[1]} is the bias vector,
  • \sigma is the activation function.

2. Layer 2 (Second Hidden Layer)

A^{[2]} = \sigma(W^{[2]}A^{[1]} + b^{[2]})

3. Output Layer

Y = \sigma(W^{[3]}A^{[2]} + b^{[3]}) where Y is the final output. Thus the complete equation for forward propagation, keeping the same W^{[l]}A + b^{[l]} convention as above, is:

A^{[3]} = \sigma(W^{[3]}\sigma(W^{[2]}\sigma(W^{[1]}X + b^{[1]}) + b^{[2]}) + b^{[3]})

This equation illustrates how data flows through the network (a runnable sketch follows this list):

  • Weights (W) determine the importance of each input
  • Biases (b) adjust activation thresholds
  • Activation functions (\sigma) introduce non-linearity to enable complex decision boundaries.
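
Putting the three layers together, here is a minimal runnable sketch of the full forward pass in NumPy. The layer sizes and random weights are assumptions for illustration, and the code uses the row-vector convention (X on the left), matching the implementation below:

Python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(1, 4))  # 1 sample, 4 input features (made up)

# Small random weights and zero biases for each layer
W1, b1 = rng.normal(size=(4, 5)) * 0.1, np.zeros((1, 5))  # first hidden layer
W2, b2 = rng.normal(size=(5, 3)) * 0.1, np.zeros((1, 3))  # second hidden layer
W3, b3 = rng.normal(size=(3, 1)) * 0.1, np.zeros((1, 1))  # output layer

A1 = sigmoid(np.dot(X, W1) + b1)   # first hidden layer activations
A2 = sigmoid(np.dot(A1, W2) + b2)  # second hidden layer activations
Y = sigmoid(np.dot(A2, W3) + b3)   # final output
print("Output:", Y)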

Implementation of Forward Propagation

1. Import Required Libraries

Here we import the NumPy and Pandas libraries.

Python
import numpy as np
import pandas as pd

2. Create Sample Dataset

  • The dataset consists of CGPA, profile score and salary in LPA (lakhs per annum).
  • X contains only the input features (cgpa and profile_score).
Python
data = {'cgpa': [8.5, 9.2, 7.8], 'profile_score': [85, 92, 78], 'lpa': [10, 12, 8]}
df = pd.DataFrame(data)
X = df[['cgpa', 'profile_score']].values  # input features only, shape (3, 2)

3. Initialize Parameters

Random initialization of the weights avoids symmetry issues, where all neurons would otherwise learn the same function.

Python
def initialize_parameters():
    np.random.seed(1)                  # for reproducible results
    W = np.random.randn(2, 1) * 0.01   # small random weights, shape (2, 1)
    b = np.zeros((1, 1))               # bias initialized to zero
    return W, b

4. Define Forward Propagation

  • Z = XW + b computes the linear transformation (here X has shape (3, 2) and W has shape (2, 1)).
  • The sigmoid activation ensures the values remain between 0 and 1.
Python
def forward_propagation(X, W, b):
    Z = np.dot(X, W) + b      # linear transformation: Z = XW + b
    A = 1 / (1 + np.exp(-Z))  # sigmoid activation
    return A

5. Execute Forward Propagation

Here we execute forward propagation using the functions defined above.

Python
W, b = initialize_parameters()
A = forward_propagation(X, W, b)
print("Final Output:", A)

Output:

Final Output:
[[0.40566303]
[0.39810287]
[0.41326819]]

Each number is the model's predicted value for the corresponding input, produced before any training has occurred. Because the sigmoid activation is used, the outputs lie between 0 and 1 and can be read as probability-like scores for classification. Understanding forward propagation is crucial for building and optimizing deep learning models, as it forms the basis for making predictions before weight adjustments occur during backpropagation.

