
Python | Tensorflow nn.relu() and nn.leaky_relu()

Last Updated : 21 Apr, 2025

TensorFlow provides several activation functions for neural networks, two of which are ReLU (Rectified Linear Unit) and Leaky ReLU. Activation functions introduce non-linearity, which lets neural networks learn complex patterns. This article explores nn.relu() and nn.leaky_relu() in TensorFlow.

ReLU Activation Function

The ReLU function is defined as: f(x) = max(0, x)

This means that if the input is greater than zero, the output is the same as the input; otherwise, the output is zero.

In TensorFlow, the ReLU activation function is implemented as tf.nn.relu().
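Before looking at the full syntax, here is a minimal sketch (with arbitrary sample values) checking that tf.nn.relu() agrees with the definition max(0, x), computed directly with tf.maximum:

Python
import tensorflow as tf

# Arbitrary sample values covering negative, zero and positive inputs
x = tf.constant([-3.0, -0.5, 0.0, 2.0])

# tf.nn.relu should agree with the definition f(x) = max(0, x)
print(tf.nn.relu(x))       # [0. 0. 0. 2.]
print(tf.maximum(0.0, x))  # same values, computed from the formula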

Syntax: tf.nn.relu(features, name=None)

Parameters:

  • features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • name (optional): The name for the operation.
  • Return type: A tensor with the same type as that of features.

Example:

Python
import tensorflow as tf
input_tensor = tf.constant([[-1.0, 2.0], [3.0, -4.0]])

# Applying ReLU
output_tensor = tf.nn.relu(input_tensor)
print(output_tensor)

Output

tf.Tensor(
[[0. 2.]
 [3. 0.]], shape=(2, 2), dtype=float32)

The ReLU function suffers from what is called the “dying ReLU” problem. Since the slope of the ReLU function on the negative side is zero, a neuron stuck on that side is unlikely to recover from it.

This causes the neuron to output zero for every input, thus rendering it useless. A solution to this problem is to use Leaky ReLU which has a small slope on the negative side.
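The zero gradient on the negative side can be seen directly with tf.GradientTape. The sketch below (input values chosen arbitrarily) shows that the gradient of ReLU is 0 for negative inputs and 1 for positive ones, which is why a neuron that only receives negative pre-activations stops updating:

Python
import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.5, 2.0])

with tf.GradientTape() as tape:
    tape.watch(x)      # watch the constant so gradients flow back to it
    y = tf.nn.relu(x)

# Gradient is 0 for negative inputs and 1 for positive inputs
print(tape.gradient(y, x))  # [0. 0. 1. 1.]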

Leaky ReLU Activation Function

Leaky ReLU allows a small, non-zero gradient when the input is negative, which helps the model learn even from negative values.

The function is defined as: f(x) = max(αx, x)

Where α (alpha) is a small constant, typically set to 0.01. This means that if the input is negative, the output will be a small negative value instead of zero, ensuring that the neuron remains active.

In TensorFlow, the Leaky ReLU function is implemented as tf.nn.leaky_relu().
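A quick sketch (with arbitrary sample values) of how alpha controls the slope on the negative side; positive inputs pass through unchanged regardless of alpha:

Python
import tensorflow as tf

x = tf.constant([-4.0, -1.0, 0.0, 3.0])

# Larger alpha -> steeper negative-side slope; positive values are unchanged
for alpha in (0.01, 0.1, 0.2):
    print(alpha, tf.nn.leaky_relu(x, alpha=alpha).numpy())
# e.g. alpha=0.1 gives [-0.4 -0.1  0.  3.]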

Syntax: tf.nn.leaky_relu(features, alpha=0.2, name=None)

Parameters:

  • features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • alpha: The slope of the function for x < 0. Default value is 0.2.

  • name (optional): The name for the operation.
  • Return type: A tensor with the same type as that of features.

Example:

Python
import tensorflow as tf
input_tensor = tf.constant([[-1.0, 2.0], [3.0, -4.0]])

# Applying Leaky ReLU with alpha = 0.1
output_tensor = tf.nn.leaky_relu(input_tensor, alpha=0.1)
print(output_tensor)

Output

tf.Tensor(
[[-0.1  2. ]
 [ 3.  -0.4]], shape=(2, 2), dtype=float32)
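In practice, both activations are usually used inside model layers rather than called directly. Below is a minimal sketch (the layer sizes and 4-feature input shape are arbitrary, chosen only for illustration) of plugging them into a Keras model:

Python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),  # hypothetical 4-feature input
    # tf.nn.relu can be passed directly as a layer activation
    tf.keras.layers.Dense(16, activation=tf.nn.relu),
    # wrap tf.nn.leaky_relu in a lambda to set a custom alpha
    tf.keras.layers.Dense(16, activation=lambda x: tf.nn.leaky_relu(x, alpha=0.1)),
    tf.keras.layers.Dense(1),
])
model.summary()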
