ODE Project B23404
Differential Equations
Rohith Pranav
Roll Number: B23404
Course: MA-211 ODE
Instructor: Muslim Malik Sir
October 24, 2024
1 Introduction
This report presents a method for solving second-order ordinary differential equations (ODEs) using
Artificial Neural Networks (ANNs). Specifically, we solve the differential equation:

y''(x) + 5y'(x) + 6y(x) = 0,    with y(0) = 1,

on the interval 0 ≤ x ≤ 1.
2 Approach
Neural networks provide an alternative to traditional numerical methods for solving ODEs. In this
project, the neural network receives x as input and predicts the corresponding value of y(x), such that
the prediction satisfies the ODE.
4 Loss Function
To ensure that the neural network predicts a solution that satisfies the ODE, we define a loss function
with two components:
• Residual Loss: Measures how well the predicted y(x) satisfies the ODE at different points in the
domain. The residual is computed as:

Residual Loss = (1/N) Σ_{i=1}^{N} (y''(x_i) + 5y'(x_i) + 6y(x_i))^2
• Boundary Condition Loss: Enforces the boundary condition y(0) = 1 by adding a term to the
loss function:
Boundary Loss = 1000 · (y(0) − 1)^2
The total loss is then given by the sum of the residual loss and the boundary condition loss:

Total Loss = Residual Loss + Boundary Loss
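To make the two loss terms concrete, here is a minimal NumPy sketch of the total loss, using central finite differences in place of automatic differentiation (the helper `total_loss` and the step size `h` are illustrative choices, not part of the report's implementation). Evaluating it on the known solution e^(-2x) gives a value near zero:

```python
import numpy as np

def total_loss(y_fn, xs, h=1e-3, bc_weight=1000.0):
    """Residual + boundary loss for y'' + 5y' + 6y = 0 with y(0) = 1.
    Derivatives are approximated by central finite differences."""
    y = y_fn(xs)
    dy = (y_fn(xs + h) - y_fn(xs - h)) / (2 * h)        # approximates y'(x_i)
    d2y = (y_fn(xs + h) - 2 * y + y_fn(xs - h)) / h**2  # approximates y''(x_i)
    residual_loss = np.mean((d2y + 5 * dy + 6 * y) ** 2)
    boundary_loss = bc_weight * (y_fn(np.array([0.0]))[0] - 1.0) ** 2
    return residual_loss + boundary_loss

xs = np.linspace(0.0, 1.0, 100)
exact = lambda x: np.exp(-2.0 * x)   # candidate solution e^(-2x)
print(total_loss(exact, xs))         # close to zero: e^(-2x) satisfies ODE and BC
```

A function that does not satisfy the ODE, such as e^(-x), yields a large residual loss, which is exactly the signal the optimizer uses to steer the network toward a valid solution.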
5 Training Process
5.1 Data Generation
We generate 100 points, uniformly spaced between x = 0 and x = 1, to train the model. The neural
network is trained to minimize the total loss over these points.
6 Code Implementation
Below is the Python code used to implement the neural network for solving the ODE:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

# Network architecture (the layer widths and activations below are a
# plausible reconstruction; the original definition was lost)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation='tanh'),
    tf.keras.layers.Dense(32, activation='tanh'),
    tf.keras.layers.Dense(1)
])

# Input data
x_train = np.linspace(0, 1, 100).reshape(-1, 1)
x_train_tf = tf.convert_to_tensor(x_train, dtype=tf.float32)
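The listing above stops short of the loss computation and training loop. A minimal, self-contained sketch of how the two loss terms can be implemented with nested tf.GradientTape calls and minimized with Adam follows; the layer sizes, optimizer, learning rate, and step count are assumptions, as the report does not specify them:

```python
import numpy as np
import tensorflow as tf

# Assumed architecture: x -> y(x)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation='tanh'),
    tf.keras.layers.Dense(32, activation='tanh'),
    tf.keras.layers.Dense(1),
])

x_train = tf.convert_to_tensor(
    np.linspace(0, 1, 100).reshape(-1, 1), dtype=tf.float32)

def total_loss(x):
    # Nested tapes give y'(x) and y''(x) via automatic differentiation.
    with tf.GradientTape() as outer:
        outer.watch(x)
        with tf.GradientTape() as inner:
            inner.watch(x)
            y = model(x)
        dy = inner.gradient(y, x)
    d2y = outer.gradient(dy, x)
    residual_loss = tf.reduce_mean(tf.square(d2y + 5.0 * dy + 6.0 * y))
    # Boundary condition y(0) = 1, weighted by 1000 as in the report.
    boundary_loss = 1000.0 * tf.square(model(tf.zeros((1, 1)))[0, 0] - 1.0)
    return residual_loss + boundary_loss

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)
for step in range(500):
    with tf.GradientTape() as tape:
        loss = total_loss(x_train)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

Because the boundary term carries a weight of 1000, the optimizer quickly pins y(0) near 1 while the residual term shapes the rest of the curve.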
7 Results
After training, the neural network's solution aligns closely with the true solution y(x) = e^(-2x). Below is
the plot comparing the predicted solution and the analytical solution:
Figure 1: Comparison of Predicted and True Solutions for y''(x) + 5y'(x) + 6y(x) = 0
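For reference, the analytical solution follows from the characteristic equation of the ODE:

r^2 + 5r + 6 = 0  ⟹  (r + 2)(r + 3) = 0  ⟹  r = −2, −3,

so the general solution is y(x) = C_1 e^(-2x) + C_2 e^(-3x). The boundary condition y(0) = 1 gives C_1 + C_2 = 1; the solution plotted above, y(x) = e^(-2x), corresponds to C_1 = 1, C_2 = 0.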
8 Conclusion
This project demonstrates how neural networks can be effectively used to solve differential equations.
The network learns to approximate the solution by minimizing the residual of the equation and enforcing
boundary conditions. The approach can be extended to more complex equations and systems, such as
non-linear ODEs or coupled differential equations.
Future work could focus on experimenting with deeper architectures, alternative activation functions,
and extending the approach to solve higher-order or non-linear differential equations.