3D Curve Fitting With Python
Curve fitting is a widely used technique in the field of data analysis and mathematical modeling. It involves the process of finding a mathematical function that best approximates a set of data points. In 3D curve fitting, the process is extended to three-dimensional space, where the goal is to find a function that best represents a set of 3D data points.
Python is a popular programming language for scientific computing, and it provides several libraries that can be used for 3D curve fitting. In this article, we will discuss how to perform 3D curve fitting in Python using the SciPy library.
SciPy Library
The SciPy library is a powerful tool for scientific computing in Python. It provides a wide range of functionality for optimization, integration, interpolation, and curve fitting. In this article, we will focus on the curve-fitting capabilities of the library.
SciPy provides the curve_fit function, which can be used to perform curve fitting in Python. The function takes as input the data points to be fitted and the mathematical function to be used for fitting. The function then returns the optimized parameters for the mathematical function that best approximates the input data.
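To see the API in isolation before moving to 3D, here is a minimal 1D sketch (the straight-line model and sample data are illustrative, not part of the 3D example that follows):
Python3
import numpy as np
from scipy.optimize import curve_fit

# Illustrative model: a straight line with parameters a and b
def line(x, a, b):
    return a * x + b

# Noisy samples drawn around the line y = 2x + 1
x1 = np.linspace(0, 10, 50)
y1 = 2.0 * x1 + 1.0 + np.random.normal(0, 0.5, size=50)

params, cov = curve_fit(line, x1, y1)
print(params)  # should be close to [2.0, 1.0]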
Let's walk through the full step-by-step process of fitting 100 randomly generated 3D points using the SciPy library in Python.
Prerequisites: we use NumPy to generate and store the random points, SciPy for the curve fitting itself, and Matplotlib to plot the points in 3D space. First, install these libraries with the following commands in a terminal.
pip install numpy
pip install scipy
pip install matplotlib
3D Curve Fitting in Python
Let us now see how to perform 3D curve fitting in Python using the SciPy library. We will start by generating some random 3D data points using the NumPy library.
Python3
import numpy as np
# Generate random 3D data points
x = np.random.random(100)
y = np.random.random(100)
z = np.sin(x * y) + np.random.normal(0, 0.1, size=100)
data = np.array([x, y, z]).T
We have generated 100 random data points in 3D space, where the z-coordinate is defined as a function of the x and y coordinates with some added noise.
Next, we will define the mathematical function to be used for curve fitting. In this example, we will use a second-degree (quadratic) polynomial in x and y.
Python3
def func(xy, a, b, c, d, e, f):
    x, y = xy
    return a + b*x + c*y + d*x**2 + e*y**2 + f*x*y
The function takes as input the x and y coordinates of a data point, and the six parameters a, b, c, d, e, and f. These parameters are the coefficients of the polynomial function that will be optimized during curve fitting.
We can now perform curve fitting using the curve_fit function from the SciPy library.
Python3
from scipy.optimize import curve_fit
# Perform curve fitting
popt, pcov = curve_fit(func, (x, y), z)
# Print optimized parameters
print(popt)
Output:
[ 0.04416919 -0.12960835 -0.11930051 0.16187097 0.1731539 0.85682108]
The curve_fit function takes as input the mathematical function to be used for curve fitting and the data points to be fitted. It returns two arrays, popt and pcov. The popt array contains the optimized values of the parameters of the mathematical function, and the pcov array contains the covariance matrix of the parameters.
The curve_fit() function in Python is used to perform nonlinear regression curve fitting. It uses the least-squares optimization method to find the optimized parameters of a user-defined function that best fit a given set of data.
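As a quick sanity check (this snippet is illustrative and not part of the original walkthrough), you can evaluate the fitted model at the data points and compute the sum of squared residuals, which is exactly the quantity the optimizer minimizes:
Python3
# Evaluate the fitted model at the original data points
z_fit = func((x, y), *popt)

# Sum of squared residuals -- the objective that least squares minimizes
ss_res = np.sum((z - z_fit) ** 2)
print("Sum of squared residuals:", ss_res)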
About popt and pcov
popt and pcov are the two outputs of the curve_fit() function in Python. popt is a 1-D array of optimized parameters of the fitted function, while pcov is the estimated covariance matrix of the optimized parameters.
popt is calculated by minimizing the sum of squared residuals between the fitted function and the actual data points. By default (for unconstrained problems) curve_fit() uses the Levenberg-Marquardt algorithm, which iteratively adjusts the parameter values until the objective function converges.
pcov is estimated from the Jacobian of the model function at the optimized parameter values (approximately the inverse of JᵀJ, scaled by the residual variance). The diagonal elements of pcov are the variances of the optimized parameters, and the off-diagonal elements are the covariances between parameters; pcov is therefore used to estimate the uncertainty in the optimized parameter values.
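For example, a standard way to turn pcov into one-standard-deviation uncertainties for each parameter is to take the square roots of its diagonal:
Python3
# One-standard-deviation errors on the fitted parameters
perr = np.sqrt(np.diag(pcov))
print(perr)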
We can now use the optimized parameters to plot the fitted curve in 3D space.
Python3
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Create 3D plot of the data points and the fitted curve
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, color='blue')
x_range = np.linspace(0, 1, 50)
y_range = np.linspace(0, 1, 50)
X, Y = np.meshgrid(x_range, y_range)
Z = func((X, Y), *popt)
ax.plot_surface(X, Y, Z, color='red', alpha=0.5)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
Output:
3D curve fitting
The code above creates a 3D plot of the data points and the fitted curve. The blue dots represent the original data points, and the red surface represents the fitted curve.
Full code:
Below is the full code showing how to perform 3D curve fitting in Python using the SciPy library.
Python3
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# Generate random 3D data points
x = np.random.random(100)
y = np.random.random(100)
z = np.sin(x * y) + np.random.normal(0, 0.1, size=100)
data = np.array([x, y, z]).T
# Define mathematical function for curve fitting
def func(xy, a, b, c, d, e, f):
    x, y = xy
    return a + b*x + c*y + d*x**2 + e*y**2 + f*x*y
# Perform curve fitting
popt, pcov = curve_fit(func, (x, y), z)
# Print optimized parameters
print(popt)
# Create 3D plot of the data points and the fitted curve
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, color='blue')
x_range = np.linspace(0, 1, 50)
y_range = np.linspace(0, 1, 50)
X, Y = np.meshgrid(x_range, y_range)
Z = func((X, Y), *popt)
ax.plot_surface(X, Y, Z, color='red', alpha=0.5)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
Output:
3D curve fitting
Spline interpolation
Spline interpolation is a method of interpolation that uses a piecewise polynomial function to fit a set of data points. The interpolant is constructed by dividing the data into smaller subsets, or "segments," and fitting a low-degree polynomial to each segment. These polynomial segments are then joined together at points called knots, forming a continuous and smooth interpolant.
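As a small self-contained illustration of this idea (a sketch, separate from the RBF example below), SciPy's RectBivariateSpline fits piecewise polynomials of a chosen degree along each axis of gridded data:
Python3
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Coarse grid of samples from a known surface
x = np.linspace(-5, 5, 20)
y = np.linspace(-5, 5, 20)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.cos(np.sqrt(X**2 + Y**2))

# Bicubic spline (degree 3 in each direction); s=0 forces exact interpolation
spline = RectBivariateSpline(x, y, Z, kx=3, ky=3, s=0)

# Evaluate the spline on a finer grid
x_fine = np.linspace(-5, 5, 100)
y_fine = np.linspace(-5, 5, 100)
Z_fine = spline(x_fine, y_fine)  # shape (100, 100)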
Beyond splines, SciPy also offers radial basis function (RBF) interpolation through Rbf (the older interp2d function has been deprecated and removed in recent SciPy releases). The example below fits an RBF model to gridded data.
Python3
import numpy as np
from scipy.interpolate import Rbf
import matplotlib.pyplot as plt
# Generate gridded 3D data points (a 50x50 grid keeps the dense RBF system small)
x = np.linspace(-5, 5, 50)
y = np.linspace(-5, 5, 50)
X, Y = np.meshgrid(x, y)
Z = np.cos(np.sqrt(X**2 + Y**2))
# Fit a radial basis function model
rbf = Rbf(X, Y, Z, function="quintic")
Z_pred = rbf(X, Y)
# Plot the original data and the fitted function (semi-transparent so both show)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, alpha=0.5)       # original surface
ax.plot_surface(X, Y, Z_pred, alpha=0.5)  # RBF reconstruction
plt.show()
Output:
Spline interpolation
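Note that Rbf is a legacy interface; in recent SciPy versions the recommended replacement is RBFInterpolator, which takes the point coordinates as a single (n, 2) array. A minimal sketch under that assumption, reusing X, Y, and Z from the example above:
Python3
from scipy.interpolate import RBFInterpolator

# Stack the grid coordinates into an (n, 2) array of points
points = np.column_stack([X.ravel(), Y.ravel()])

# Fit and evaluate the interpolator with the same quintic kernel
rbf2 = RBFInterpolator(points, Z.ravel(), kernel="quintic")
Z_pred2 = rbf2(points).reshape(X.shape)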
In this article, we have discussed how to perform 3D curve fitting in Python using the SciPy library. We have generated some random 3D data points, defined a polynomial function to be used for curve fitting, and used the curve_fit function to find the optimized parameters of the function. We then used these parameters to plot the fitted curve in 3D space.
Curve fitting is a powerful technique for data analysis and mathematical modeling, and Python provides several libraries that make it easy to perform curve fitting. The SciPy library is a popular choice for curve fitting in Python, and it provides several functions that can be used for curve fitting in 1D, 2D, and 3D space.