Different Variants of Gradient Descent

Last Updated: 23 Jul, 2025

Gradient descent is an optimization algorithm in machine learning used to minimize a loss function by iteratively moving toward its minimum. It matters because fine-tuning parameters this way is how we reduce prediction errors. In this article we explore the main variants of the gradient descent algorithm.

1. Batch Gradient Descent

Batch Gradient Descent is a variant of the gradient descent algorithm in which the entire dataset is used to compute the gradient of the loss function with respect to the parameters. In each iteration the algorithm calculates the average gradient of the loss function over all training examples and updates the model parameters accordingly.

The update rule for batch gradient descent is:

\theta = \theta - \eta \nabla J(\theta)

where:
- \theta represents the parameters of the model,
- \eta is the learning rate,
- \nabla J(\theta) is the gradient of the loss function J(\theta) with respect to \theta.

Python Implementation
- Computes the gradient using all training examples.
- Averages the gradient over the full dataset.
- Updates theta once per epoch.
- Suitable for small to medium datasets.

Python

import numpy as np

def batch_gradient_descent(X, y, theta, lr=0.01, epochs=100):
    m = len(y)
    for _ in range(epochs):
        # Average gradient of the squared-error loss over all m examples
        gradients = (1/m) * X.T @ (X @ theta - y)
        theta -= lr * gradients  # one parameter update per epoch
    return theta

Advantages of Batch Gradient Descent
- Stable Convergence: Since the gradient is averaged over all training examples, the updates are less noisy and more stable.
- Global View: It considers the entire dataset for each update, providing a global perspective of the loss landscape.

Disadvantages of Batch Gradient Descent
- Computationally Expensive: Processing the entire dataset in each iteration can be slow and resource-intensive, especially for large datasets.
- Memory Intensive: It requires storing and processing the entire dataset in memory, which can be impractical for very large datasets.
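To see batch gradient descent in action, here is a minimal usage sketch. It is not part of the original article: the synthetic dataset, the learning rate of 0.1, and the epoch count are illustrative assumptions.

Python

import numpy as np

# Hypothetical demo data: y = 3*x0 + 2*x1 plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([3.0, 2.0]) + 0.1 * rng.normal(size=200)

# Note: the function updates theta in place via `-=`, so start from a fresh array
theta_hat = batch_gradient_descent(X, y, np.zeros(2), lr=0.1, epochs=500)
print(theta_hat)  # expected to end up close to [3.0, 2.0]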
2. Stochastic Gradient Descent

Stochastic Gradient Descent (SGD) is a variant of the gradient descent algorithm in which the model parameters are updated using the gradient of the loss function with respect to a single training example at each iteration. Unlike batch gradient descent, which uses the entire dataset, SGD updates the parameters far more frequently, which can lead to faster convergence.

The update rule for SGD is:

\theta = \theta - \eta \nabla J(\theta; x^{(i)}, y^{(i)})

where:
- \theta represents the parameters of the model,
- \eta is the learning rate,
- \nabla J(\theta; x^{(i)}, y^{(i)}) is the gradient of the loss function J(\theta) with respect to \theta for the i-th training example (x^{(i)}, y^{(i)}).

Python Implementation
- Updates theta using one example at a time.
- Leads to faster but noisier updates.
- Useful for online learning and large datasets.
- More sensitive to the learning rate.

Python

import numpy as np

def stochastic_gradient_descent(X, y, theta, lr=0.01, epochs=100):
    m = len(y)
    for _ in range(epochs):
        for i in range(m):
            xi = X[i:i+1]  # single example, kept 2-D so the matrix ops work
            yi = y[i:i+1]
            gradient = xi.T @ (xi @ theta - yi)
            theta -= lr * gradient  # one update per example
    return theta

Advantages of Stochastic Gradient Descent
- Faster Convergence: Frequent updates can lead to faster convergence, especially on large datasets.
- Less Memory Intensive: Since it processes one training example at a time, it requires far less memory than batch gradient descent.
- Better for Online Learning: Suitable for scenarios where data arrives as a stream, allowing the model to be updated continuously.

Disadvantages of Stochastic Gradient Descent
- Noisy Updates: Updates can be noisy, leading to a more erratic convergence path.
- Potential for Overshooting: The frequent updates can cause the algorithm to overshoot the minimum, especially with a high learning rate.
- Hyperparameter Sensitivity: Requires careful tuning of the learning rate to ensure stable and efficient convergence.
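Because SGD is sensitive to the learning rate, a common remedy is to decay the step size over time. The helper below is a hypothetical variant, not from the original article; the 1/(1 + decay * epoch) schedule is just one common choice.

Python

def sgd_with_decay(X, y, theta, lr0=0.05, epochs=100, decay=0.05):
    # Hypothetical variant: large early steps for fast progress,
    # smaller later steps to damp the noise near the minimum.
    m = len(y)
    for epoch in range(epochs):
        lr = lr0 / (1.0 + decay * epoch)  # inverse-time decay schedule
        for i in np.random.permutation(m):  # visit examples in random order
            xi, yi = X[i:i+1], y[i:i+1]
            theta -= lr * (xi.T @ (xi @ theta - yi))
    return theta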
3. Mini-Batch Gradient Descent

Mini-Batch Gradient Descent is a compromise between batch gradient descent and stochastic gradient descent. Instead of using the entire dataset or a single training example, it updates the model parameters using a small random subset of the training data called a mini-batch.

The update rule for mini-batch gradient descent is:

\theta = \theta - \eta \nabla J(\theta; \{x^{(i)}, y^{(i)}\}_{i=1}^{m})

where:
- \theta represents the parameters of the model,
- \eta is the learning rate,
- \{x^{(i)}, y^{(i)}\}_{i=1}^{m} represents a mini-batch of m training examples,
- \nabla J(\theta; \{x^{(i)}, y^{(i)}\}_{i=1}^{m}) is the gradient of the loss function J(\theta) with respect to \theta for the mini-batch.

Python Implementation
- Splits the data into mini-batches (e.g., 32 samples each).
- Shuffles the data each epoch for better generalization.
- Combines the speed of SGD with the stability of batch GD.
- Maps well onto parallel hardware such as GPUs.

Python

def mini_batch_gradient_descent(X, y, theta, lr=0.01, epochs=100, batch_size=32):
    m = len(y)
    for _ in range(epochs):
        # Shuffle once per epoch so every mini-batch is a fresh random subset
        indices = np.random.permutation(m)
        X_shuffled, y_shuffled = X[indices], y[indices]
        for i in range(0, m, batch_size):
            xb = X_shuffled[i:i+batch_size]
            yb = y_shuffled[i:i+batch_size]
            # Average gradient over the current mini-batch
            gradient = (1/len(yb)) * xb.T @ (xb @ theta - yb)
            theta -= lr * gradient
    return theta

Advantages of Mini-Batch Gradient Descent
- Faster Convergence: Mini-batches strike a balance between the noisy updates of SGD and the stable updates of batch gradient descent, often leading to faster convergence.
- Reduced Memory Usage: Requires less memory than batch gradient descent, since only one mini-batch needs to be held at a time.
- Efficient Computation: Allows efficient use of hardware optimizations and parallel processing, making it suitable for large datasets.

Disadvantages of Mini-Batch Gradient Descent
- Complexity in Tuning: Requires careful tuning of both the mini-batch size and the learning rate for optimal performance.
- Less Stable than Batch GD: While more stable than SGD, it can still be less stable than batch gradient descent, especially if the mini-batch size is too small.
- Potential for Suboptimal Mini-Batch Sizes: An inappropriate mini-batch size can lead to suboptimal performance and convergence issues.
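A quick way to build intuition for the batch_size parameter is to sweep it on the same data. The sketch below reuses the synthetic X and y from the batch gradient descent example; the specific sizes and hyperparameters are illustrative assumptions.

Python

# batch_size=1 behaves like SGD, batch_size=len(y) like batch GD
for bs in (1, 32, len(y)):
    theta_hat = mini_batch_gradient_descent(X, y, np.zeros(2),
                                            lr=0.05, epochs=200, batch_size=bs)
    mse = np.mean((X @ theta_hat - y) ** 2)
    print(f"batch_size={bs:>3}  final MSE={mse:.4f}")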
4. Momentum-Based Gradient Descent

Momentum-Based Gradient Descent is an enhancement of the standard gradient descent algorithm that aims to accelerate convergence, particularly in the presence of high curvature, small but consistent gradients, or noisy gradients. It introduces a velocity term that accumulates the gradient of the loss function over time, thereby smoothing the path taken by the parameters.

The update rule for momentum-based gradient descent is:

v_t = \gamma v_{t-1} + \eta \nabla J(\theta_t)
\theta_{t+1} = \theta_t - v_t

where:
- v_t is the velocity at iteration t,
- \gamma is the momentum term (typically between 0 and 1),
- \eta is the learning rate,
- \nabla J(\theta_t) is the gradient of the loss function J(\theta) with respect to \theta at iteration t.

Python Implementation
- Maintains a velocity vector v to smooth the updates.
- gamma controls how much past gradients influence the current step.
- Helps convergence, especially on noisy loss surfaces.
- Common in deep learning frameworks such as TensorFlow and PyTorch.

Python

def momentum_gradient_descent(X, y, theta, lr=0.01, epochs=100, gamma=0.9):
    m = len(y)
    v = np.zeros_like(theta)  # velocity starts at zero
    for _ in range(epochs):
        gradient = (1/m) * X.T @ (X @ theta - y)
        # Velocity is an exponentially decaying sum of past gradients
        v = gamma * v + lr * gradient
        theta -= v
    return theta

Advantages of Momentum-Based Gradient Descent
- Accelerated Convergence: Helps convergence, especially in scenarios with small but consistent gradients.
- Smoother Updates: Reduces oscillation in the gradient updates, leading to a smoother and more stable convergence path.
- Effective in Ravines: Particularly effective in ravines (regions of steep curvature), which are common in deep learning loss landscapes.

Disadvantages of Momentum-Based Gradient Descent
- Additional Hyperparameter: Introduces an extra hyperparameter (the momentum term) that needs to be tuned.
- More Complex Implementation: Slightly more complex to implement than standard gradient descent.
- Potential Overcorrection: If not properly tuned, the momentum can lead to overcorrection and instability in the updates.

Comparison between the Variants of Gradient Descent

| Variant | Data Used | Convergence | Memory Usage | Efficiency | Key Advantage |
| --- | --- | --- | --- | --- | --- |
| Batch Gradient Descent | Entire dataset | Stable but slow | High (entire dataset) | Computationally expensive | Stable convergence, global view of data |
| Stochastic Gradient Descent | One example per iteration | Fast but noisy | Low (one example) | Less efficient | Faster convergence, good for online learning |
| Mini-Batch Gradient Descent | Mini-batch of data | Faster and smoother | Medium (mini-batch) | Efficient, parallelizable | Balance of speed and stability |
| Momentum-Based Gradient Descent | Entire dataset or mini-batch | Faster and smoother | Medium (like mini-batch) | Efficient with momentum | Accelerated convergence, smooth updates |

Each variant has its advantages and suits different tasks depending on dataset size, computational resources, and the trade-off between speed and stability. The right choice for a given problem depends on the desired trade-offs and the available hardware and computational power.
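To make the comparison above concrete, the following sketch runs plain batch gradient descent and its momentum variant side by side and compares the final mean-squared error. It again reuses the synthetic X and y and the functions defined earlier; the hyperparameters are illustrative assumptions.

Python

theta_gd = batch_gradient_descent(X, y, np.zeros(2), lr=0.05, epochs=100)
theta_mom = momentum_gradient_descent(X, y, np.zeros(2), lr=0.05, epochs=100, gamma=0.9)

for name, th in [("batch GD", theta_gd), ("momentum", theta_mom)]:
    print(f"{name:>9}: final MSE = {np.mean((X @ th - y) ** 2):.5f}")

With a well-chosen gamma, the momentum run typically reaches a lower error in the same number of epochs, illustrating the accelerated convergence discussed above.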