
Python OpenCV - Background Subtraction

Last Updated : 11 Aug, 2025

Background subtraction is a technique in computer vision for detecting and isolating moving objects in video sequences. It plays an important role in applications such as video surveillance, traffic monitoring, gesture recognition and automatic scene analysis, where dynamic foreground elements must be distinguished from a static or slowly changing background.

OpenCV provides robust and widely used background subtraction methods, most notably:

  • MOG (Mixture of Gaussians): This algorithm models each background pixel by a mixture of Gaussians and updates the model over time, making it suitable for environments with subtle background changes.
  • MOG2: An improved version of MOG, this approach adds better adaptability to varying lighting conditions and can also detect and differentiate shadows from foreground objects, resulting in more accurate object segmentation.
  • GMG (Gaussian Mixture + Bayesian Segmentation): GMG combines statistical background image estimation with per-pixel Bayesian segmentation. It initially uses several frames to model the background, then applies Bayesian inference to distinguish foreground objects, making it effective even in complex or dynamic scenes.
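For orientation, the three subtractors are created through slightly different entry points: MOG and GMG live in the opencv-contrib bgsegm module, while MOG2 ships with core OpenCV. The sketch below shows their constructors with illustrative parameter values (the exact defaults vary between OpenCV versions), assuming opencv-contrib-python is installed as in Step 1.

Python
import cv2

# MOG and GMG come from the contrib module cv2.bgsegm; MOG2 is in the core module
mog = cv2.bgsegm.createBackgroundSubtractorMOG(history=200, nmixtures=5)
mog2 = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
gmg = cv2.bgsegm.createBackgroundSubtractorGMG(initializationFrames=120, decisionThreshold=0.8)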

Step-by-Step Implementation

Let's see the implementation of background subtraction using OpenCV step by step.

Click here to download the used sample video.

Step 1: Install and Import the Required Libraries

Python
!pip install opencv-contrib-python

import cv2
import numpy as np
from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow, used in later steps

Step 2: Upload and Prepare Video

Upload a sample video file and open it for frame-by-frame processing.

Python
from google.colab import files
uploaded = files.upload()

video_path = list(uploaded.keys())[0]
cap = cv2.VideoCapture(video_path)
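Before running a subtractor, it can help to confirm the capture opened correctly and to inspect basic video properties. The short check below is an optional addition (not part of the original steps) using standard VideoCapture properties.

Python
if not cap.isOpened():
    raise IOError(f"Could not open video: {video_path}")

fps = cap.get(cv2.CAP_PROP_FPS)                         # frames per second
total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))   # total frame count
print(f"FPS: {fps:.2f}, frames: {total_frames}")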

Step 3: Background Subtraction Using MOG

We use the MOG algorithm to model each pixel as a mixture of Gaussians, distinguishing moving objects (foreground) from the background. The resulting binary mask shows detected motion in white. We show only the first 30 frames for brevity.

Python
cap = cv2.VideoCapture(video_path)
fgbg_mog = cv2.bgsegm.createBackgroundSubtractorMOG()

frame_count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = fgbg_mog.apply(frame)
    cv2_imshow(fgmask)
    frame_count += 1
    if frame_count >= 30:
        break
cap.release()

Output:

Background Subtraction Using MOG
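Beyond viewing the raw mask, a common follow-up is to locate the moving objects it contains. The sketch below is an optional extension of the loop above: it finds contours in a single mask and draws bounding boxes on the corresponding frame. The area threshold of 500 pixels is an arbitrary value chosen here to suppress small noise blobs.

Python
# Inside the loop, after computing fgmask for a frame:
contours, _ = cv2.findContours(fgmask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if cv2.contourArea(cnt) > 500:                      # ignore small noise blobs
        x, y, w, h = cv2.boundingRect(cnt)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2_imshow(frame)                                       # frame with bounding boxes drawn on it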

Step 4: Background Subtraction Using MOG2

MOG2 extends MOG with shadow detection (shadow pixels appear in gray in the mask), making it more robust to varying lighting conditions. As before, moving regions appear as lighter areas in the mask.

Python
cap = cv2.VideoCapture(video_path)
fgbg_mog2 = cv2.createBackgroundSubtractorMOG2()

frame_count = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = fgbg_mog2.apply(frame)
    cv2_imshow(fgmask)
    frame_count += 1
    if frame_count >= 30:
        break
cap.release()

Output:

Background Subtraction Using MOG2
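Because MOG2 marks shadow pixels with an intermediate gray value (127 by default), you may want to remove them before further processing. Two common options are sketched below: disable shadow detection when constructing the subtractor, or keep it enabled and threshold the mask so only full-confidence foreground (255) remains.

Python
# Option 1: turn shadow detection off at construction time
fgbg_no_shadows = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

# Option 2: keep shadow detection but binarize the mask,
# discarding the gray (127) shadow pixels
_, binary_mask = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)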

Step 5: Background Subtraction Using GMG

GMG requires an initial training phase (about 120 frames by default) before it produces valid masks. Once trained, moving objects are highlighted; a morphological opening is applied to reduce noise and improve mask clarity.

Python
cap = cv2.VideoCapture(video_path)
fgbg_gmg = cv2.bgsegm.createBackgroundSubtractorGMG(
    initializationFrames=120, decisionThreshold=0.8)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
frame_count = 0
display_after = 130

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = fgbg_gmg.apply(frame)
    if frame_count > display_after:
        fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)
        cv2_imshow(fgmask)
    frame_count += 1
    if frame_count > display_after + 30:
        break
cap.release()

Output:

Background Subtraction Using GMG
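If you want to keep the GMG masks rather than just display them, they can be written to a video file with cv2.VideoWriter. The sketch below is an optional extension (the output filename and codec are illustrative): the writer is created before the processing loop, each single-channel mask is converted to BGR because VideoWriter expects 3-channel frames by default, and the writer is released at the end.

Python
# Set up the writer once, before the processing loop
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0                 # fall back to 30 if FPS is unknown
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("gmg_mask.mp4", fourcc, fps, (w, h))

# Inside the loop, after computing fgmask for a frame:
#     writer.write(cv2.cvtColor(fgmask, cv2.COLOR_GRAY2BGR))

# After the loop:
writer.release()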
