
Computer Vision

Image Segmentation
Contents

• Segmentation by clustering pixels
  • K-means
  • Mean shift
Ex: Image Segmentation

Cat/dog Classifier

• Image localization identifies the location of a single object
• For multiple objects, use localization together with object detection
• Image segmentation is applied before object localization and object detection
• Use a multi-class classifier to classify the different objects
Why Image Segmentation?
• Object detection builds a bounding box around each object class in the image
• Image segmentation creates a pixel-wise mask for each object in the image
• Segmentation therefore provides finer detail of the boundaries of the objects

[Figure: the same scene processed by object detection (boxes) and by image segmentation (masks)]

Ex: Image Segmentation
• The shape of cancerous cells plays a vital role in determining the severity of the cancer
• Image segmentation can be used to determine the shape of cancerous cells
Applications of Image Segmentation
Object detection
• Autonomous vehicles use devices like cameras to segment objects in their surroundings
• Detect and segment cancerous cell(s) in an image, then determine their shape to assess the severity of the cancer using clustering algorithms
Object localization
• If an image contains multiple types of objects, object localization is required
• Segment the objects first, then localize them
• Methods for localization include background subtraction, the mean shift algorithm, etc.
• Localized objects can then be classified using segmentation algorithms
Image Segmentation

• Partition an image into multiple regions based on the characteristics of the pixels in each region
• Each region contains pixels with similar attributes
• Clustering is a technique to group similar entities and label them
• The label identifies each group/region of pixels
• Typically used to locate objects and boundaries
Color Image Segmentation with K-means Clustering
• Identify and distinguish different objects or regions in an image based on their colors
• Clustering algorithms group similar colors together, without the need to specify threshold values for each color
• Useful for images that have a large range of colors, or when the exact threshold values are not known in advance
K-means clustering

• Unsupervised classification algorithm
• Groups samples into k clusters/groups based on their features
• Each group has a centroid in feature space (e.g., in a scatter plot)
• Minimizes the sum of the distances between each sample and its cluster centroid
• Usually the Euclidean distance is used
Steps: K-means clustering

1. Choose the number of groups, k
2. Choose k centroids at random locations
3. Assign each sample pixel in feature space to its nearest centroid
4. Update the position of each centroid to the mean of the samples assigned to it
5. Repeat steps 3 and 4 until the centroids do not move, or move less than a threshold distance in each step
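The steps above can be sketched in NumPy (a minimal illustration, not a production implementation; the optional `init` parameter is an addition here so that starting centroids can be supplied instead of drawn at random):

```python
import numpy as np

def kmeans(X, k, init=None, max_iters=100, seed=0):
    """Minimal k-means on X of shape (n_samples, n_features)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    if init is None:
        # Steps 1-2: pick k distinct samples as the initial centroids
        centroids = X[rng.choice(len(X), size=k, replace=False)]
    else:
        centroids = np.asarray(init, dtype=float)
    for _ in range(max_iters):
        # Step 3: assign each sample to its nearest centroid (squared Euclidean)
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Step 4: move each centroid to the mean of its assigned samples
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        # Step 5: stop once the centroids no longer move
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

Called on the 9-pixel example below with `init=[[11,200,4],[51,98,50]]`, this converges to the same two clusters as the hand computation.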
K-means clustering
• Select k initial points, where k is the number of clusters
• Each of the k points serves as the initial centroid of a cluster
• Assign every other point to the closest centroid

[Figure: points in a 2-D feature space (Feature 1 vs. Feature 2) assigned to initial centroids]
K-means clustering
• Recalculate the locations of the centroids
• Each coordinate of a centroid is the mean value of that coordinate over all points in the cluster
• Reassign the other points to the closest new centroid
• The recalculation of centroids is repeated until a stopping condition is satisfied

[Figure: cluster assignments once the stopping condition is satisfied]

K-means clustering for color image

[Figure: image and its feature plot in the R-G plane]
Ex: K-means clustering for color image
• A 3x3 color image has pixels with the following values
• Segment the image using k = 2

Sr No   R    G    B
 1      10   200  5
 2      29   150  30
 3      11   200  4
 4      11   200  6
 5      50   98   50
 6      51   98   50
 7      11   199  6
 8      9    198  5
 9      49   99   50
Ex: K-means clustering (Iteration 1)
• Choose initial centroid locations
• Centroid 1: (11, 200, 4) and Centroid 2: (51, 98, 50)
• Calculate the squared Euclidean distance of each sample from each centroid
• Assign each sample the label of its nearest centroid

Sr No  R   G    B   Label  Squared dist from centroid 1           Squared dist from centroid 2
 1     10  200  5   1      (10-11)²+(200-200)²+(5-4)² = 2         (10-51)²+(200-98)²+(5-50)² = 14110
 2     29  150  30  1      (29-11)²+(150-200)²+(30-4)² = 3500     (29-51)²+(150-98)²+(30-50)² = 3588
 3     11  200  4   1      (11-11)²+(200-200)²+(4-4)² = 0         (11-51)²+(200-98)²+(4-50)² = 14120
 4     11  200  6   1      (11-11)²+(200-200)²+(6-4)² = 4         (11-51)²+(200-98)²+(6-50)² = 13940
 5     50  98   50  2      (50-11)²+(98-200)²+(50-4)² = 14041     (50-51)²+(98-98)²+(50-50)² = 1
 6     51  98   50  2      (51-11)²+(98-200)²+(50-4)² = 14120     (51-51)²+(98-98)²+(50-50)² = 0
 7     11  199  6   1      (11-11)²+(199-200)²+(6-4)² = 5         (11-51)²+(199-98)²+(6-50)² = 13737
 8     9   198  5   1      (9-11)²+(198-200)²+(5-4)² = 9          (9-51)²+(198-98)²+(5-50)² = 13789
 9     49  99   50  2      (49-11)²+(99-200)²+(50-50)² = 13761    (49-51)²+(99-98)²+(50-50)² = 5

• Update the centroid locations to the means of the two clusters:
  Centroid 1 = mean of samples 1, 2, 3, 4, 7, 8 = (13.5, 191.2, 9.3)
  Centroid 2 = mean of samples 5, 6, 9 = (50, 98.3, 50)
Ex: K-means clustering (Iteration 2)
• Updated centroids: Centroid 1: (13.5, 191.2, 9.3), Centroid 2: (50, 98.3, 50)
• Calculate the squared distances from the updated centroids and reassign labels

Sr No  R   G    B   Label  Squared dist from centroid 1                Squared dist from centroid 2
 1     10  200  5   1      (10-13.5)²+(200-191.2)²+(5-9.3)² = 108.2    (10-50)²+(200-98.3)²+(5-50)² = 13967.9
 2     29  150  30  1      (29-13.5)²+(150-191.2)²+(30-9.3)² = 2366.2  (29-50)²+(150-98.3)²+(30-50)² = 3513.9
 3     11  200  4   1      (11-13.5)²+(200-191.2)²+(4-9.3)² = 111.8    (11-50)²+(200-98.3)²+(4-50)² = 13979.9
 4     11  200  6   1      (11-13.5)²+(200-191.2)²+(6-9.3)² = 94.6     (11-50)²+(200-98.3)²+(6-50)² = 13799.9
 5     50  98   50  2      (50-13.5)²+(98-191.2)²+(50-9.3)² = 11675.0  (50-50)²+(98-98.3)²+(50-50)² = 0.1
 6     51  98   50  2      (51-13.5)²+(98-191.2)²+(50-9.3)² = 11749.0  (51-50)²+(98-98.3)²+(50-50)² = 1.1
 7     11  199  6   1      (11-13.5)²+(199-191.2)²+(6-9.3)² = 78.0     (11-50)²+(199-98.3)²+(6-50)² = 13597.5
 8     9   198  5   1      (9-13.5)²+(198-191.2)²+(5-9.3)² = 85.0      (9-50)²+(198-98.3)²+(5-50)² = 13646.1
 9     49  99   50  2      (49-13.5)²+(99-191.2)²+(50-9.3)² = 11417.6  (49-50)²+(99-98.3)²+(50-50)² = 1.5

• Every sample keeps the label it had in iteration 1, so the centroids do not move
Ex: K-means clustering
• Compare the labels from iteration 1 and iteration 2
• The labels are identical, so stop iterating
• Assign a different color to each of the two groups of pixels in the image

Sr No  R   G    B   Label (Iter 1)  Label (Iter 2)
 1     10  200  5   1               1
 2     29  150  30  1               1
 3     11  200  4   1               1
 4     11  200  6   1               1
 5     50  98   50  2               2
 6     51  98   50  2               2
 7     11  199  6   1               1
 8     9   198  5   1               1
 9     49  99   50  2               2
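The two iterations of this example can be checked directly with NumPy (a verification sketch using the 9 pixels and the initial centroids from the example):

```python
import numpy as np

pixels = np.array([[10,200,5],[29,150,30],[11,200,4],[11,200,6],[50,98,50],
                   [51,98,50],[11,199,6],[9,198,5],[49,99,50]], dtype=float)
centroids = np.array([[11,200,4],[51,98,50]], dtype=float)

labels_history = []
for _ in range(2):  # the example converges after two iterations
    # squared Euclidean distance of every pixel from every centroid
    d2 = ((pixels[:, None] - centroids[None]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    labels_history.append(labels.copy())
    centroids = np.array([pixels[labels == j].mean(axis=0) for j in (0, 1)])

# labels unchanged between iterations -> stop
assert (labels_history[0] == labels_history[1]).all()
print(labels_history[1] + 1)   # [1 1 1 1 2 2 1 1 2]
# final cluster means: approximately (13.5, 191.2, 9.3) and (50, 98.3, 50)
```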
Image Classification using K means clustering
• Samples are pixels
• The features of each pixel are its red, green and blue intensities
• The color of each pixel depends on these 3 features
Image Classification using K means clustering
• The image has 426 rows, 640 columns and 3 channels
• It has 272640 pixels
• It has 172388 unique colors
Image Classification using K means clustering
• Each centroid has 3 features
• K-means allocates one centroid (label) to each pixel
• Each label, and every pixel carrying it, is given a unique color

[Figure: segmented results for K = 40 and K = 5]
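The pixel-count arithmetic above can be checked directly. A sketch with a synthetic stand-in image of the same shape (the unique-color count of 172388 is specific to the pictured image and is not reproduced by random data):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(426, 640, 3), dtype=np.uint8)  # stand-in image

pixels = img.reshape(-1, 3)       # one row per pixel, features = (R, G, B)
print(pixels.shape[0])            # 272640 pixels = 426 * 640
n_colors = np.unique(pixels, axis=0).shape[0]  # number of distinct (R,G,B) triples
```

This reshaped (n_pixels, 3) array is exactly the sample matrix that k-means clusters.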
K-means clustering
• Some common stopping conditions for k-means clustering are:
  • There is no significant change in the centroids in the next iteration
  • Data points remain in the same cluster in the next iteration
  • A set number of iterations has been completed
Ex: Color Segmentation with K-means Clustering
• Apply the K-means algorithm to cluster balls of the same color
• Count the number of balls
K-means clustering (optimum value of ‘k’)

• The correct choice of K is ambiguous
• The value of K depends on
  • the shape and scale of the distribution of points
  • the clustering resolution desired by the user
• Increasing K reduces the error in the cluster allocation
• The elbow method can be used to determine the value of K
K means clustering (Elbow Method)
• Define clusters such that the total intra-cluster variation, the Within-Cluster Sum of Squares (WCSS), is minimized
• WCSS is also called the objective function
• WCSS measures the compactness of the clustering
• It is summed over all data points in each cluster
K means clustering (Elbow Method)
• Compute K-means clustering for different values of K
• For each value of K, calculate the total within-cluster sum of squares (WCSS)
• The location of a bend (knee) in the WCSS-vs-K plot denotes the optimum value of K

• K-means sometimes fails due to the random choice of initial centroids; this is called the random initialization trap
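WCSS for a given clustering is the sum of squared distances of every point to the centroid of its own cluster. A small sketch, evaluated on the 9-pixel example's converged clusters (in the elbow method this value would be computed for a range of K and plotted):

```python
import numpy as np

pixels = np.array([[10,200,5],[29,150,30],[11,200,4],[11,200,6],[50,98,50],
                   [51,98,50],[11,199,6],[9,198,5],[49,99,50]], dtype=float)
labels = np.array([0, 0, 0, 0, 1, 1, 0, 0, 1])     # converged assignment (k = 2)
centroids = np.array([pixels[labels == j].mean(axis=0) for j in (0, 1)])

# WCSS (the elbow-method objective): for each cluster, sum the squared
# distances of its points to that cluster's centroid, then add the clusters up
wcss = sum(((pixels[labels == j] - centroids[j]) ** 2).sum() for j in (0, 1))
print(round(wcss, 1))  # 2846.3
```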
K-means algorithm

• Pros
  • Fast unsupervised machine learning algorithm
  • Useful for large datasets
• Cons
  • Need to choose the value of K
  • Converges to a local minimum
  • Sensitive to the initialization of the centroids
  • Sensitive to the scaling of the dataset and images
  • Sensitive to outliers
  • Only finds “spherical” clusters; does not work if clusters have a complex geometric shape
Mean Shift Algorithm

• Also known as the mode-seeking algorithm
• “Mean shift” signifies shifting to the mean in an iterative way
• An unsupervised learning clustering algorithm
• Does not require the number of clusters to be specified in advance
• The number of clusters depends on the data
• Widely used in real-time image segmentation
• Useful for datasets where the clusters have arbitrary shapes and are not well separated by linear boundaries
Steps for Mean Shift Algorithm

1. Choose a weighting function (kernel) K and a bandwidth W
2. For each point:
   a) Center a search window on that point
   b) Compute the mean of the data in the search window
   c) Re-center the search window at the new mean location
   d) Repeat (b, c) until the mean changes by only a small amount
3. Assign points that lead to nearby modes (means) to the same cluster
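The steps above can be sketched with a flat (uniform) kernel and Euclidean distance (a minimal illustration; real implementations add speed-ups and merge modes that land within a tolerance of each other, which is approximated here by rounding):

```python
import numpy as np

def mean_shift(X, bandwidth, tol=1e-6, max_iters=100):
    """Flat-kernel mean shift: shift every point to its local mode."""
    X = np.asarray(X, dtype=float)
    modes = X.copy()
    for i in range(len(modes)):
        m = modes[i]
        for _ in range(max_iters):
            # points inside the window of radius `bandwidth` centered at m
            in_win = np.linalg.norm(X - m, axis=1) <= bandwidth
            new_m = X[in_win].mean(axis=0)
            if np.linalg.norm(new_m - m) < tol:  # mean stopped moving
                break
            m = new_m
        modes[i] = m
    # points whose modes (nearly) coincide belong to the same cluster
    labels = np.unique(np.round(modes, 3), axis=0, return_inverse=True)[1]
    return labels, modes
```

On two well-separated blobs of points, every point climbs to its blob's mode, so the number of clusters (two) emerges from the data rather than being specified.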
Mean Shift Algorithm

[Figure: window moving over a 2-D feature space]
• Initially, the mean value is a pixel at a random location
• Center the window at the new mean value and recalculate the mean of the data points within the window
• Repeat until there is no significant change (less than a threshold) in the mean value
• Assign the mean value to its cluster, and assign the data points within the window to the corresponding cluster
• Test for other points near the window
Segmentation using Mean Shift Algorithm

• Apply mean shift until all points are assigned to a mean
• If the windows of two means overlap, assign the data points in the overlapped windows to the nearest mean
• Based on its feature value, each data point climbs up the hill within its neighbourhood
• Each hill (peak) represents one cluster
Image Segmentation using Mean Shift Algorithm

• The features of an image can be texture, color, etc.
• Example: the 3 channels of the image are used as features
• Depending on the color model of the image, the channels are RGB, HSV or any other
Mean Shift Algorithm with weighted mean

• Weighted mean of the points within the window centered at the current mean x:
  m(x) = Σᵢ K(xᵢ − x) · xᵢ / Σᵢ K(xᵢ − x)
  where the xᵢ are the data points within the window and K is the kernel
• For a uniform window of radius R:
  K(d) = 1 if d ≤ R, otherwise 0
  where d is the distance from the current mean

[Figure: start point and window in feature space]
Kernel for Weighted Mean

• Gaussian weighted mean: for each data point within the window, apply a Gaussian function of standard deviation σ:
  K(d) = exp(−d² / (2σ²))
  where d is the distance of the point from the current mean
• Summing the Gaussian functions over the data points of the window gives the overall density surface whose modes mean shift climbs
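One update of the Gaussian-weighted mean can be written directly (a sketch of the formula above; `gaussian_shift_step` is an illustrative name, not a library function):

```python
import numpy as np

def gaussian_shift_step(X, mean, sigma):
    """One mean-shift update with a Gaussian kernel of bandwidth sigma."""
    d2 = ((X - mean) ** 2).sum(axis=1)        # squared distances from current mean
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian weight of each point
    return (w[:, None] * X).sum(axis=0) / w.sum()  # weighted mean of the data
```

With a very large σ the weights are nearly uniform and the update moves to the plain mean of all points; with a very small σ distant points get negligible weight and the mean barely moves, which mirrors the bandwidth discussion that follows.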
Size of Window for Mean Shift Algorithm

• Mean shift with a large bandwidth (window size) may ignore the local structure of the dataset
• Mean shift with a small bandwidth may produce several noisy clusters
Bandwidth of Kernel for Weighted Mean

[Figure: Gaussian weight functions for different standard deviations σ (bandwidths): small, optimum and large bandwidth]
Ex: Mean Shift Algorithm
• A 3-bit color image has the following pixel values:
  (5,4,5), (1,3,3), (1,3,3), (2,3,2), (3,3,3), (3,3,3), (4,5,4), (5,4,4), (5,4,4), (5,4,4), (5,4,4), (5,4,4)
• Apply mean shift clustering with a bandwidth of 2 and a flat kernel (uniform window)
Ex: Mean Shift Algorithm
• Take the initial point (2,3,2) and center a window of bandwidth 2 on it
• Use the Manhattan distance
• Points within window 1: (1,3,3), (1,3,3), (2,3,2), (3,3,3), (3,3,3)
• New mean of window 1 = (2, 3, 2.8)

Sr no  R  G  B
 1     5  4  5
 2     1  3  3
 3     1  3  3
 4     2  3  2
 5     3  3  3
 6     3  3  3
 7     4  5  4
 8     5  4  4
 9     5  4  4
 10    5  4  4
 11    5  4  4
 12    5  4  4
Ex: Mean Shift Algorithm
• Re-center the window at (2, 3, 2.8); the same points fall within it, so the mean of window 1 stays (2, 3, 2.8)
• No change in the mean of window 1: it has converged
• Choose a center for window 2 from the remaining points, e.g. (5,4,4)
• Select the points within window 2
Ex: Mean Shift Algorithm
• Points within window 2 (Manhattan distance ≤ 2 from (5,4,4)): (5,4,5), (4,5,4) and the five points (5,4,4)
• New mean of window 2 = (4.9, 4.1, 4.1)
• Re-centering the window does not change the allocation of points to window 2, so it has converged
• All the points are now covered; if not, choose a center for window 3 to cover the remaining points
• If any point in window 1 were closer to the mean of window 2, it would be moved to window 2
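The worked example above can be reproduced with a flat kernel and Manhattan distance (a sketch matching the hand computation; `shift_to_mode` is an illustrative helper name):

```python
import numpy as np

pixels = np.array([[5,4,5],[1,3,3],[1,3,3],[2,3,2],[3,3,3],[3,3,3],
                   [4,5,4],[5,4,4],[5,4,4],[5,4,4],[5,4,4],[5,4,4]], dtype=float)

def shift_to_mode(start, X, bandwidth=2.0):
    """Iterate the window mean (flat kernel, Manhattan distance) to convergence."""
    m = np.asarray(start, dtype=float)
    while True:
        in_win = np.abs(X - m).sum(axis=1) <= bandwidth  # Manhattan distance
        new_m = X[in_win].mean(axis=0)
        if np.allclose(new_m, m):
            return new_m
        m = new_m

mode1 = shift_to_mode([2, 3, 2], pixels)   # window 1 converges to (2.0, 3.0, 2.8)
mode2 = shift_to_mode([5, 4, 4], pixels)   # window 2, approx (4.9, 4.1, 4.1)
```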
Mean Shift Algorithm for color images

[Figure: image and its feature space (color channels) before and after mean shift]
Object Tracking using Mean Shift Algorithm

• Tracking objects is an important application in the field of computer vision
• Use cases are surveillance systems, defence, self-driving cars, etc.
• Steps:
  1. Identify the object in the image to be tracked
  2. Determine the histogram of the object
  3. Determine the histogram of the image in which the object is to be tracked
  4. Apply backprojection of the histogram of the object onto the image
  5. For backprojection, replace each pixel of the image with the corresponding probability (histogram value) of that pixel's intensity in the object to be tracked
Object Tracking using Mean Shift Algorithm

• Pixels with intensity values similar to the object to be tracked receive higher probability values than regions that are not similar
• In each iteration, mean shift computes a center that is biased towards pixels with higher weight (probability)
• Thus the center/mode/mean shifts to the location with high probability
• The updated center location is the location of the object
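Histogram backprojection (steps 2-5 above) can be sketched in NumPy on a single channel, e.g. hue; the patch values and frame here are made-up stand-ins for an object and a video frame:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical single-channel images: the object patch and the current frame
obj = np.full((10, 10), 100, dtype=np.uint8)           # object is all value 100
frame = rng.integers(0, 256, size=(60, 80), dtype=np.uint8)
frame[20:30, 30:40] = 100                              # the object appears here

# Normalized histogram of the object = probability of each intensity value
hist, _ = np.histogram(obj, bins=256, range=(0, 256))
prob = hist / hist.sum()

# Backprojection: replace every frame pixel with the object-histogram
# probability of its intensity; object-like regions get high values
backproj = prob[frame]
```

Mean shift is then run on `backproj`, so the window climbs toward the high-probability (object-like) region.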
Mean Shift Algorithm

Pros:
 Finds the number of modes based on the data values
 Robust to outliers
 Does not assume any prior shape (spherical, elliptical, etc.) for the data clusters
Cons:
 Clustering depends on the window size (bandwidth)
 Computationally more expensive than K-means
 Can identify noisy pixels as clusters
 Finds an arbitrary number of clusters
Comparison of K Means and Mean Shift Algorithm

[Figure: original data, k-means clustering (k=3), and mean shift clustering]