Cvlab 1

The document outlines an experiment aimed at implementing various feature extraction techniques for image classification using Python. It describes several methods such as SIFT, SURF, HOG, CNN, color histograms, LBP, and Gabor filters, along with a pseudo code for the implementation process. The learning outcomes include understanding feature extraction, testing and training images, and familiarization with different algorithms.


Experiment: 1

Aim: Write a program to implement various feature extraction techniques for image classification.

Software Required: Any Python IDE (e.g., PyCharm, Jupyter Notebook, Google Colab)

Description: Several feature extraction techniques are commonly used in image
classification tasks. These techniques capture relevant information from images and
transform it into meaningful representations that machine learning algorithms can use
for classification. Some popular feature extraction techniques are:

Scale-Invariant Feature Transform (SIFT): SIFT is a widely used technique that identifies
keypoints and extracts local invariant descriptors from images. It is robust to changes in scale,
rotation, and illumination.

Speeded-Up Robust Features (SURF): SURF is another technique that detects and describes
local features in images. It is similar to SIFT but computationally more efficient, making it
suitable for real-time applications.

Histogram of Oriented Gradients (HOG): HOG computes the distribution of gradient
orientations in an image. It captures shape and edge information and has been particularly
successful in object detection and pedestrian recognition tasks.

Convolutional Neural Networks (CNN): CNNs are a type of deep learning model that
automatically learn hierarchical features from images. They consist of multiple convolutional
layers that extract low-level to high-level features. CNNs have revolutionized image
classification and achieved state-of-the-art performance in various tasks.
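To make the core operation concrete, here is a toy NumPy sketch of the cross-correlation a single convolutional layer computes. Real CNNs learn the kernel weights from data; the hand-written edge kernel here is purely illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the core operation of a conv layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image with a vertical step edge, and a vertical-edge (Sobel-x) kernel:
image = np.zeros((5, 5))
image[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(image, sobel_x)
print(response)  # large values where the kernel overlaps the edge
```

Stacking many such learned kernels, interleaved with nonlinearities and pooling, is what lets a CNN build up from edges to object parts to whole-object features.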

Color Histograms: Color histograms capture the distribution of colors in an image. They
represent the color content of images by quantizing pixel colors into bins and counting their
occurrences. Color histograms are simple yet effective features for certain types of image
classification problems.
Local Binary Patterns (LBP): LBP encodes the texture information by comparing each pixel's
intensity value with its neighboring pixels. It is commonly used in texture analysis tasks and has
shown good performance in various image classification applications.
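The basic 3x3 LBP operator can be written directly in NumPy; the tiny random image below is only there to keep the sketch self-contained.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: compare the 8 neighbours against the centre pixel
    and pack the comparison bits into an 8-bit code per pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = gray[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

rng = np.random.default_rng(0)
gray = rng.integers(0, 256, size=(6, 6), dtype=np.uint8)
codes = lbp_image(gray)

# A 256-bin histogram of the codes is the texture feature.
hist, _ = np.histogram(codes, bins=256, range=(0, 256))
print(codes.shape, hist.sum())  # (4, 4) 16
```

Variants such as uniform or rotation-invariant LBP reduce the 256 codes to a smaller, more robust set of bins.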


Gabor Filters: Gabor filters are a set of linear filters that capture localized frequency and
orientation information in images. They are commonly used for texture analysis and have been
successfully applied in face recognition and fingerprint recognition tasks.

Deep Convolutional Features: Instead of using pre-defined feature extraction techniques, it is
also common to use pre-trained CNN models (e.g., VGG, ResNet, Inception) and extract
features from intermediate layers. These deep convolutional features retain more high-level
semantics and have been shown to generalize well across different image classification tasks.

Pseudo code/Algorithms/Flowchart/Steps:
1. Import resources and read the image URL.
2. Send a request to the URL and get the image content.
3. Convert the image content into a NumPy array.
4. Decode the array into an OpenCV image.
5. Convert the training image to RGB.
6. Convert the training image to grayscale.
7. Create the test image by applying a scale change and a rotation (to check scale and rotational invariance).
8. Display the training image and the test image.
9. Detect keypoints and compute descriptors.
10. Display the image with and without keypoint sizes.
11. Print the number of keypoints detected in the training image.
12. Print the number of keypoints detected in the query image.
13. To match keypoints, create a Brute-Force Matcher object.
14. Perform matching between the SIFT descriptors of the training image and the test image; matches with shorter distance are the better ones.
15. Display the best matching points.
16. Print the total number of matching points between the training and query images.

Implementation
Course Name: Computer Vision Lab    Course Code: CSP-422

Output:
Learning Outcomes:
Learnt about feature extraction in images.
Learnt how to prepare training and test images for feature extraction.
Learnt about various feature extraction algorithms.
