CV Lab 1
Aim: Write a program to implement various feature extraction techniques for image classification.
Software Required: Any Python IDE (e.g., PyCharm, Jupyter Notebook, Google Colab)
Description: There are several feature extraction techniques commonly used in image
classification tasks. These techniques aim to capture relevant information from images and
transform it into meaningful representations that machine learning algorithms can use for
classification. Some popular feature extraction techniques are:
Scale-Invariant Feature Transform (SIFT): SIFT is a widely used technique that identifies
keypoints and extracts local invariant descriptors from images. It is robust to changes in scale,
rotation, and illumination.
Speeded-Up Robust Features (SURF): SURF is another technique that detects and describes
local features in images. It is similar to SIFT but computationally more efficient, making it
suitable for real-time applications. Note that SURF is patented and is only available in OpenCV
builds that include the non-free contrib modules.
Convolutional Neural Networks (CNN): CNNs are a type of deep learning model that
automatically learns hierarchical features from images. They consist of multiple convolutional
layers that extract low-level to high-level features. CNNs have revolutionized image
classification and achieved state-of-the-art performance in various tasks.
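A trained CNN stacks many learned layers; the NumPy sketch below shows just one hand-written convolution + ReLU + max-pooling stage to illustrate how a feature map is produced. The kernel here is a fixed vertical-edge detector chosen for illustration, not a filter learned from data.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation, as used inside CNN layers."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

# Vertical-edge kernel; in a real CNN these weights would be learned.
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

img = np.zeros((8, 8))
img[:, :4] = 1.0                      # bright left half -> vertical edge
features = max_pool(relu(conv2d(img, kernel)))
print(features.shape)                 # (3, 3) pooled feature map
```

The pooled map responds only where the edge appears, which is exactly the kind of localized, translation-tolerant feature deeper CNN layers build upon.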
Color Histograms: Color histograms capture the distribution of colors in an image. They
represent the color content of images by quantizing pixel colors into bins and counting their
occurrences. Color histograms are simple yet effective features for certain types of image
classification problems.
Local Binary Patterns (LBP): LBP encodes the texture information by comparing each pixel's
intensity value with its neighboring pixels. It is commonly used in texture analysis tasks and has
shown good performance in various image classification applications.
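The basic 8-neighbour LBP at radius 1 can be written directly in NumPy, as sketched below (border pixels are skipped, and neighbours equal to the centre are counted as "greater or equal", one common convention):

```python
import numpy as np

def lbp(gray):
    """Basic 8-neighbour LBP at radius 1 (border pixels are skipped)."""
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Neighbours clockwise from the top-left corner, one bit each.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << bit
    return code

img = np.full((5, 5), 7, dtype=np.uint8)   # flat patch
print(lbp(img))                            # all 255: every neighbour >= centre
```

The feature actually fed to a classifier is normally the histogram of these codes over the image (or over a grid of cells), not the code map itself.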
Gabor Filters: Gabor filters are a set of linear filters that capture localized frequency and
orientation information in images. They are commonly used for texture analysis and have been
successfully applied in face recognition and fingerprint recognition tasks.
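The real (cosine) part of a Gabor kernel follows a standard formula: a Gaussian envelope modulating a sinusoidal carrier along a rotated axis. The sketch below builds one such kernel in NumPy; the parameter values (size, sigma, wavelength, aspect ratio) are illustrative defaults.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0,
                 gamma=0.5, psi=0.0):
    """Real (cosine) Gabor kernel: Gaussian envelope x sinusoidal carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lambd + psi)
    return envelope * carrier

k = gabor_kernel(theta=0.0)
print(k.shape)   # (21, 21); peak response at the centre when psi = 0
```

In practice a bank of such kernels at several orientations and wavelengths is convolved with the image, and statistics of the filter responses serve as texture features; OpenCV's `cv2.getGaborKernel` produces equivalent kernels.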
Pseudo code/Algorithms/Flowchart/Steps:
Import the required libraries and specify the image URL
Send a request to the URL and retrieve the image content
Convert the image content into a NumPy array
Decode the array into an OpenCV image
Convert the training image to RGB
Convert the training image to grayscale
Create a test image by applying scale and rotation changes (to check scale and rotational invariance)
Display the training image and the test image
Detect keypoints and compute SIFT descriptors
Display the image with and without keypoint sizes
Print the number of keypoints detected in the training image
Print the number of keypoints detected in the query image
To match keypoints, create a Brute-Force Matcher object
Perform matching between the SIFT descriptors of the training image and the test image
Keep the matches with the shortest distances, as these are the best matches
Display the best matching points
Print the total number of matching points between the training and query images
Implementation
Course Name: Computer Vision Lab    Course Code: CSP-422
Output:
Learning Outcomes:
Learnt about feature extraction techniques for images.
Learnt how to prepare training and query images for feature extraction and matching.
Learnt about various feature extraction algorithms (SIFT, SURF, CNNs, color histograms, LBP, and Gabor filters).