The document discusses image classification and leaf species recognition using computer vision and machine learning techniques. It describes image classification as a supervised learning problem in which a model is trained on labeled image data to learn how to classify new images. It then discusses building a system that classifies plant leaf images and uses the predictions to provide farmers and gardeners with information to increase productivity. Key steps are extracting features from leaf images, such as texture and shape, and combining different feature descriptors to represent the images effectively for classification.
IMAGE CLASSIFICATION
• What is Image Classification?
• The ability of a machine learning model to classify or label an image into its respective class, using features learned from hundreds of images, is called Image Classification.
• This is a supervised learning problem: we humans must provide training data (a set of images along with their labels) to the machine learning model, so that it learns how to discriminate each image (by learning the pattern behind each image) with respect to its label.

LEAF SPECIES RECOGNITION
• Project Idea: to build an intelligent system trained on a massive dataset of plant leaf images.
• The system predicts the label/class of the plant using Computer Vision techniques and Machine Learning algorithms.
• After predicting the label/class of the captured image, the system searches the web for all the plant-related data.
• The system helps gardeners and farmers increase their productivity and yield by automating tasks in the garden/farm.
• The system applies recent technological advancements such as the Internet of Things (IoT) and Machine Learning to the agricultural domain.
• You could build such a system for your home or garden to monitor your plants using a Raspberry Pi.

CLASSIFICATION PROBLEM
• Plant or Flower Species Classification is one of the most challenging and difficult problems in Computer Vision, for a variety of reasons:
1) Availability of plant datasets - collecting a plant dataset is a time-consuming task.
2) Millions of plant species around the world - to accommodate every such species, we would need to train our model on a very large number of labeled images, which requires tremendous computing power.
3) High inter-class as well as intra-class variation - the same species can look very different across images, while different species can look alike.
4) Fine-grained classification problem - classes differ only in subtle details.
5) Segmentation, view-point, occlusion, illumination etc. - segmenting the leaf region from an image is a challenging task, because we might need to remove the unwanted background and keep only the foreground object (the leaf), which is again difficult due to the shape of the leaf.

FEATURE EXTRACTION
1) Features are the information, or lists of numbers, extracted from an image. These are real-valued numbers (integer, float or binary).
2) When deciding on features that could quantify a leaf, we could think of Texture and Shape as the primary ones.
3) But this approach is less likely to produce good results if we choose only one feature vector. So we need to quantify the image by combining different feature descriptors, so that the combination describes the image more effectively.

GLOBAL FEATURE DESCRIPTORS
• These are feature descriptors that quantify an image globally: they take in the entire image for processing. Some commonly used global feature descriptors are:
1) Shape - Hu Moments, Zernike Moments
2) Texture - Haralick Texture, Local Binary Patterns (LBP)
3) Others - Histogram of Oriented Gradients (HOG), Threshold Adjacency Statistics (TAS)

LOCAL FEATURE DESCRIPTORS
• These are feature descriptors that quantify local regions of an image. Interest points are detected in the entire image, and the image patches/regions surrounding those interest points are considered for analysis. Commonly used local feature descriptors include SIFT, SURF and ORB.
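As a concrete example from the texture group above, the basic 8-neighbour Local Binary Patterns descriptor can be sketched in a few lines of NumPy. This is a minimal, numpy-only sketch for illustration; in practice you would typically use a library implementation such as skimage.feature.local_binary_pattern.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Basic 8-neighbour Local Binary Patterns, summarized as a
    normalized histogram (a global texture feature vector)."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                      # center pixels (borders skipped)
    codes = np.zeros_like(c)
    # neighbour offsets, walked clockwise from the top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes += (nb >= c).astype(np.int32) << bit   # set bit if neighbour >= center
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()               # normalize so the vector is scale-free
```

On a perfectly flat (constant) image every neighbour equals its center, so all 8 bits are set and every code is 255; textured regions spread mass across the other bins.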
COMBINING GLOBAL FEATURES
• There are two popular ways to combine these feature vectors.
• For global feature vectors, we just concatenate each feature vector to form a single global feature vector. This is the approach we will be using in this tutorial.
• For local feature vectors, as well as combinations of global and local feature vectors, we need something called Bag of Visual Words (BOVW). This approach is not discussed in this tutorial, but there are lots of resources to learn the technique. Normally, it uses a vocabulary builder, K-Means clustering, a Linear SVM, and Tf-Idf vectorization.

GLOBAL FEATURE EXTRACTION
We will use a simpler approach to produce a baseline accuracy for our problem; everything should work end-to-end without any error. The folder structure for this example is given below. Create the folders and files as shown.
• Lines 1-8 import the necessary libraries we need to work with.
• Line 11 is used to convert the input image to a fixed size of (500, 500).
• Line 14 is the path to our training dataset.
• Line 17 is the number of trees used to initialize the Random Forests classifier.
• Line 20 is the number of bins for color histograms.
• Line 23 is the train/test split ratio (0.10 means splitting the dataset into 90% train data and 10% test data).
• Line 26 is the seed we keep to reproduce the same results every time we run this script.

FUNCTIONS FOR GLOBAL FEATURE DESCRIPTORS
1) HU MOMENTS
> To extract Hu Moments features from the image, we use the cv2.HuMoments() function provided by OpenCV.
> The argument to this function is the image moments, cv2.moments(), flattened: we compute the moments of the image and convert them to a vector using flatten().
> Before doing that, we convert our color image into a grayscale image, as moments expect images to be grayscale.
2) HARALICK TEXTURES
• To extract Haralick Texture features from the image, we make use of the mahotas library.
The function we will be using is mahotas.features.haralick(). Before calling it, we convert our color image into a grayscale image, as the haralick feature descriptor expects images to be grayscale.
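To make the "concatenate each feature vector" idea concrete, the sketch below computes one shape feature and one color feature and stacks them with np.hstack. It is a numpy-only stand-in for illustration: hu_first_moment and global_feature_vector are hypothetical helper names, and only the first Hu invariant is computed here, whereas the tutorial's actual script would use cv2.HuMoments, mahotas.features.haralick() and cv2-based histograms.

```python
import numpy as np

def hu_first_moment(gray):
    """First Hu invariant (eta20 + eta02), invariant to translation and scale.
    A numpy-only stand-in for cv2.HuMoments(cv2.moments(gray)).flatten()[0]."""
    img = gray.astype(np.float64)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                                   # zeroth raw moment (mass)
    xbar, ybar = (xs * img).sum() / m00, (ys * img).sum() / m00  # centroid
    mu20 = ((xs - xbar) ** 2 * img).sum()             # central moments
    mu02 = ((ys - ybar) ** 2 * img).sum()
    # normalized central moments: eta_pq = mu_pq / m00**(1 + (p+q)/2)
    return (mu20 + mu02) / m00 ** 2

def global_feature_vector(gray, bins=8):
    """Concatenate shape (Hu) and color (histogram) features into one
    global feature vector, as described above."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    hist = hist / hist.sum()                          # normalized intensity histogram
    return np.hstack([np.array([hu_first_moment(gray)]), hist])
```

Because the Hu term is built from central moments, translating the leaf inside the frame does not change it, which is exactly why moment invariants are popular shape descriptors.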