PROJECT REPORT ON "WASTE SEGREGATION MACHINE"

Submitted to the Department of Electronics and Communication Engineering in partial fulfillment of the requirement for the degree of Bachelor of Technology in Electronics and Communication

Submitted by
Prasoon Kumar (1900950100057)
Deevanshi Sharma (1900950310004)

Under the supervision of
Mr. Sanjay Sharma (Assistant Professor)

Mahatma Gandhi Mission's College of Engineering and Technology, Noida
Affiliated to Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh
May 2023

DECLARATION

We hereby declare that this submission is our own work and that, to the best of our knowledge and belief, it contains no material previously published or written by another person, nor material which to a substantial extent has been accepted for the award of any other degree or diploma of the university or other institute of higher learning, except where due acknowledgment has been made in the text.

Signature
Name: Deevanshi Sharma
Roll No.: 1900950310004

Signature
Name: Prasoon Kumar
Roll No.: 1900950100057

Date:

ACKNOWLEDGEMENT

It gives us a great sense of pleasure to present the report of the B.Tech project undertaken during the final year of B.Tech. We owe a special debt of gratitude to Assistant Professor Mr. Sanjay Sharma, Department of Electronics and Communication Engineering, for his constant support and guidance throughout the course of our work. His sincerity, thoroughness and perseverance have been a constant source of inspiration for us. It is only through his cognizant efforts that our endeavors have seen the light of day. We would also like to take this opportunity to acknowledge the contribution of Dr. Shilpi Shukla, HOD, Department of Electronics & Communication Engineering, and all teaching and non-teaching staff of the department for their kind assistance and cooperation during the development of our project. We also acknowledge our friends for their contribution, continuous support and encouragement in the completion of the project.
Signature
Name: Prasoon Kumar
Roll No.: 1900950100057

Signature
Name: Deevanshi Sharma
Roll No.: 1900950310004

Date:

CERTIFICATE

This is to certify that the report entitled "Waste Segregation Machine", which is submitted by Deevanshi Sharma & Prasoon Kumar in partial fulfillment of the requirement for the award of the degree of B.Tech. in the Department of Electronics and Communication Engineering of Dr. A.P.J. Abdul Kalam Technical University, is a record of the candidates' own work carried out by them under my supervision. The matter embodied in this thesis is original and has not been submitted for the award of any other degree.

Date:
Supervisor
Mr. SANJAY SHARMA (ASSISTANT PROFESSOR)
Department of Electronics and Communication
MGM's COET, NOIDA, UP

TABLE OF CONTENTS

DECLARATION
ACKNOWLEDGEMENT
CERTIFICATE
LIST OF ABBREVIATIONS
LIST OF FIGURES
ABSTRACT
CHAPTER 1 INTRODUCTION
  1.1 Background
  1.2 Objectives
CHAPTER 2 LITERATURE SURVEY
  2.1 Related Works
  2.2 Problem Statement
CHAPTER 3 PROJECT METHODOLOGY
  3.1 Hardware Model
  3.2 Work Flow Diagram
  3.3 Block Diagram
CHAPTER 4 MODEL AND ALGORITHM
  4.1 CNN Model
  4.2 Object Detection Model
  4.3 EfficientDet
  4.4 EfficientDet Lite [0-4]
CHAPTER 5 WORKING PRINCIPLE
  5.1 Custom Dataset
  5.2 Working
CHAPTER 6 HARDWARE DESCRIPTION
  6.1 Arduino Uno
    6.1.1 Description of Arduino Uno
    6.1.2 Pin Description of Arduino Uno
  6.2 Webcam
  6.3 DC Motor
  6.4 Metal Sensor
  6.5 Servo Motor
  6.6 Relay
CHAPTER 7 SOFTWARE DESCRIPTION
  7.1 Python IDLE
  7.2 Anaconda Software
  7.3 Haar Cascade Classifier
  7.4 OpenCV Python
  7.5 TensorFlow Lite Python
CHAPTER 8 PROJECT IMPLEMENTATION
  8.1 Algorithm of the Machine
CHAPTER 9 RESULT
  9.1 Model Performance Table
  9.2 Sample Results
CHAPTER 10 FUTURE PROSPECTIVE
REFERENCE
APPENDIX

LIST OF ABBREVIATIONS

IDLE - Integrated Development and Learning Environment
IDE - Integrated Development Environment
USB - Universal Serial Bus
LAN - Local Area Network
GPIO - General Purpose Input Output
CPU - Central Processing Unit
GPU - Graphics Processing Unit
GND - Ground
TXD - Transmitted Data
RXD - Received Data
UART - Universal Asynchronous Receiver and Transmitter

LIST OF FIGURES

Fig 3.1 Modelling of the Machine
Fig 3.2 Work Flow Chart
Fig 3.3 Block Diagram
Fig 4.1 CNN Layers
Fig 4.2 Analysis Times of Models
Fig 4.3 Object Detector Model
Fig 4.4 Feature Extractor
Fig 4.5 EfficientDet
Fig 4.6 Feature Network Design
Fig 5.1 Sample Dataset
Fig 5.2 Working
Fig 6.1 Arduino Uno
Fig 6.2 All about Arduino Uno
Fig 6.3 Pin Diagram
Fig 6.4 Web Camera
Fig 6.5 DC Motor
Fig 6.6 Metal Sensor
Fig 6.7 Servo Motor
Fig 6.8 Relay
Fig 7.1 Haar Cascade Classifier
Fig 9.1 Segregation Machine
Fig 9.2 Metal Segregation
Fig 9.3 Cardboard Object Classified
Fig 9.4 Plastic Object Classified
Fig 9.5 Leaf Dona Classified

ABSTRACT

The huge amount of waste produced today is a major curse for the world. Many people work in dumping grounds segregating waste, which is dangerous and hazardous to their lives. This project implements the proper segregation of waste. Its purpose is to eliminate manual work by putting a machine in place of people for waste segregation. A camera captures an image of the waste; based on the captured image, the CNN model classifies it and the conveyor belt starts moving.
A metal sensor is placed at the end of the machine for segregating metal waste. The machine consists of an Arduino Uno and a web camera module. OpenCV is used for image processing, and Python is used for programming the Arduino. Though the prototype works satisfactorily, some work remains before the machine can be made commercially available; for example, the datasets should be extended to increase the efficiency of the machine.

CHAPTER 1 INTRODUCTION

1.1 Background

Biodegradable substances are degraded naturally by agents such as bacteria, fungi, ultraviolet rays, ozone, oxygen and water. Decomposition is the breaking down of complex organic materials into simple units, which provide the soil with a variety of nutrients. Biodegradable materials are generally non-toxic and do not build up in the environment over long periods; therefore, they are not considered environmental pollutants. Examples of biodegradable materials include anything made from natural materials, such as plants or animals. These biodegradable substances do not harm the ecosystem. Such products include biodegradable plastics, polymers, and household cleaners.

Natural processes cannot degrade non-biodegradable substances, and therefore these substances remain in the environment for a long time without being degraded. Examples of common non-biodegradable materials include plastic, polyethylene, scrap metal, aluminum cans, and glass bottles. Since these substances do not disappear in nature for many years, they are also harmful to the ecosystem. Many substances, such as non-biodegradable metallic substances, pollute natural waters and soils and cause various hazardous problems. The use of non-biodegradable materials harms countries' ecosystems.
Developing countries, in particular, are now paying attention to the use of biodegradable materials.

1.2 Objectives

• The primary objective of this project is to develop a machine that can efficiently segregate waste into three distinct categories: biodegradable, non-biodegradable, and metal waste. By achieving this objective, the project aims to address the pressing need for improved waste management systems.

• To accomplish the waste classification task, this project harnesses Convolutional Neural Network (CNN) techniques. The CNN model is designed to classify waste materials into the predefined categories of biodegradable, non-biodegradable, and metal waste. This approach enables precise sorting and disposal, facilitating environmentally responsible waste management practices.

• The CNN model employed in this project carries out critical steps, including pre-processing and feature extraction, which are fundamental to accurate waste classification. To train and evaluate the model, a dataset of 5430 waste images is utilized, divided with 70% designated for training, 15% for validation, and 15% for testing the performance and generalization capabilities of the model.

• By leveraging machine learning algorithms and computer vision technology, this system aims to streamline waste segregation. Through enhanced efficiency and accuracy, the project endeavours to contribute to effective waste management practices, leading to a cleaner and more sustainable environment. The use of these technologies allows for better resource allocation, reduced environmental impact, and improved recycling efforts.
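The 70/15/15 split described above can be sketched in a few lines of Python. This is an illustrative sketch, not the project's actual code; the function name and the use of shuffled indices are assumptions.

```python
import random

def split_indices(n_images, seed=0):
    """Shuffle image indices and split them 70/15/15 into train/val/test,
    as described for the 5430-image dataset. Integer arithmetic is used so
    the three parts always sum to n_images (the remainder goes to test)."""
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)      # reproducible shuffle
    n_train = n_images * 70 // 100
    n_val = n_images * 15 // 100
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train, val, test = split_indices(5430)
# 5430 images -> 3801 train, 814 validation, 815 test
```

Assigning the rounding remainder to the test set keeps every image accounted for, so no sample is silently dropped.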
CHAPTER 2 LITERATURE SURVEY

For the development of the waste segregation system, a considerable amount of research has been conducted on the problem of segregating waste and eliminating the manual work involved.

2.1 Related Works

1. Problem: Metal segregation in the steel industry using a Programmable Logic Controller (PLC). In a paper published in 2018, the authors addressed the problem of metal segregation in the steel industry. They used simple collectors controlled by Programmable Logic Controllers to separate dry metal and wet metal, with optical sensors incorporated into the process. However, the existing system only segregated metal and non-metal waste, limiting the scope of segregation.
Solution: To enhance the metal segregation process, a proposed solution is to expand the segregation criteria beyond metal and non-metal waste. By incorporating additional sensors and programming logic into the system, a wider range of metals can be accurately identified and segregated, enabling more efficient and precise sorting of metal waste in the steel industry.

2. Problem: Trash classification using image processing and machine learning. A paper published in 2017 focused on trash classification using image processing techniques. The authors utilized features such as HOG, SIFT, Gabor filters, and Fisher kernels, along with machine learning algorithms such as SVM. However, the achieved accuracy of the device was only 70.1%, indicating room for improvement.
Solution: To improve trash classification accuracy, a proposed solution is to incorporate a larger dataset and apply deep learning techniques in addition to the existing feature extraction methods. By training the model on a diverse and extensive dataset, the accuracy of trash classification can be enhanced, potentially reaching around 90%. This improvement would make the device more reliable and effective in waste management applications.
3. Problem: Segregation of scrap material using a PIC microcontroller. A paper published in 2018 discussed the use of a PIC microcontroller for garbage separation. The system consisted of a robot controlled by the PIC microcontroller, with obstacle sensors for separating biodegradable and non-biodegradable waste. However, the complicated connections and devices in the model made it difficult to carry and deploy.
Solution: To address the portability issue, a proposed solution is to simplify the waste segregation device by reducing the complexity of its connections and components. By optimizing the design and incorporating more efficient, lightweight materials, a compact and easy-to-carry waste segregation device can be developed, enhancing its usability and practicality in real-world waste management scenarios.

4. Problem: Model efficiency in object detection. Recent advancements in object detection have led to more accurate models, but large model sizes and computational costs pose challenges for deployment in resource-constrained applications such as robotics and self-driving cars.
Solution: The problem of model efficiency in object detection can be addressed by developing models that are optimized for specific resource constraints. Instead of a one-size-fits-all approach, the different resource constraints of real-world applications, ranging from mobile devices to data centers, should be considered. By designing and training object detection models tailored to different resource constraints, their deployment in real-world scenarios can be facilitated, enabling more efficient and practical applications.

2.2 Problem Statement

From the research papers reviewed, it is evident that efficiency and processing speed are significant challenges faced by previous researchers and students working on waste segregation devices.
To develop an effective waste segregation system, it is crucial to improve the efficiency of the device and address its processing speed limitations.

CHAPTER 3 PROJECT METHODOLOGY

The methodology implemented in this project focuses on reducing human effort, saving time, and aiding individuals working in waste dumping grounds by facilitating the segregation of waste. To achieve this objective, a model has been developed that effectively detects and separates garbage on a conveyor belt. The model incorporates automated cameras that are attached to the conveyor belt and operate at regular intervals to identify the presence of garbage.

3.1 Hardware Model

Fig 3.1 Model of the Waste Segregation System

The model consists of the following elements:

1. Conveyor Belt: The conveyor belt serves as the platform for transporting waste materials through the system. It enables the smooth movement of garbage for efficient detection and segregation.
2. Automated Cameras: Cameras are strategically placed along the conveyor belt to capture images of the waste materials. These automated cameras operate at regular intervals, capturing images of the garbage for further analysis.
3. Image Processing Unit: The captured images are processed by an image processing unit, which applies computer vision techniques and algorithms to analyse the images and identify the presence of garbage.
4. Garbage Detection Algorithm: The image processing unit incorporates a garbage detection algorithm based on image recognition. This algorithm examines the processed images and determines whether each item on the conveyor belt is garbage or not.
5. Segregation Mechanism: Once garbage is detected, a segregation mechanism is activated.
This mechanism could involve physical arms or robotic systems that separate the identified garbage from other items on the conveyor belt, ensuring proper waste management and disposal.

The model provides a systematic and efficient approach for individuals working in waste dumping grounds to streamline their operations. By implementing it, the project contributes to improving waste management practices and promoting a cleaner environment.

3.2 Work Flow Diagram

The waste segregation system follows a streamlined process to efficiently categorize different types of waste objects. The working flow chart provides a clear overview of the system's functionality.

The process begins when the system is powered on and ready to operate. Waste objects are placed on the conveyor belt, which serves as the input to the segregation system. An infrared (IR) sensor detects the presence of waste objects on the conveyor belt. Once a waste object is detected, the system proceeds to the next step.

A webcam captures an image of the waste object on the conveyor belt; this image serves as the input for further analysis and classification. The captured image is then processed by a pretrained model specifically designed for waste classification. The model analyzes the image, taking into account various characteristics and properties of the waste object.

Based on the analysis, the waste object is classified into one of three categories: metal waste, biodegradable waste, or non-biodegradable waste. The classification result determines the subsequent action. If the waste object is identified as metal waste, it is directed to the metal waste bin. If it is classified as biodegradable waste, it is directed to the biodegradable waste bin.
And if it is categorized as non-biodegradable waste, it is directed to the non-biodegradable waste bin.

Once the waste object has been sorted into the respective bin, the process repeats for the next waste object on the conveyor belt. This iterative process ensures that each waste object is accurately classified and sorted according to its nature. By employing this systematic approach, the waste segregation system optimizes the efficiency and accuracy of waste management practices. It significantly reduces the manual effort required for waste segregation and ensures that different types of waste are directed to the relevant disposal or recycling bins. The flow chart of the model is shown in Figure 3.2 below.

Figure 3.2 Flow Chart

3.3 Block Diagram of the Project

Fig. 3.3: The block diagram of the Waste Segregation Machine

The block diagram depicts the main components of the waste segregation system, highlighting their roles and interactions. The diagram showcases the following key blocks:

1. Sensors: The system incorporates two types of sensors. The IR sensor detects the presence of waste objects on the conveyor belt, while the metal sensor identifies metal objects. These sensors provide crucial input for the subsequent processes.
2. Arduino Uno Microcontroller: The Arduino Uno serves as the central control unit of the system. It receives input from the sensors and sends signals to control the operation of other components, coordinating the different actions of the system.
3. Laptop: The laptop is used for model implementation and object classification. It hosts the pretrained model and performs the computations needed for accurate classification of waste objects, acting as the processing unit that classifies the objects based on their characteristics.
4. Webcam: The webcam captures images of the waste objects on the conveyor belt. These images serve as visual input for the classification process, enabling the system to gather visual data for analysis and identification.
5. Conveyor Belt: The conveyor belt transports the waste objects through the system, ensuring a smooth flow. It carries the objects past the sensors and webcam, allowing for detection and image capture at appropriate intervals.
6. Slider: The slider slides the waste objects off the conveyor belt into their respective bins after they have been classified, facilitating the physical separation of the objects based on their classifications and ensuring proper disposal or recycling.

The combination of these components forms a cohesive waste segregation system. The sensors detect the objects, the Arduino Uno microcontroller coordinates the system's operations, the laptop performs object classification, the webcam captures images, the conveyor belt transports the objects, and the slider carries out the final separation. This integrated system streamlines waste management, improving efficiency and accuracy in waste segregation.

CHAPTER 4 MODEL AND ALGORITHM

4.1 CNN Model

Convolutional Neural Networks are a type of multilayer perceptron inspired by the visual cortex of animals. CNNs are widely used in image classification, image recognition, and object tracking, and achieve high performance. In a convolutional layer, the feature maps of the previous layer are convolved with learned kernels, and an activation function is applied to generate the output feature map. Each output map can combine convolutions over multiple input maps. In general, the convolutional layer is formulated as in Equation 1:

x_j^l = f( sum_{i in M_j} x_i^{l-1} * k_{ij}^l + b_j^l )    (1)

Here, M_j represents a selection of input maps, x_i^{l-1} is input map i of the previous layer, k_{ij}^l is the kernel connecting input map i to output map j, b_j^l is the bias of output map j, and f is the activation function. If output maps j and k both draw on input map i, the kernels applied to map i will differ for output maps j and k.
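Equation 1 can be made concrete with a minimal single-map example. The sketch below is pure Python (no NumPy), uses a "valid" sliding window, and takes ReLU as the activation f; it illustrates the formula only and is not the project's training code.

```python
def conv2d_valid(x, k, b=0.0):
    """One input map, one output map: out = f(x * k + b) with f = ReLU,
    i.e. Equation 1 with a single input map in M_j. 'Valid' padding:
    the kernel stays fully inside the input."""
    kh, kw = len(k), len(k[0])
    out = []
    for r in range(len(x) - kh + 1):
        row = []
        for c in range(len(x[0]) - kw + 1):
            s = b
            for i in range(kh):
                for j in range(kw):
                    s += x[r + i][c + j] * k[i][j]
            row.append(max(0.0, s))  # ReLU activation f(.)
        out.append(row)
    return out

# A 3x3 map convolved with a 2x2 diagonal kernel:
feature_map = conv2d_valid([[1, 2, 3],
                            [4, 5, 6],
                            [7, 8, 9]],
                           [[1, 0],
                            [0, 1]])
# feature_map == [[6.0, 8.0], [12.0, 14.0]]
```

A real convolutional layer sums such products over every input map in M_j and learns a separate kernel for each (input map, output map) pair, exactly as the indices i and j in Equation 1 indicate.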
The convolutional architectures used in the study are listed below.

AlexNet: a deep learning architecture proposed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. This deep convolutional neural network has 25 layers: 5 convolutional layers, three max-pooling layers, two dropout layers, three fully connected layers, seven ReLU layers, two normalization layers, a softmax layer, and the input and classification (output) layers. The image processed in the input layer is 227x227x3. In the last layer, classification is performed and the class of the input image is produced. The layers of AlexNet are shown in Figure 4.1.

The GoogleNet algorithm consists of 144 layers, including convolutional layers, max-pooling layers, a softmax layer, fully connected layers, ReLU layers, the input layer, and the output layer. The image fed to the input layer is 224x224x3. Filters of size 1x1, 3x3, and 5x5 are used in the convolutional layers, with pooling of size 3x3. Linear activation is used for activation. The filter structures used are shown in the figure.

The architecture of SqueezeNet consists of a standalone convolutional layer (conv1), eight fire modules (fire2-9), and a final convolutional layer (conv10).

Fig 4.1 CNN layers

ShuffleNet has lower complexity and fewer parameters compared to the other CNN architectures. Moreover, it is suitable for low-power mobile devices because depthwise convolution is applied only to the bottleneck feature map.

[Figure: GoogleNet architecture, SqueezeNet architecture, and ShuffleNet architecture]

When the training times of the analyses are examined, as shown in Figure 4.2, the fastest training was the AlexNet analysis at 55 minutes.
After that, SqueezeNet completed training in 58 minutes and ShuffleNet in 68 minutes. The longest training was GoogleNet at 263 minutes. As the number of iterations increases, the training process becomes longer and takes much more time. The performance results of the CNN models can be found in Table 3.

Figure 4.2: Analysis times of the models.

When examining the performance results of the CNN models, AlexNet has the lowest classification rate at 96.33%, while the ShuffleNet model provides the most effective classification accuracy at 98.73%. Looking at the accuracy scores, the ShuffleNet architecture has the highest and the AlexNet architecture the lowest. In terms of specificity, GoogleNet has the highest value at 0.9924, while AlexNet has the lowest at 0.9570. In terms of precision, GoogleNet achieves the highest value with 0.9923, followed by ShuffleNet with 0.980, SqueezeNet with 0.9750, and AlexNet with 0.9575. The classification confusion matrices of AlexNet, GoogleNet, ShuffleNet, and SqueezeNet show that the CNN models' ability to discriminate between biodegradable and non-biodegradable materials was close.

4.2 Object Detection Model

Object detection is a computer vision technique that involves identifying and localizing objects of interest within an image. In the context of waste segregation, the objects of interest are the different types of waste materials, such as plastic bottles, cardboard boxes, and cans. To detect and classify these waste materials, we developed an object detection model using the EfficientDet Lite architecture and transfer learning. EfficientDet Lite is a lightweight version of EfficientDet, a state-of-the-art object detection architecture that achieves high accuracy with fewer computational resources.
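One ingredient behind EfficientDet's efficiency is its weighted bidirectional feature fusion, discussed further in Section 4.3. The "fast normalized fusion" from the EfficientDet paper can be sketched for a single feature value as follows; the inputs and weights here are made up for illustration.

```python
def fast_normalized_fusion(inputs, weights, eps=1e-4):
    """EfficientDet's 'fast normalized fusion' at one feature position:
    O = sum(w_i * I_i) / (eps + sum(w_i)), with each learnable weight
    clamped to be non-negative (ReLU), so the output is a normalized
    blend of the input features."""
    w = [max(0.0, wi) for wi in weights]
    return sum(wi * xi for wi, xi in zip(w, inputs)) / (eps + sum(w))

# Two feature maps contributing one value each, equal weights -> average:
blended = fast_normalized_fusion([2.0, 4.0], [1.0, 1.0])
# blended is approximately 3.0
```

The paper's motivation for this form is that it avoids the softmax normalization of the weights while keeping each weight's contribution bounded between 0 and 1.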
Transfer learning is a machine learning technique that involves reusing pre-trained models and fine-tuning them on new datasets to improve their performance.

Figure 4.3 Object Detector Model

As shown in Figure 4.3, the backbone of an object detector performs feature extraction. Low-level features have high resolution but are less important, whereas high-level features have low resolution but carry the important information. The feature network combines the features extracted by the backbone. Then the detection head, with a softmax classifier and a box regressor, draws the bounding box around the object and labels it.

Figure 4.4 Feature Extractor

Backbone: In Figure 4.4, the backbone module extracts the essential features at different resolutions, and the neck module fuses the features of different resolutions. Finally, multiple head modules perform the detection of objects at different resolutions. Backbones play an important role in object detectors: the performance of an object detector relies heavily on the features extracted by its backbone. For example, a large accuracy increase can be obtained by simply replacing a ResNet-50 [8] backbone with a stronger network, e.g., ResNet-101 or ResNet-152.

4.3 EfficientDet

EfficientDet is a type of object detection model which utilizes several optimization and backbone tweaks, such as the use of a BiFPN, and a compound scaling method that uniformly scales the resolution, depth and width of all backbones, feature networks and box/class prediction networks at the same time.

Figure 4.5 EfficientDet Architecture

Figure 4.5 shows the overall architecture of EfficientDet, which largely follows the one-stage detector paradigm [24, 30, 20, 21]. ImageNet-pretrained EfficientNets are employed as the backbone network.
The BiFPN serves as the feature network; it takes level 3-7 features {P3, P4, P5, P6, P7} from the backbone network and repeatedly applies top-down and bottom-up bidirectional feature fusion. The fused features are fed to a class and box network to produce object class and bounding box predictions respectively. Similar to [21], the class and box network weights are shared across all levels of features.

Feature fusion seeks to combine representations of a given image at different resolutions. Typically, the fusion uses the last few feature layers from the ConvNet, but the exact neural architecture may vary.

Fig 4.6 Feature Network Design

In Fig 4.6, FPN is a baseline way to fuse features with a top-down flow. PANet allows the feature fusion to flow both backwards and forwards, from smaller to larger resolution. NAS-FPN is a feature fusion technique that was discovered through neural architecture search, and it certainly does not look like the first design one might think of. The EfficientDet paper uses "intuition" (and presumably many, many development sets) to edit the structure of NAS-FPN and settle on the BiFPN, a bidirectional feature pyramid network. The EfficientDet model stacks these BiFPN blocks on top of each other; the number of blocks varies in the model scaling procedure. Additionally, the authors hypothesize that certain features and feature channels might vary in how much they contribute to the final prediction, so they attach a set of learnable weights to each fused input.

4.4 EfficientDet Lite [0-4]

EfficientDet Lite [0-4] is a series of lightweight object detection models in the EfficientDet family. These models are designed to balance accuracy and computational efficiency, making them well suited to resource-constrained devices such as mobile phones and edge devices.
They are based on a compound scaling technique that allows for efficient model architecture design. The number in the model's name, ranging from 0 to 4, indicates the level of model complexity and accuracy, with 0 being the lightest and 4 being the most accurate but also the most computationally demanding.

The main advantage of EfficientDet Lite models is their reduced model size and computational requirements, allowing for real-time object detection even on devices with limited computing power. Despite their lightweight nature, these models still maintain competitive accuracy compared to their heavier counterparts. EfficientDet Lite models leverage efficient design choices, such as depthwise separable convolutions and efficient feature fusion, to achieve a good trade-off between model size, inference speed, and detection performance. This makes them particularly suitable for applications where fast and accurate object detection is required on devices with limited resources.

The EfficientDet Lite models can be trained on large-scale object detection datasets using techniques such as transfer learning to improve their performance on specific tasks or domains. Additionally, they can be deployed using frameworks like TensorFlow Lite for seamless integration into mobile and edge applications. Overall, EfficientDet Lite [0-4] models offer an efficient and lightweight solution for object detection tasks, providing a range of options that cater to different trade-offs between model complexity, accuracy, and computational requirements.

CHAPTER 5 WORKING PRINCIPLE

The waste segregation model operates based on the following working principles:

1. Object Detection: The model utilizes object detection techniques to identify and locate waste objects within an image. It analyzes the input image and detects the presence of waste materials by drawing bounding boxes around them.
2. Image Classification: Once the waste objects are detected, the model performs image classification to determine the type of waste material present. It assigns a class label to each detected object, such as metal, biodegradable, or non-biodegradable, based on its visual characteristics.

3. Fine-tuning with Transfer Learning: The model starts with a pre-trained base model, such as EfficientDet-Lite, which has already learned from a large dataset. To adapt the model to the specific waste segregation task, it is fine-tuned on a custom dataset of waste material images. This fine-tuning process updates the model's weights and parameters to enhance its ability to detect and classify waste objects accurately.

4. Real-time Operation: The model operates in real time, allowing for the timely detection and classification of waste objects as they move along the conveyor belt. This ensures efficient and continuous waste segregation without significant delays or interruptions.

5. Integration of Hardware Components: The model is integrated with various hardware components, including sensors, an Arduino Uno microcontroller, a webcam, a conveyor belt, and a slider. These components work together to enable the smooth flow of waste objects, capture their images, provide input to the model, and physically separate the objects into their respective waste bins.

6. Continuous Process: The waste segregation model operates in a continuous loop, where waste objects are placed on the conveyor belt, detected by sensors, and classified by the model. The categorized waste objects are then directed to the appropriate bins, and the process repeats for the next object.

By applying these working principles, the waste segregation model effectively detects, classifies, and segregates waste materials, contributing to efficient waste management practices and reducing manual effort.
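The continuous loop described in these principles can be simulated with a short routing sketch. The three labels and bins below are the report's categories; everything else (the function name, the manual-review fallback for unrecognized labels) is illustrative, not the project's actual code.

```python
BIN_FOR_LABEL = {
    "metal": "metal bin",
    "biodegradable": "biodegradable bin",
    "non-biodegradable": "non-biodegradable bin",
}

def run_belt(detections):
    """Simulate the continuous loop: each (object, predicted label) pair
    coming off the classifier is routed to its bin; labels the model does
    not recognize are set aside for manual review."""
    bins = {name: [] for name in BIN_FOR_LABEL.values()}
    manual_review = []
    for obj, label in detections:
        if label in BIN_FOR_LABEL:
            bins[BIN_FOR_LABEL[label]].append(obj)
        else:
            manual_review.append(obj)
    return bins, manual_review

bins, manual_review = run_belt([
    ("tin can", "metal"),
    ("leaf plate", "biodegradable"),
    ("plastic bottle", "non-biodegradable"),
    ("unknown object", "glass"),   # a label the model was not trained on
])
# bins["metal bin"] == ["tin can"]; manual_review == ["unknown object"]
```

In the physical machine the "append to bin" step corresponds to the Arduino driving the slider, but the routing logic is the same lookup from predicted label to destination.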
5.1 Custom Dataset

The current custom dataset used for training and testing the waste segregation model comprises a total of 561 images in JPG format, accompanied by 561 corresponding XML annotation files. This dataset serves as the foundation for training and evaluating the model's performance. To ensure effective model training and evaluation, the dataset is divided into two subsets: a training dataset and a testing dataset.

The training dataset consists of 382 images and their respective 382 annotation files. This subset is utilized for training the waste segregation model. During the training process, the model learns from the images and their annotations, gradually improving its ability to accurately classify and categorize waste objects.

The testing dataset, on the other hand, contains 179 images and their corresponding 179 annotation files. This subset is used to evaluate the model's performance after it has been trained. The testing dataset serves as an independent measure of the model's effectiveness in accurately segregating waste objects. By evaluating the model on unseen data, it is possible to gauge its generalization capabilities and assess its ability to handle real-world scenarios.

The division of the dataset into training and testing subsets allows for robust model development and performance assessment. The training dataset facilitates the model's learning process, while the testing dataset provides a reliable measure of its performance on unseen data. This approach ensures that the waste segregation model is well-trained and capable of accurately classifying waste objects in various situations.

Figure 5.1 Sample Dataset

5.2 Working

The waste segregation model works by employing a combination of computer vision and machine learning techniques to accurately classify and segregate waste objects. Here is a brief explanation of its working:

1.
Image Acquisition: The model receives input images of waste objects through a camera or image sensor. These images capture the waste objects placed on the conveyor belt.

2. Preprocessing: The acquired images undergo pre-processing steps to enhance their quality and suitability for classification. This may include resizing, normalization, and noise reduction techniques to ensure consistent and reliable data.

3. Object Detection: The pre-processed images are fed into the model, which utilizes object detection algorithms to identify and locate waste objects within the images. It identifies the regions of interest where waste objects are present.

4. Feature Extraction: The model extracts relevant features from the detected waste objects. These features capture the distinctive characteristics and patterns that differentiate different types of waste, such as texture, shape, or colour.

5. Classification: Using machine learning algorithms, specifically transfer learning on a pre-trained EfficientDet-Lite model, the extracted features are used to classify the waste objects into predefined categories. The model has been fine-tuned on a custom dataset to learn the specific characteristics of waste materials.

6. Decision-Making: Based on the classification results, the model makes decisions on how to segregate the waste objects. If an object is classified as metal waste, it is directed to the metal waste bin. If it is categorized as biodegradable waste, it goes into the biodegradable waste bin. Similarly, non-biodegradable waste objects are directed to the respective bin.

7. Continuous Operation: The model operates continuously, capturing images of waste objects as they move along the conveyor belt. It processes each image, classifies the waste objects, and directs them to the appropriate bins. This process repeats for each object on the conveyor belt.
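As a concrete illustration of the pre-processing step above: detectors such as EfficientDet-Lite0 expect a fixed square input (320x320 pixels), so each camera frame must be resized. The sketch below, with assumed function and parameter names, computes the scale factor and padding for an aspect-preserving ("letterbox") resize:

```python
# Sketch of the resize arithmetic used in pre-processing (names are assumptions).
# The frame is scaled to fit inside an input_size x input_size square, and the
# remaining space is split evenly as padding so the aspect ratio is preserved.

def letterbox_params(frame_w, frame_h, input_size=320):
    """Return (scale, pad_x, pad_y) for an aspect-preserving resize."""
    scale = min(input_size / frame_w, input_size / frame_h)
    new_w, new_h = int(frame_w * scale), int(frame_h * scale)
    pad_x = (input_size - new_w) // 2  # left/right padding in pixels
    pad_y = (input_size - new_h) // 2  # top/bottom padding in pixels
    return scale, pad_x, pad_y

# A 640x480 camera frame scaled into a 320x320 detector input:
print(letterbox_params(640, 480))  # → (0.5, 0, 40)
```

The same scale and padding values are also needed afterwards, to map the detector's bounding boxes back onto the original frame.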
Figure 5.2 (a)

Object detection process: The model detects the object using ML and image processing, as shown in the figure below.

Figure 5.2 (b)

Figure 5.2 (c)

An LED display board is connected to the waste segregation model to provide real-time updates. It illuminates when an object is detected, captured, and classified, offering a quick visual indication of each stage of the process. This allows for efficient monitoring and ensures that the user can easily track the progress and results of the waste segregation model, as shown below:

Figure 5.2 (d)

Once the waste is detected, the conveyor belt initiates its movement, transporting the waste along its path. As the waste reaches the end of the conveyor belt, the segregator mechanism comes into action, pushing the waste into the corresponding bin designated for that type of waste. To identify metal waste, a dedicated metal sensor is employed, as depicted in Figure 5.6. The metal sensor utilizes specific detection techniques to identify the presence of metal in the waste. This enables the system to accurately identify and segregate metal waste from other types of waste materials.

Figure 5.2 (e)

CHAPTER 6 HARDWARE DESCRIPTION

For designing the proposed model of the waste segregation machine, some hardware and software components are needed. The hardware components required for this system are given below.

6.1 Arduino UNO

Arduino Uno is a microcontroller board based on the ATmega328P (datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator (CSTCE16M0V53-R0), a USB connection, a power jack, an ICSP header and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable or power it with an AC-to-DC adapter or battery to get started.

6.1.1 Description of Arduino UNO
Fig 6.2 All about Arduino Uno

Power USB: The Arduino board can be powered by using the USB cable from your computer. All you need to do is connect the USB cable to the USB connection.

Power (Barrel Jack): Arduino boards can be powered directly from the AC mains power supply by connecting it to the Barrel Jack.

Voltage Regulator: The function of the voltage regulator is to control the voltage given to the Arduino board and stabilize the DC voltages used by the processor and other elements.

Crystal Oscillator: The crystal oscillator helps Arduino in dealing with time issues. How does Arduino calculate time? The answer is: by using the crystal oscillator. The number printed on top of the Arduino crystal is 16.000H9H. It tells us that the frequency is 16,000,000 Hertz, or 16 MHz.

Arduino Reset: You can reset your Arduino board, i.e., start your program from the beginning. You can reset the UNO board in two ways: first, by using the reset button on the board; second, by connecting an external reset button to the Arduino pin labelled RESET.

Pins (3.3V, 5V, GND, Vin):
*3.3V - Supplies 3.3 output volts.
*5V - Supplies 5 output volts. Most of the components used with the Arduino board work fine with 3.3 volts and 5 volts.
*GND (Ground) - There are several GND pins on the Arduino, any of which can be used to ground your circuit.
*Vin - This pin can also be used to power the Arduino board from an external power source, like the AC mains power supply.

Analog pins: The Arduino UNO board has six analog input pins, A0 through A5. These pins can read the signal from an analog sensor, like a humidity sensor or temperature sensor, and convert it into a digital value that can be read by the microprocessor.

Power LED indicator: This LED should light up when you plug your Arduino into a power source, to indicate that your board is powered up correctly.
If this light does not turn on, then there is something wrong with the connection.

Digital I/O: The Arduino UNO board has 14 digital I/O pins, of which 6 provide PWM (Pulse Width Modulation) output. These pins can be configured to work as input digital pins to read logic values (0 or 1) or as digital output pins to drive different modules like LEDs, relays, etc. The pins labeled "~" can be used to generate PWM.

AREF: AREF stands for Analog Reference. It is sometimes used to set an external reference voltage (between 0 and 5 Volts) as the upper limit for the analog input pins.

6.1.2 Pin description of Arduino UNO

[ATmega328P pin diagram: 28-pin DIP showing ports PB0-PB7, PC0-PC6 and PD0-PD7 with their alternate functions, plus VCC, GND, AVCC and AREF pins.]

Fig.6.3: Pin diagram of Arduino UNO

6.2 Webcam

A webcam is a video camera that feeds or streams an image or video in real time to or through a computer to a computer network, such as the Internet. Webcams are typically small cameras that sit on a desk, attach to a user's monitor, or are built into the hardware. Webcams can be used during a video chat session involving two or more people, with conversations that include live audio and video.

Webcam software enables users to record a video or stream the video on the Internet. As video streaming over the Internet requires a lot of bandwidth, such streams usually use compressed formats. The maximum resolution of a webcam is also lower than that of most handheld video cameras, as higher resolutions would be reduced during transmission. The lower resolution enables webcams to be relatively inexpensive compared to most video cameras, but the effect is adequate for video chat sessions.
The term "webcam" (a clipped compound) may also be used in its original sense of a video camera connected to the Web continuously for an indefinite time, rather than for a particular session, generally supplying a view for anyone who visits its web page over the Internet. Some of them, for example those used as online traffic cameras, are expensive, rugged professional video cameras.

Fig.6.4: Webcam

6.3 DC Motor

A DC motor is any of a class of rotary electrical motors that converts direct current electrical energy into mechanical energy. The most common types rely on the forces produced by magnetic fields. Nearly all types of DC motors have some internal mechanism, either electromechanical or electronic, to periodically change the direction of current in part of the motor.

Specifications of DC Motor:
*Standard 130 Type DC motor
*Operating Voltage: 4.5 V to 9 V
*Recommended/Rated Voltage: 6 V
*Current at No Load: 70 mA (max)
*No-load Speed: 9000 rpm
*Loaded Current: 250 mA (approx.)
*Rated Load: 10 g*cm
*Motor Size: 27.5 mm x 20 mm x 15 mm
*Weight: 17 grams

Fig.6.5: DC Motor

6.4 Metal Sensor

Inductive proximity sensors can only detect metal targets; an inductive proximity sensor is used as the metal sensor in our project. They do not detect non-metal targets such as plastic, wood, paper, and ceramic. Unlike photoelectric sensors, this allows an inductive proximity sensor to detect a metal object through opaque plastic.

Specifications:
*Sensing distance: 2 mm ± 10%, 4 mm ± 10%, 5 mm ± 10%, 8 mm ± 10%, 12 mm ± 10%
*Differential Travel: max. 10% of sensing distance
*Detectable Object: Ferrous metal
*Power Supply Voltage (Operating Voltage Range): 12 to 24 VDC, ripple (p-p): max.
10%
*Current Consumption: 15 mA max.
*Indicators: Operation indicator (red)

Fig 6.6 Metal Sensor

6.5 SERVO MOTOR

A servomotor (or servo motor) is a rotary actuator or linear actuator that allows for precise control of angular or linear position, velocity and acceleration. It consists of a suitable motor coupled to a sensor for position feedback. It also requires a relatively sophisticated controller, often a dedicated module designed specifically for use with servomotors. Servomotors are not a specific class of motor, although the term servomotor is often used to refer to a motor suitable for use in a closed-loop control system.

FEATURES

The servo motor is specialized for high-response, high-precision positioning. As a motor capable of accurate rotation angle and speed control, it can be used for a variety of equipment.

Closed-Loop Control: A rotation detector (encoder) is mounted on the motor and feeds the rotation position/speed of the motor shaft back to the driver. The driver calculates the error between the pulse signal or analog voltage (position command/speed command) from the controller and the feedback signal (current position/speed), and controls the motor rotation so the error becomes zero. The closed-loop control method is achieved with a driver, motor and encoder, so the motor can carry out highly accurate positioning operations.

Fig.6.7: Servo Motor

6.6 RELAY

A relay is a simple electromechanical switch. While we use normal switches to close or open a circuit manually, a relay is also a switch that connects or disconnects two circuits. But instead of a manual operation, a relay uses an electrical signal to control an electromagnet, which in turn connects or disconnects another circuit. Relays can be of different types, like electromechanical and solid state; electromechanical relays are frequently used. Let us see the internal parts of this relay before learning about its working. Although many different types of relays exist, their working is the same.
An electromagnetic relay is composed of an electromagnet, armature, spring, movable contact and a stationary contact. A relay can handle the high power needed to directly control a load; the difference between the control and load circuits is one of voltage.

Fig 6.8 Relay (contacts: Normally Closed, Common, Normally Open)

CHAPTER 7 SOFTWARE DESCRIPTION

7.1 Python IDLE

IDLE (Integrated Development and Learning Environment) is an integrated development environment (IDE) for Python. The Python installer for Windows contains the IDLE module by default. IDLE is not available by default in Python distributions for Linux; it needs to be installed using the respective package managers. IDLE can be used to execute a single statement, just like the Python Shell, and also to create, modify and execute Python scripts. IDLE provides a fully-featured text editor for creating Python scripts that includes features like syntax highlighting, auto-completion and smart indent. It also has a debugger with stepping and breakpoint features.

Python is an interpreted language, meaning it executes code line by line. Python provides a Python Shell (also known as the Python Interactive Shell), which is used to execute a single Python command and get the result. The Python Shell waits for an input command from the user. As soon as the user enters the command, it executes it and displays the result.

7.2 Anaconda Software

Anaconda is an open-source distribution of the Python and R programming languages for data science that aims to simplify package management and deployment. Package versions in Anaconda are managed by the package management system, conda, which analyzes the current environment before executing an installation to avoid disrupting other frameworks and packages. The Anaconda distribution comes with over 250 packages automatically installed. Over 7,500 additional open-source packages can be installed from PyPI, as well as the conda package and virtual environment manager.
It also includes a GUI (graphical user interface), Anaconda Navigator, as a graphical alternative to the command-line interface. Anaconda Navigator is included in the Anaconda distribution and allows users to launch applications and manage conda packages, environments and channels without using command-line commands. Navigator can search for packages, install them in an environment, run the packages and update them.

The big difference between conda and the pip package manager is in how package dependencies are managed, which is a significant challenge for Python data science. When pip installs a package, it automatically installs any dependent Python packages without checking if these conflict with previously installed packages. It will install a package and any of its dependencies regardless of the state of the existing installation. Because of this, a user with a working installation of, for example, TensorFlow can find that it stops working after using pip to install a different package that requires a different version of the dependent NumPy library than the one used by TensorFlow. In some cases, the package may appear to work but produce different results in execution.

In contrast, conda analyzes the current environment, including everything currently installed, and, together with any version limitations specified (e.g., the user may wish to have TensorFlow version 2.0 or higher), works out how to install a compatible set of dependencies, and shows a warning if this cannot be done.

Open-source packages can be individually installed from the Anaconda repository, Anaconda Cloud (anaconda.org), or the user's own private repository or mirror, using the conda install command. Anaconda Inc. compiles and builds the packages available in the Anaconda repository itself, and provides binaries for Windows 32/64-bit, Linux 64-bit and macOS 64-bit.
Anything available on PyPI may be installed into a conda environment using pip, and conda will keep track of what it has installed itself and what pip has installed.

7.3 HAAR CASCADE CLASSIFIERS

A Haar Cascade is basically a classifier which is used to detect the object for which it has been trained, from the source. The Haar Cascade is trained by superimposing the positive image over a set of negative images. The training is generally done on a server and in various stages. Object Detection using Haar feature-based cascade classifiers is an effective object detection method proposed by Paul Viola and Michael Jones in their 2001 paper, "Rapid Object Detection using a Boosted Cascade of Simple Features". It is a machine learning based approach where a cascade function is trained from a lot of positive and negative images. It is then used to detect objects in other images; here we will work with face detection.

Initially, the algorithm needs a lot of positive images (images of faces) and negative images (images without faces) to train the classifier. Then we need to extract features from them. For this, the Haar features shown in the image below are used. They are just like our convolutional kernel: each feature is a single value obtained by subtracting the sum of pixels under the white rectangle from the sum of pixels under the black rectangle.

Fig 7.1: An example of Haar Cascade features: (a) Edge Features, (b) Line Features, (c) Four-rectangle features

Now, all possible sizes and locations of each kernel are used to calculate lots of features. (Just imagine how much computation this needs: even a 24x24 window results in over 160,000 features.) For each feature calculation, we need to find the sum of the pixels under the white and black rectangles.
To solve this, they introduced the integral image. However large your image, it reduces the calculations for a given pixel to an operation involving just four pixels. Nice, isn't it? It makes things superfast.

But among all these features we calculated, most are irrelevant. For example, consider the image below. The top row shows two good features. The first feature selected seems to focus on the property that the region of the eyes is often darker than the region of the nose and cheeks. The second feature selected relies on the property that the eyes are darker than the bridge of the nose. But the same windows applied to the cheeks or any other place are irrelevant. So how do we select the best features out of 160,000+? It is achieved by Adaboost.

7.4 Arduino IDE

The Arduino Integrated Development Environment (IDE) is a software platform used for programming and developing applications for Arduino microcontrollers. It provides a user-friendly interface for writing, compiling, and uploading code to Arduino boards. Here are some key features and components of the Arduino IDE:

1. Code Editor: The Arduino IDE provides a text-based code editor where you can write and edit your Arduino sketches. It supports the C/C++ programming language and provides syntax highlighting to make the code more readable.

2. Sketches: In the Arduino IDE, programs are referred to as "sketches". A sketch consists of two required functions: setup() and loop(). The setup() function is called once when the microcontroller starts, and the loop() function runs repeatedly after the setup() function completes.

3. Libraries: The Arduino IDE includes a library manager that allows you to easily install and manage libraries. Libraries provide pre-written code that can be used to simplify complex tasks and enable additional functionality in your Arduino projects.

4.
Serial Monitor: The Arduino IDE has a built-in Serial Monitor tool that allows you to communicate with the Arduino board via the serial port. It enables you to send and receive data between the Arduino and your computer, making it useful for debugging and testing purposes.

5. Board Manager: The Board Manager is a feature in the Arduino IDE that allows you to install and configure board definitions for various Arduino-compatible microcontrollers. It provides a selection of boards with different specifications, such as the Arduino Uno, Arduino Mega, or Arduino Nano.

6. Upload and Compilation: The Arduino IDE offers a simple one-click upload feature that compiles your code and uploads it to the connected Arduino board. It handles the compilation process, generates the necessary machine code, and transfers it to the microcontroller.

Overall, the Arduino IDE provides a beginner-friendly and intuitive development environment for programming Arduino boards. It simplifies the process of writing, compiling, and uploading code, making it accessible to individuals with minimal programming experience.

7.5 OPENCV-PYTHON

OpenCV (Open Source Computer Vision) is an open-source library that provides a wide range of computer vision and image processing functions. It is written in C++ and offers interfaces for various programming languages, including Python. OpenCV for Python enables developers to work with images, videos, and real-time computer vision applications. Here's an overview of OpenCV for Python:

1. Installation: To use OpenCV in Python, you need to install the OpenCV package. You can install it using pip, the Python package manager, by running the command pip install opencv-python. Additionally, there are other optional packages, like opencv-contrib-python, that provide additional functionality.

2. Image and Video Manipulation: OpenCV for Python provides functions to read, write, and manipulate images and videos.
You can load images and videos from files or capture them from cameras. OpenCV supports a variety of image formats and provides tools for resizing, cropping, rotating, and applying various filters and transformations to images and videos.

3. Image Processing: OpenCV offers a wide range of image processing techniques and algorithms. You can perform operations such as image thresholding, edge detection, image segmentation, morphological operations, and noise reduction. These functions allow you to enhance images, extract meaningful information, and pre-process images before further analysis.

4. Object Detection and Tracking: OpenCV includes pre-trained models and algorithms for object detection and tracking. You can use popular algorithms like Haar cascades or more advanced methods like deep-learning-based object detection frameworks (such as YOLO or SSD) to detect and track objects in images or videos.

5. Feature Extraction and Matching: OpenCV provides algorithms for feature extraction and matching, including techniques like SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features). These algorithms allow you to detect and match key points in images, which can be useful for tasks like image stitching, object recognition, and augmented reality.

6. Camera Calibration and 3D Reconstruction: OpenCV supports camera calibration, allowing you to calibrate cameras and correct for distortions. It also provides functions for 3D reconstruction from multiple images, enabling you to create 3D models or perform depth estimation.

7. Integration and Visualization: OpenCV for Python integrates well with other Python libraries and frameworks. You can easily combine OpenCV with libraries like NumPy and Matplotlib for efficient array manipulation and visualization. This allows you to process and analyse images using a variety of Python tools and techniques.
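To make one of the image-processing operations listed above concrete, here is a dependency-free sketch of binary thresholding on a tiny grayscale image stored as nested lists. In practice this is a single call to cv2.threshold; the pure-Python version below only illustrates the idea:

```python
# Minimal illustration of binary thresholding (normally done with cv2.threshold):
# pixels brighter than `thresh` become max_val, everything else becomes 0.

def binary_threshold(image, thresh, max_val=255):
    return [[max_val if px > thresh else 0 for px in row] for row in image]

gray = [
    [ 10, 200,  90],
    [180,  40, 250],
]
print(binary_threshold(gray, 100))  # → [[0, 255, 0], [255, 0, 255]]
```

Thresholding like this is often used to isolate a bright object from the background before contour detection or classification.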
OpenCV for Python is a powerful library for computer vision tasks, offering a wide range of functionalities and algorithms. Its simplicity, extensive documentation, and large community make it a popular choice for computer vision applications in Python.

7.6 TENSORFLOW LITE-PYTHON

TensorFlow Lite for Python is a lightweight and optimized version of the TensorFlow library that allows developers to run machine learning models on resource-constrained devices such as mobile phones, embedded systems, and IoT devices. Here's an overview of TensorFlow Lite for Python:

1. Installation: To get started with TensorFlow Lite, you need to install the TensorFlow library, which includes the TensorFlow Lite Python package. You can install it using pip, the Python package manager, by running the command pip install tensorflow. This installs both TensorFlow and TensorFlow Lite.

2. Model Conversion: TensorFlow Lite supports model conversion from TensorFlow models to a format suitable for deployment on resource-constrained devices. You can convert your TensorFlow model to a TensorFlow Lite model using the TensorFlow Lite Converter. The converter optimizes the model for size and performance, and it supports various optimizations like quantization and pruning.

3. Interpreter: Once you have a TensorFlow Lite model, you can use the TensorFlow Lite Interpreter to run inference on it. The interpreter provides an API that allows you to load the model, pre-process input data, and run predictions. It provides efficient execution and low latency, making it suitable for real-time applications.

4. Input and Output: TensorFlow Lite models typically take input as numerical arrays or tensors. You need to pre-process your input data to match the expected input format of the model. The interpreter returns predictions as numerical arrays or tensors, which you can further process or use for your application's output.
5. Supported Operations: TensorFlow Lite supports a subset of TensorFlow operations optimized for efficient execution on resource-constrained devices. However, not all operations available in TensorFlow are supported in TensorFlow Lite. You can check the TensorFlow Lite documentation to see the list of supported operations.

6. Integration: TensorFlow Lite for Python integrates well with other Python libraries and frameworks. You can use TensorFlow Lite models in your Python applications, whether they are standalone scripts, web applications, or embedded systems.

TensorFlow Lite for Python empowers developers to deploy and run machine learning models efficiently on devices with limited resources. Its lightweight nature and optimizations make it suitable for a wide range of applications, from mobile and embedded systems to IoT devices.

CHAPTER 8 PROJECT IMPLEMENTATION

In this chapter, the whole procedure of project design is presented with the help of suitable diagrams.

8.1 Algorithm of Waste Segregation Machine

1. Waste on the conveyor belt: The waste materials, which may include various types such as plastic, paper, glass, metal, or organic waste, are placed on the conveyor belt. The conveyor belt carries the waste materials towards the waste segregation area.

2. Image capture and transmission: A high-resolution webcam or camera is positioned strategically above the conveyor belt to capture clear images of the waste materials as they pass by. The captured images are then transmitted to the waste segregation machine's system for further processing.

3. Convolutional Neural Network (CNN) model: The system utilizes a pre-trained or custom-built CNN model for image recognition and classification. This model has been trained on a large dataset of waste material images to learn the distinguishing features of different waste types.

4. Image analysis and classification: The received images are fed into the CNN model for analysis.
The CNN model employs complex algorithms and pattern recognition techniques to identify the visual characteristics of the waste materials. It analyses the image and classifies it into one of several predefined waste categories, such as plastic, paper, glass, metal, or organic waste.

5. Displaying material information: Once the CNN model has classified the waste material, the waste segregation machine's interface displays the name or type of material on a screen. This information helps operators or workers to easily identify the type of waste material passing through the system.

6. Waste segregation mechanism: Based on the detected type of material, the waste segregation machine employs various mechanisms to physically separate the waste materials into different categories. These mechanisms can include robotic arms, pneumatic systems with air jets, or conveyor belt diverters. For example, if the waste is classified as plastic, the machine activates the corresponding mechanism to divert the plastic waste into a separate container designated for plastic recycling.

7. Continuous operation: The waste segregation machine operates continuously, capturing and analysing images of the waste materials on the conveyor belt. The CNN model performs real-time analysis and classification, allowing the machine to handle a high volume of waste materials efficiently.

By following this algorithm, the waste segregation machine can accurately identify and segregate different types of waste materials, contributing to effective recycling and waste management processes. The integration of advanced image recognition techniques and physical segregation mechanisms enhances the overall efficiency and accuracy of the waste segregation process.

CHAPTER 9 RESULTS

The waste segregation machine provides several significant results:

1.
Accurate waste classification: By employing a trained machine learning model, such as a Convolutional Neural Network (CNN), the waste segregation machine achieves accurate and reliable waste classification. The model analyzes the images captured by the webcam and accurately identifies the type of waste material present in each image. This allows for precise segregation based on the detected waste categories.

2. Efficient waste segregation: The machine utilizes the classification results to effectively segregate the waste materials. Based on the identified waste category, the machine activates the appropriate mechanism, such as robotic arms, air jets, or conveyor belt diverters, to separate the waste materials into different containers or compartments. This automated process ensures efficient and precise segregation without the need for manual sorting.

3. Metal waste detection and segregation: The waste segregation machine incorporates a metal sensor to detect metal waste items. When the sensor detects the presence of metal in a waste item, it triggers the segregation process specifically for metal waste. The machine utilizes the Arduino Uno microcontroller to activate the necessary mechanisms, ensuring the proper separation of metal waste from other waste materials.

4. Enhanced waste management practices: The waste segregation machine significantly improves waste management practices. It eliminates the reliance on manual labor for waste sorting, reducing human error and increasing efficiency. By accurately segregating different waste materials, such as plastics, paper, glass, metal, and organic waste, the machine facilitates effective recycling and disposal processes. This contributes to environmental sustainability and helps minimize the negative impact of improper waste management.

5. Time and cost savings: The automated waste segregation process saves considerable time and reduces labor costs.
By leveraging the capabilities of the CNN model and image processing techniques, the machine rapidly analyzes and classifies waste materials, enabling swift and accurate segregation. This efficiency translates into cost savings and increased productivity for waste management operations.

6. Potential for scalability: The waste segregation machine can be designed and implemented to accommodate various scales of operation. It can be customized to handle different volumes of waste and adapted to specific waste management requirements. This scalability makes the machine suitable for various environments, such as recycling facilities, industrial sites, and municipal waste management systems.

Overall, the waste segregation machine delivers accurate waste classification, efficient segregation, enhanced waste management practices, time and cost savings, and potential scalability. By automating the waste segregation process and leveraging advanced technologies, it significantly contributes to improved waste management practices, environmental sustainability, and a cleaner future.

9.1 Model Performance Table

EfficientDet Lite is a family of lightweight object detection models that strikes a balance between model size, accuracy, and inference speed. Versions 0 to 4 offer different capabilities to cater to specific needs. With its reduced memory footprint, EfficientDet Lite excels in resource-constrained environments. Its accurate object detection, evaluated through average precision (AP), and fast inference, measured by latency, make it a practical solution for a wide range of applications. EfficientDet Lite delivers efficiency without compromising on detection performance.
Here is a model performance table for EfficientDet Lite versions 0 to 4, including the model size, average precision (AP), and latency.

Table 5.1: Model size, average precision, and latency of EfficientDet Lite models

Model                  Size       AP (COCO)   Latency
EfficientDet Lite 0    4.4 MB     25.69%      146 ms
EfficientDet Lite 1    5.8 MB     30.55%      259 ms
EfficientDet Lite 2    7.2 MB     33.97%      396 ms
EfficientDet Lite 3    11.4 MB    37.70%      716 ms
EfficientDet Lite 4    19.9 MB    41.96%      1886 ms

Table 5.1 shows the size, average precision, and latency of the different models considered.

The model size refers to the storage space required to store the model parameters and configuration. It is measured in megabytes (MB) and indicates the memory footprint of the model.

The average precision (AP) is a commonly used metric in object detection tasks. It measures the accuracy of the model in terms of precision and recall. The higher the AP value, the better the model's performance. The AP scores listed in the table are based on the evaluation of the models on the COCO (Common Objects in Context) dataset.

The latency indicates the inference speed of the model, measured in milliseconds (ms). It represents the time taken by the model to process an input image and generate the object detection results. Lower latency values indicate faster inference, which is crucial for real-time applications.

EfficientDet Lite models are designed to provide a trade-off between model size, accuracy, and inference speed, making them suitable for deployment on resource-constrained devices such as mobile phones, embedded systems, or edge devices. The table provides an overview of the performance characteristics of EfficientDet Lite versions 0 to 4, allowing users to choose the model that best fits their specific requirements based on size, accuracy, and inference speed considerations.

9.2 Sample Results

Fig 9.2: Metal waste object being segregated.
Fig 9.3:
Fig 9.4: Plastic bottle waste object classified.
Fig 9.5: Leaf Dona waste object classified.
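The size/accuracy/latency trade-off discussed above can be expressed as a simple selection rule: pick the most accurate variant that fits the latency (and optionally storage) budget of the target device. The sketch below uses the published COCO benchmark figures for the EfficientDet Lite family; the helper function `pick_model` and the example budgets are illustrative and not part of the report's implementation.

```python
# Published benchmark figures for EfficientDet Lite variants:
# (model name, size in MB, COCO AP in %, latency in ms).
MODELS = [
    ("EfficientDet Lite 0", 4.4, 25.69, 146),
    ("EfficientDet Lite 1", 5.8, 30.55, 259),
    ("EfficientDet Lite 2", 7.2, 33.97, 396),
    ("EfficientDet Lite 3", 11.4, 37.70, 716),
    ("EfficientDet Lite 4", 19.9, 41.96, 1886),
]

def pick_model(latency_budget_ms, size_budget_mb=float("inf")):
    """Return the name of the most accurate model that fits both budgets,
    or None if no variant is feasible."""
    feasible = [m for m in MODELS
                if m[3] <= latency_budget_ms and m[1] <= size_budget_mb]
    return max(feasible, key=lambda m: m[2])[0] if feasible else None

# Example: a conveyor passing ~2 items per second leaves ~500 ms per frame.
print(pick_model(500))        # EfficientDet Lite 2
print(pick_model(500, 6.0))   # EfficientDet Lite 1
```

With a 500 ms frame budget the helper selects EfficientDet Lite 2, the most accurate variant under that latency; tightening the storage budget to 6 MB drops the choice to EfficientDet Lite 1.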
CHAPTER 10
FUTURE PROSPECTS

Further advancements in this waste segregation machine are not only possible but also highly desirable. These advancements can significantly enhance the machine's performance, efficiency, and capabilities, leading to more effective waste management practices. Here are some potential areas for further development:

1. With the help of advanced computers: The waste segregation machine can be enhanced by incorporating advanced computer systems. This can involve utilizing powerful processors, increased memory capacity, and advanced algorithms to improve the overall performance and functionality of the machine. By leveraging advanced computing capabilities, the functioning of the machine can be made even smoother and simpler, resulting in more accurate waste segregation.

2. Processing speed reduction: As technology progresses, advancements in hardware and software can significantly reduce the processing time of the waste segregation machine. This means that the machine can analyze and classify waste materials at a faster rate, enabling higher throughput and efficiency in waste segregation operations.

3. Increased efficiency: Through continuous research and development, the efficiency of the waste segregation machine can be further improved. This can involve optimizing the algorithms used in the image analysis and classification process, fine-tuning the machine's hardware components, or implementing intelligent decision-making mechanisms to enhance the accuracy and speed of waste segregation. Increased efficiency translates to a higher level of waste sorting accuracy and a more productive waste management system.

4. Increased types of waste for segregation: As the waste segregation machine evolves, it can be expanded to handle a wider variety of waste types. By training the CNN model
on a larger dataset of waste material images, the machine can learn to recognize and classify additional types of waste, beyond the basic categories of plastic, paper, glass, metal, and organic waste. This expansion in waste segregation capabilities allows for more comprehensive recycling and waste management practices, contributing to a more sustainable environment.

Overall, the continuous advancement of the waste segregation machine can lead to improved performance, increased efficiency, and the ability to handle a broader range of waste materials, ultimately promoting effective waste management and environmental sustainability.
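The overall control cycle described in Chapters 8 and 9 — classify an item, display the category, and route it to a bin, with the metal sensor overriding the vision result — can be sketched as follows. This is a minimal illustration, not the machine's actual code: the CNN and camera are stubbed out, and the category names, bin numbers, and function names are assumptions chosen for the example.

```python
# Illustrative mapping from predicted waste category to a physical bin.
BINS = {"plastic": 1, "paper": 2, "glass": 3, "metal": 4, "organic": 5}
REJECT_BIN = 0  # unrecognized items fall through to a manual-sorting bin

def classify(frame):
    """Stand-in for the CNN: the real system runs an EfficientDet Lite
    model on the webcam frame and returns the top predicted label."""
    return frame.get("label", "unknown")

def process_item(frame, metal_sensor_triggered=False):
    """One pass of the capture-classify-display-actuate cycle."""
    # The metal sensor takes priority over the vision result, mirroring
    # the Arduino-driven metal detection described in the results.
    label = "metal" if metal_sensor_triggered else classify(frame)
    bin_id = BINS.get(label, REJECT_BIN)
    print(f"Detected: {label} -> bin {bin_id}")  # operator display
    return bin_id  # the real machine would drive the diverter here

process_item({"label": "plastic"})             # Detected: plastic -> bin 1
process_item({}, metal_sensor_triggered=True)  # Detected: metal -> bin 4
```

In the physical machine, the `return bin_id` step would instead signal the Arduino Uno (for example over a serial link) to fire the corresponding diverter; keeping that mapping in one table makes adding new waste categories, as envisioned in point 4 above, a matter of extending `BINS` and retraining the model.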
