
AI-BASED WASTE RECOGNITION AND DISPOSAL SYSTEM USING IMAGE PROCESSING AND GEO-TAGGING

Project Report

AL3451 : MACHINE LEARNING

(Project Driven Course)

Submitted by

AKHILESH J (311523243007)

DAKSHINESHWAR VEL A (311523243014)

PRAVEEN R (311523243046)

DEPARTMENT OF

ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

MEENAKSHI SUNDARARAJAN ENGINEERING COLLEGE
(An Autonomous Institution)
KODAMBAKKAM, CHENNAI-24
ANNA UNIVERSITY: CHENNAI 600 025
MAY 2025
MEENAKSHI SUNDARARAJAN ENGINEERING COLLEGE
(An Autonomous Institution)

ANNA UNIVERSITY: CHENNAI 600 025


BONAFIDE CERTIFICATE

Certified that this project report “AI-BASED WASTE RECOGNITION AND DISPOSAL SYSTEM USING IMAGE PROCESSING AND GEO-TAGGING” is the bonafide work of “AKHILESH J (311523243007), DAKSHINESHWAR VEL A (311523243014), PRAVEEN R (311523243046)”, who carried out the project work under my supervision.

SIGNATURE SIGNATURE

Mrs. N. Mathangi, M.E., (Ph.D.)                    Mrs. V. Sathya, M.E.

HEAD OF THE DEPARTMENT COURSE INSTRUCTOR

Department Name Department Name

Meenakshi Sundararajan Engineering Meenakshi Sundararajan Engineering

College, College,

No.363, Arcot Road, Kodambakkam, No.363, Arcot Road, Kodambakkam,

Chennai -600 024. Chennai -600 024.

Submitted for the end semester project review of Machine Learning in the
Department of ________________________ held on .

INTERNAL EXAMINER EXTERNAL EXAMINER


ACKNOWLEDGEMENT

We sincerely thank the management of Meenakshi Sundararajan Engineering College (An Autonomous Institution) for supporting and motivating us during the course of study.

We wish to express our deep sense of gratitude to Mr. N. Sreekanth, M.S., Secretary, and Dr. S. V. Saravanan, B.E., M.E., Ph.D., Principal of Meenakshi Sundararajan Engineering College, Chennai, for providing us an excellent academic environment and facilities for pursuing our B.Tech programme. We extend our heartfelt gratitude and sincere thanks to Mrs. N. Mathangi, M.E., (Ph.D.), HOD/AI-DS.

We owe our wholehearted thanks and appreciation to the entire staff of the AI&DS department for their assistance during the course of our study. We hope to build upon the experience and knowledge gained in this course to make a valuable contribution to the industry in the future.

We would also like to thank our families and friends who motivated us during the course of the project work.

ABSTRACT

This project presents EnviroMate, an AI-driven waste management application that leverages real-time image recognition and location-based services to promote sustainable environmental practices. The system uses deep learning models for intelligent waste classification through live camera input, identifying categories such as paper, plastic, e-waste, glass, and organic matter with high precision. Upon detection, the application dynamically locates and recommends the nearest recycling or disposal facilities using integrated geo-tagging and mapping APIs. In addition to core functionalities, the system offers educational insights on waste impact and eco-friendly practices to raise environmental awareness. Featuring a user-centric interface and adaptive design, EnviroMate empowers individuals across homes, schools, and workplaces to engage in responsible recycling. By fusing computer vision, geolocation services, and environmental data, the project delivers an end-to-end sustainable waste management solution for smart cities and green communities.

Keywords

Waste Classification, Deep Learning, CNN, Computer Vision, Recycling Locator, Sustainability, Smart Environment, Environmental Awareness, Image Recognition, Geo-Tagging

TABLE OF CONTENTS

CHAPTER NO.  TITLE                                                  PAGE NO.

             ABSTRACT                                               i
             LIST OF FIGURES                                        iv
1            INTRODUCTION                                           1
             1.1 BACKGROUND                                         1
             1.2 PROBLEM STATEMENT                                  2
             1.3 OBJECTIVES                                         3
             1.4 SCOPE OF THE PROJECT                               4
             1.5 SIGNIFICANCE OF STUDY                              5
2            LITERATURE REVIEW                                      6
             2.1 REVIEW OF RELATED RESEARCH                         6
             2.2 COMPARATIVE ANALYSIS OF EXISTING WASTE             7
                 MANAGEMENT SYSTEMS
             2.3 CONTRIBUTIONS AND DISTINCTIONS OF ENVIROMATE       8
3            SYSTEM DESIGN                                          9
             3.1 EXISTING SYSTEM ARCHITECTURE                       9
             3.2 PROPOSED MODEL ARCHITECTURE                        9
             3.3 DEEP LEARNING MODEL                                11
             3.4 DATA ANALYSIS AND VISUALIZATION                    12
4            SYSTEM REQUIREMENTS                                    13
             4.1 SOFTWARE REQUIREMENTS                              13
             4.2 HARDWARE REQUIREMENTS                              14
5            METHODOLOGY                                            15
             5.1 DATA COLLECTION AND INGESTION                      15
             5.2 DATA PREPROCESSING AND CLEANING                    15
             5.3 FEATURE ENGINEERING                                16
             5.4 MODEL TRAINING AND EVALUATION                      17
             5.5 PERFORMANCE OPTIMIZATION STRATEGIES                18
             5.6 MODEL DEPLOYMENT AND INTEGRATION                   18
             5.7 USER FEEDBACK AND SYSTEM ADAPTATION                19
6            RESULTS AND DISCUSSION                                 20
             6.1 OUTPUT SAMPLES FROM VARIOUS FILE TYPES             20
             6.2 STRENGTHS OF THE OCR MODEL                         21
7            CHALLENGES AND LIMITATIONS                             23
8            APPENDICES                                             27
             8.1 SOURCE CODE                                        27
             8.2 SCREENSHOTS                                        31
9            CONCLUSION AND FUTURE ENHANCEMENTS                     34
10           REFERENCES                                             36
LIST OF FIGURES

FIGURE NO.  FIGURE NAME                                PAGE NO.

8.1         Home page interface (frontend UI)          31
8.2         Object detecting interface                 32
8.3         Classification of the detected objects     33
8.4         Locating the nearby recycle shops          33
CHAPTER 1
INTRODUCTION

1.1 BACKGROUND

The rapid growth of urbanization and consumerism has led to an unprecedented increase in
waste generation, posing serious environmental and public health challenges worldwide.
Traditional waste management systems, often reliant on manual sorting and limited public
awareness, struggle to keep pace with the growing complexity and volume of waste streams.
Moreover, many individuals lack the knowledge or motivation to dispose of waste responsibly,
resulting in improper recycling practices and increased environmental degradation.
One of the key barriers to effective waste management is the general population’s limited
understanding of material classification and disposal methods. Conventional recycling programs
often require users to sort waste accurately—yet without the proper tools or guidance, this process
becomes error-prone and inefficient. Furthermore, the absence of real-time information on local
recycling options discourages user participation in sustainable practices.
To bridge this gap, the proposed project introduces EnviroMate, an intelligent mobile application
that empowers users to participate in environmentally responsible behavior through a seamless,
AI-driven experience. By utilizing real-time image recognition, EnviroMate can instantly identify
waste types—ranging from paper and plastics to electronic and organic materials—via a
smartphone camera. The system then provides immediate feedback and navigational support by
directing users to the nearest appropriate recycling or disposal facility, fostering timely and correct
waste handling.
In addition to its core classification and navigation features, EnviroMate promotes environmental
awareness by delivering educational insights on the long-term impact of various waste materials.
Its user-friendly design ensures accessibility across age groups and technical proficiencies, making
it an ideal tool for households, schools, and workplaces. Through the integration of machine
learning, computer vision, and geospatial services, this solution paves the way for smarter waste
management and a more eco-conscious society.

1.2 PROBLEM STATEMENT

Despite growing awareness around environmental sustainability, improper waste disposal


remains a widespread issue, driven by both a lack of public knowledge and the absence of
accessible tools for responsible recycling. Conventional waste management systems place the
burden of accurate waste segregation on individuals without offering real-time guidance or support.
Most users are unaware of how to differentiate between types of waste—such as recyclables, e-
waste, and organics—and often lack information about nearby recycling or drop-off centers. This
knowledge gap contributes to contamination in recycling streams, increased landfill usage, and
environmental degradation.
Moreover, in the era of smart cities and mobile technology, the waste management domain still
lags behind in leveraging intelligent systems that promote user engagement and sustainable habits.
The challenge lies in designing a solution that not only automates waste identification but also
educates users and encourages environmentally responsible behavior through an intuitive digital
experience.
To address this critical gap, the proposed system must incorporate the following capabilities:
• AI-Based Waste Identification: The system should use advanced image recognition
techniques to classify waste in real time via smartphone cameras, ensuring accurate
material categorization across a wide range of waste types including plastic, paper, glass,
metal, and electronic waste.
• Geo-Enabled Recycling Navigation: It must integrate geolocation services to recommend
the nearest appropriate recycling or disposal facilities, empowering users to take immediate
action based on their waste type.
• Eco-Educational Feedback: Beyond identification and navigation, the system should
provide informative insights about the environmental impact of different waste materials,
encouraging behavior change and sustainable living.
• User-Centric Design: The platform should prioritize accessibility, ease of use, and
minimal interaction steps, making it suitable for individuals of all ages and technical
backgrounds, including students, families, and working professionals.

1.3 OBJECTIVES

This project aims to develop an intelligent, AI-powered waste management assistant that empowers individuals to identify, sort, and dispose of waste responsibly through real-time image recognition and location-aware services. The following objectives define the core features driving this sustainable innovation:

• AI-Driven Waste Classification:


To implement a deep learning-based image recognition system capable of accurately
detecting and categorizing various waste types—including paper, plastic, glass, metal,
e-waste, and organics—using a smartphone camera in real time.

• Smart Recycling Facility Locator:


To integrate geo-tagging and mapping APIs that dynamically identify and suggest
the nearest appropriate recycling or disposal facilities based on the detected waste
category and user location.

• User-Friendly, Accessible Interface:


To design an intuitive and inclusive mobile interface that supports users of all ages
and backgrounds—making responsible waste disposal easy and accessible in
everyday environments such as homes, schools, offices, and public spaces.

• Real-Time Feedback and Recommendations:


To deliver instant visual and textual feedback after waste classification, including
color-coded categories, disposal instructions, and navigational support, enabling
users to take immediate and informed actions.

• Promoting Green Living Through Technology:


To showcase how AI, computer vision, and mobile technology can be combined to
drive social impact, reduce environmental footprint, and build a scalable framework
for smart, eco-friendly urban living.

1.4 SCOPE OF THE PROJECT

The scope of this project focuses on the design and development of EnviroMate, an AI-
powered mobile application aimed at promoting responsible waste management through real-time
waste recognition and intelligent recycling guidance. This solution targets individual users—
whether at home, school, or in public settings—who seek a simple, reliable way to classify waste
and locate nearby disposal or recycling facilities.
EnviroMate leverages deep learning models for accurate image-based classification of common
waste types, including paper, plastic, metal, glass, e-waste, and organic materials. Users interact
with the system through their smartphone camera, enabling intuitive and touch-free identification
of discarded items. Once classified, the system uses geolocation services and mapping APIs to
direct users to the closest appropriate recycling center or drop-off point.
Beyond core identification and navigation functions, the application also includes an educational
component that provides users with brief but impactful eco-insights. These insights explain the
long-term environmental effects of various waste types and suggest best practices for sustainable
disposal, encouraging environmentally conscious habits.
The project is designed to operate across major mobile platforms (Android and iOS), ensuring a
seamless and responsive user experience. The application will include a clean, user-friendly
interface that supports quick interactions and visual feedback.

However, the current scope is limited to:


• Recognition of commonly discarded waste items based on image input.
• English-language eco-insights and UI elements.
• Static database-driven suggestions for recycling centers (dynamic integration with
government or third-party APIs may be scoped for future enhancement).
• Waste classification accuracy based on trained categories—complex composite materials
or hazardous waste may not be fully supported at this stage.

This project serves as a practical and scalable entry point into smart, tech-enabled environmental
solutions, laying the groundwork for future expansions involving advanced sorting, broader
material databases, multilingual support, and integration with municipal waste systems.

1.5 SIGNIFICANCE OF STUDY

This study holds substantial significance in the context of environmental sustainability and the
practical application of artificial intelligence for public benefit. Improper waste segregation and
disposal continue to pose critical challenges to ecosystems and urban infrastructure worldwide.
While numerous awareness campaigns and regulations exist, the lack of real-time, user-friendly
tools remains a barrier to individual participation in responsible waste management.

EnviroMate addresses this gap by demonstrating how emerging technologies—such as computer


vision, geolocation services, and intelligent content delivery—can be integrated into an everyday
mobile application to foster environmentally responsible behavior. By enabling users to instantly
identify waste types and locate appropriate recycling facilities through their smartphones, the
system transforms passive awareness into actionable sustainability practices.

The project is especially impactful for educational institutions, workplaces, and households
seeking to cultivate green habits. Its user-centric design ensures accessibility across age groups and
technical skill levels, making it a scalable tool for widespread adoption. Furthermore, the inclusion
of eco-insights provides users with contextual knowledge about the long-term environmental
effects of different waste materials, bridging the gap between awareness and behavior change.

From a technological standpoint, the study contributes to the evolving field of AI for environmental
applications. It showcases the use of deep learning in real-time object classification and reinforces
the importance of intuitive design in encouraging public engagement with sustainability efforts.
The system serves as a model for future smart-city initiatives, where AI-powered tools assist
citizens in making eco-conscious decisions.

By aligning with the principles of the United Nations Sustainable Development Goal 11
(Sustainable Cities and Communities) and Goal 13 (Climate Action), this project exemplifies
how digital innovation can drive positive environmental impact at both the individual and
community levels.

CHAPTER 2
LITERATURE REVIEW

2.1 REVIEW OF RELATED RESEARCH

The integration of artificial intelligence in environmental sustainability—particularly in waste


management—has garnered growing attention in recent years. With the global increase in waste
generation and the need for sustainable disposal methods, researchers and technologists are
exploring smart solutions that combine computer vision, geolocation, and user education to
promote responsible behavior. Several notable approaches and technologies relevant to
EnviroMate include:

• Convolutional Neural Networks (CNNs): CNNs are widely used in image classification
tasks, including waste type recognition. Models like MobileNet, ResNet, and Inception
have been trained on waste datasets to categorize items such as plastic, glass, paper, and
organic materials. These models offer high accuracy in identifying visual patterns in real-
world waste images.

• YOLO (You Only Look Once): An object detection model that enables real-time
identification of multiple objects within a single frame. YOLO has been applied in smart
bin and waste-sorting applications to detect and classify different types of trash quickly and
efficiently.

• Geolocation and Mapping APIs: Technologies such as Google Maps API and
OpenStreetMap have been used in apps to help users locate nearby recycling centers or
disposal facilities. These tools enhance user convenience and encourage proper waste
segregation by minimizing the effort involved in finding the right location.

• Mobile-based Environmental Apps: Existing apps like “RecycleNation,” “iRecycle,” and


“Recyclica” provide users with information about recyclable materials and collection
centers.
Despite these advancements, most existing systems either lack real-time recognition,
require manual categorization, or do not personalize educational content. This gap
highlights the need for a more intelligent, user-friendly platform—such as
EnviroMate—that combines image-based recognition, contextual awareness, and
location intelligence to simplify and enhance the waste management experience.

2.2 COMPARATIVE ANALYSIS OF EXISTING WASTE MANAGEMENT SYSTEMS

The current landscape of digital waste management tools includes a variety of applications and
platforms, each attempting to address environmental challenges through different technical and
user-focused strategies. While these systems provide varying degrees of support for proper waste
segregation and disposal, they differ significantly in terms of functionality, accessibility, and
technological integration. A comparative analysis of notable existing solutions is presented below:

1. Manual Input-Based Applications (e.g., iRecycle, RecycleNation)

These platforms allow users to search for items manually and receive recycling instructions or the
location of nearby facilities.
• Strengths: Informative and location-aware; useful for users already knowledgeable about
waste types.
• Limitations: Depend heavily on user input; no image recognition; lack of real-time
automation.
2. Smart Bin Systems with Sensors and AI (e.g., Bin-e, CleanRobotics)

These are physical waste bins integrated with AI and sensor-based systems for automated sorting
and disposal.
• Strengths: High precision in waste identification; suitable for public infrastructure and
high-traffic areas.
• Limitations: High implementation cost; limited accessibility for individual users; not
mobile or widely scalable.
3. Mobile Applications with Basic AI Features (e.g., Recyclica, TrashOut)
These apps incorporate limited AI features, sometimes using barcode scanning or database
matching to categorize waste.
• Strengths: More interactive than manual systems; offer educational content.
• Limitations: Lack image-based recognition; limited to structured input; do not personalize
content delivery.
4. Educational Platforms Promoting Sustainability
Some platforms focus on spreading environmental awareness through games, videos, or content
recommendations.
• Strengths: Help build eco-conscious behavior, especially among youth.

2.3 Contributions and Distinctions of EnviroMate in Waste Management Systems:
EnviroMate introduces a transformative approach to waste identification and recycling
facilitation by bridging the gap between environmental responsibility and smart technology. Unlike
conventional tools that rely on manual entry or offer limited automation, EnviroMate distinguishes
itself through its unique blend of artificial intelligence, user engagement, and accessibility.
Key Contributions of EnviroMate:
1. AI-Powered Visual Waste Classification:
o Leverages advanced computer vision models to recognize and categorize waste
(e.g., paper, plastic, metal, organic, e-waste) in real time using a smartphone
camera.
o Enhances accuracy and speed, removing the need for manual sorting or item search.
2. Integrated Smart Recycling Hub Locator:
o Connects users instantly to the nearest relevant recycling or waste disposal center
using GPS and mapping services.
o Promotes timely and proper disposal, reducing the likelihood of environmental
harm.
3. Eco-Education and Behavioral Awareness:
o Provides personalized insights about the environmental impact of each waste type
and suggests sustainable alternatives.
o Encourages behavioral change through learning, not just action.
4. User-Centric, Mobile-First Design:
o Designed for accessibility across age groups and educational levels, with an
intuitive interface suitable for individuals, families, and institutions.
o Operates across devices (smartphones, tablets), increasing its reach and
adaptability.
5. Support for a Greener Ecosystem:
o Fosters eco-responsibility through intelligent feedback loops, visual cues, and
location-based nudges.
o Contributes to broader sustainability goals by empowering users to take immediate,
informed action.

CHAPTER 3

SYSTEM DESIGN

3.1 Existing System Architecture


Existing waste management systems often face several challenges that hinder efficient
recycling and waste disposal. Current platforms generally rely on basic methods for waste
classification and disposal recommendations, but they encounter the following limitations:

Manual Sorting Methods: Most systems still rely on manual sorting by waste collection services
or individuals, which can be inaccurate and time-consuming, especially for non-experts.
Lack of Real-Time Waste Identification: Many waste management systems do not provide real-
time feedback on how to sort different materials, which leads to improper recycling practices.
Limited Educational Support: Existing systems offer minimal educational content about
recycling practices, failing to raise awareness about environmental impacts and best practices for
waste disposal.
Non-Interactive User Experience: Most waste management systems have static interfaces that
do not adapt to user needs, leading to a lack of engagement and understanding.

Due to these challenges, there is a clear need for a more intelligent, AI-powered system that offers
real-time waste identification, location-based disposal recommendations, and dynamic educational
support.

3.2 Proposed Model Architecture:

The EnviroMate system introduces an AI-driven approach to waste management,


combining real-time waste identification, smart location services, and interactive educational
insights to guide users through responsible recycling. The proposed architecture consists of the
following core components:

Data Collection Module:


Input: Collects images of waste items from the user via camera or app interface.
Data Sources: Aggregates data on common waste materials, including plastics, paper, metals, e-
waste, and organic matter.
Data Preprocessing Module:
Image Processing: Uses AI image recognition to clean, normalize, and process images of waste.
Feature Extraction: Identifies features of the waste item, such as material type and size, to aid in
classification.
Categorization: Classifies waste into categories such as recyclables, compostable, or non-
recyclable.

AI-Powered Waste Identification (Deep Learning Model):


Deep Learning Model (Convolutional Neural Network): A CNN-based deep learning model is
trained on a large dataset of labeled waste images to classify waste into different categories. This
model excels at recognizing objects in images and identifying patterns related to material type,
size, and shape.
Model Architecture: The CNN architecture is designed with multiple convolutional layers followed
by pooling layers, leading to a fully connected layer for classification. The model includes:
Input Layer: Takes in raw images of waste items captured via the camera.
Convolutional Layers: Extracts low-level features like edges, textures, and shapes from the images.
Pooling Layers: Reduces the spatial dimensions of the image to extract higher-level features.
Fully Connected Layers: Combines the learned features to make final predictions about the waste
category.
Softmax Layer: Outputs probabilities for each class (e.g., recyclable, compostable, or non-
recyclable).
Model Training: The model is trained on a large dataset of waste materials (images) with labels for
each category (e.g., plastic, glass, paper). The training uses techniques like backpropagation and
gradient descent to minimize classification errors.
Model Tuning: Hyperparameters, such as the number of layers, learning rate, and batch size, are
fine-tuned using cross-validation and grid search to ensure optimal model performance.
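The layer sequence described above can be illustrated with a toy NumPy forward pass (convolution, ReLU, pooling, a fully connected layer, and softmax). This is a sketch with random weights and an assumed three-class output, not the trained EnviroMate model; a production implementation would use a framework such as TensorFlow or PyTorch.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution: extracts low-level features (edges, textures)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max pooling: reduces spatial dimensions, keeping strong activations."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Outputs one probability per class (recyclable / compostable / non-recyclable)."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
image = rng.random((28, 28))                 # stand-in for a camera frame
features = max_pool(np.maximum(conv2d(image, rng.random((3, 3))), 0))  # conv + ReLU + pool
weights = rng.random((3, features.size))     # toy fully connected layer, 3 classes
probs = softmax(weights @ features.ravel())  # softmax layer: class probabilities
```

The real model stacks several such convolution/pooling stages and learns the kernel and weight values by backpropagation, as described above.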

Recycling Hub Locator Module:


Geolocation: Uses GPS data to locate nearby recycling centers, composting facilities, or drop-off
points based on user location.
Dynamic Recommendations: Provides users with real-time information about the most convenient
recycling options.
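As a minimal sketch of the locator logic, the great-circle (haversine) distance can rank candidate facilities by proximity to the user. The facility names, coordinates, and accepted materials below are invented for illustration; the deployed system would fetch them from a mapping API.

```python
import math

# Hypothetical facility records; the deployed app would obtain these from
# a mapping API (Google Maps / OpenStreetMap) as described above.
FACILITIES = [
    {"name": "Kodambakkam Recycling Hub", "lat": 13.0521, "lon": 80.2255,
     "accepts": {"plastic", "paper", "glass"}},
    {"name": "T. Nagar E-Waste Drop-off", "lat": 13.0418, "lon": 80.2341,
     "accepts": {"e-waste"}},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearest_facility(lat, lon, waste_type):
    """Closest facility that accepts the classified waste type, or None."""
    options = [f for f in FACILITIES if waste_type in f["accepts"]]
    if not options:
        return None
    return min(options, key=lambda f: haversine_km(lat, lon, f["lat"], f["lon"]))
```

Filtering by the detected waste category before ranking by distance mirrors the module's behaviour: the recommendation depends on both what was classified and where the user is.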

Educational Insights & Feedback Layer:
User Guidance: Provides feedback about waste sorting, along with educational content such as the
environmental impact of incorrect disposal and tips for sustainable practices.
Visual & Textual Information: Integrates text-based information and images to explain recycling
practices.

User Interface:
Mobile and Web App: A user-friendly interface that works across devices to provide seamless
interaction with the system.
Real-Time Interaction: Allows users to input images of waste, receive feedback, and locate the
nearest recycling facilities.

3.3 Deep Learning Model:

The Deep Learning Model used in EnviroMate primarily revolves around Convolutional
Neural Networks (CNN), which are highly effective for image-based classification tasks. Given
the visual nature of the input (images of waste), CNNs are particularly well-suited for accurately
identifying and categorizing different types of waste materials.

Model Description:

Convolutional Neural Network (CNN): The core deep learning model used for image recognition and classification of waste items. It uses multiple convolutional layers to extract features and classify the images into predefined categories.
Transfer Learning (e.g., ResNet, VGG): Pre-trained CNN architectures such as ResNet or VGG, originally trained on large datasets like ImageNet, are adapted to the waste classification task by fine-tuning them on waste-specific datasets.
Data Augmentation: Techniques such as rotation, flipping, and scaling of images expand the training dataset, improving model robustness and accuracy.
Model Training and Optimization: The CNN model is trained on a large dataset of labeled waste images. Optimization techniques such as the Adam optimizer and dropout layers are used to reduce overfitting and improve model performance.
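The augmentation step mentioned above (rotation and flipping) can be sketched in plain NumPy; in practice a framework utility (e.g., torchvision transforms or Keras preprocessing layers) would be used, and the array below is a stand-in rather than a real waste photograph.

```python
import numpy as np

def augment(image, rng):
    """Randomly flip and rotate an image by multiples of 90 degrees,
    expanding the effective training set without new photographs."""
    if rng.random() < 0.5:
        image = np.fliplr(image)                        # horizontal flip
    return np.rot90(image, k=int(rng.integers(0, 4)))   # 0-3 quarter turns

rng = np.random.default_rng(42)
original = np.arange(16).reshape(4, 4)        # stand-in for a waste image
augmented = [augment(original, rng) for _ in range(8)]
```

Each augmented sample contains the same pixel values rearranged, so the label is preserved while the model sees new orientations.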
The CNN-based deep learning model was selected for the final implementation due to its high accuracy in visual recognition tasks and its ability to generalize well to various types of waste materials.

3.4 Data Analysis and Visualization:

Effective data analysis is vital for building a reliable waste management system. The
following steps were taken during data analysis:

Waste Classification Accuracy: Metrics such as accuracy, precision, recall, and F1-score were used to evaluate the performance of the deep learning model.
Confusion Matrix: A confusion matrix was generated to assess the classification accuracy of waste categories (e.g., recyclables vs. non-recyclables).
Feature Importance: Analysis was conducted to understand which waste features (e.g., size, shape, color) most influenced classification accuracy.
Recycling Hub Proximity: The distance from user locations to nearby recycling hubs was visualized using geospatial data analysis tools.
User Interaction Trends: User engagement was analyzed to identify patterns in how users interact with the app and learn about recycling.

These analyses were crucial in optimizing the system for improved waste sorting accuracy, user
engagement, and efficient waste disposal recommendations.
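The evaluation metrics listed above can be computed directly from model predictions. The labels below are invented toy values (1 = recyclable, 0 = non-recyclable) purely to show how the confusion matrix, accuracy, precision, recall, and F1-score relate; the real evaluation used the held-out test set.

```python
import numpy as np

# Invented toy labels: 1 = recyclable, 0 = non-recyclable.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tp = int(np.sum((y_true == 1) & (y_pred == 1)))  # true positives
fp = int(np.sum((y_true == 0) & (y_pred == 1)))  # false positives
fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # false negatives
tn = int(np.sum((y_true == 0) & (y_pred == 0)))  # true negatives

confusion = [[tn, fp], [fn, tp]]                 # rows: actual, cols: predicted
accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
```

Libraries such as scikit-learn provide the same metrics ready-made; spelling them out shows exactly what each score summarizes.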

CHAPTER 4

SYSTEM REQUIREMENTS

This project is developed using Python and its standard libraries, with a GUI built in
Tkinter. It integrates speech recognition, gesture detection (via image processing), and
external APIs for enhanced language learning experiences.

4.1 SOFTWARE REQUIREMENTS:

4.1.1 Development Tools and Languages


• Python 3.9+ – Primary programming language for backend logic and GUI development.
• Tkinter – Built-in GUI package for creating the application interface.
• pip – Python package installer for managing dependencies.
4.1.2 Required Python Libraries
• SpeechRecognition – For capturing and converting voice to text.
• OpenCV – For gesture detection and webcam access.
• Pillow (PIL) – For image rendering in the GUI.
• requests – To call external APIs like Unsplash and OpenAI.
• transformers (HuggingFace) – For NLP tasks (grammar word detection, explanations).
• Pyttsx3 – For text-to-speech support.
• NumPy – For basic array processing during image and gesture handling.
4.1.3 External APIs
• Web Speech API (if integrated with browser) – For browser-based voice input (optional).
• Unsplash API – To fetch related images for vocabulary words.
• OpenAI API (ChatGPT) – For grammar explanations and definitions (optional fallback).
4.1.4 Platform Compatibility
• Operating Systems:
o Windows 10/11
o Linux (Ubuntu 20.04+)
o macOS Monterey+
• Dependencies:
o Python and required packages must be pre-installed.
o Internet connection required for API-based features.

4.2 HARDWARE REQUIREMENTS


Since the system is lightweight and runs locally, minimal hardware resources are needed.

4.2.1 Development Environment


• Processor: Intel Core i3 / AMD Ryzen 3 or better
• RAM: 4 GB (minimum), 8 GB (recommended)
• Storage: 200 MB free space for Python and packages
• Webcam & Microphone: Built-in or external, for gesture and speech input
4.2.2 User Device Requirements
• Device: Desktop or Laptop
• Display: 720p or better resolution
• Input Devices: Webcam (for image-based waste scanning)
• OS: Windows, Linux, or macOS with Python installed

System Architecture Overview

The system runs as a standalone desktop application, where:


• Tkinter handles the user interface.
• OpenCV accesses the webcam and captures frames of the waste item.
• A cvzone/Keras classifier predicts the waste category from the captured frames.
• SQLite stores local user data, and map links to nearby recycling centers open in the
default browser.
This architecture ensures offline accessibility (classification runs locally) alongside optional
online features (map-based recycling center lookup), making it both lightweight and inclusive.

CHAPTER 5

METHODOLOGY


5.1 Data Collection and Ingestion :


The first step in building an AI-powered waste management and recycling system is to
gather relevant data. This project focuses on waste classification, recycling center geolocation, and
user behavior data. Data is collected from the following sources:

Public Datasets: Datasets from open data platforms like government waste management websites
and environmental agencies.

Crowdsourced Data: Data from mobile apps and IoT sensors related to waste disposal, recycling
patterns, and waste sorting.

Recycling APIs: APIs from platforms like Earth911 or Recycling Near You to gather information
about recycling centers, accepted materials, and locations.

User-generated Data: Data from the application, such as waste categories submitted by users or
their location data for geolocation-based services.

Once collected, the data is ingested into the system using Python libraries like Pandas, and stored
in structured formats like CSV, Excel, or a relational database for future analysis.

Figure 5.1.1: Data Collection Process Diagram
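As a small illustration, the ingestion step above can be sketched with Pandas and SQLite; the CSV columns and table name here are illustrative assumptions, not the project's actual schema:

```python
import io
import sqlite3
import pandas as pd

# Hypothetical crowdsourced export; the column names are illustrative assumptions.
raw_csv = io.StringIO(
    "item,category,city\n"
    "banana peel,biodegradable,Chennai\n"
    "plastic bottle,recyclable,Chennai\n"
)

df = pd.read_csv(raw_csv)

# Persist into a relational store (SQLite here) for later analysis
con = sqlite3.connect(":memory:")
df.to_sql("waste_reports", con, index=False, if_exists="replace")

row_count = con.execute("SELECT COUNT(*) FROM waste_reports").fetchone()[0]
print(row_count)  # 2
```

In practice the same `to_sql` call would point at the project's on-disk database rather than an in-memory one.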

5.2 Data Preprocessing and Cleaning :

Waste classification and user data often contain the following challenges:
Missing values: Certain waste type categories or user locations might be missing.

Inconsistent entries: User input might have typos or multiple representations for the same item
(e.g., "paper" vs. "recycled paper").
Outliers: Unusual or erroneous data points, like a very high number of waste items submitted in a
single transaction.

Redundant features: Unnecessary data, like extra metadata or irrelevant features, that could affect
model performance.

The preprocessing phase will involve:

Handling missing values: Imputing missing data through methods like mean, median, or using
domain-specific filling techniques (e.g., location imputation based on neighboring areas).

Removing duplicates: Ensuring that each data record is unique, eliminating any redundant entries.

Outlier detection: Identifying and removing anomalies using statistical methods such as
Interquartile Range (IQR) or Z-score analysis.

Data type conversion: Ensuring that all categorical features (e.g., waste type, user location) and
numerical data (e.g., amount of waste) are correctly identified and processed.

Encoding categorical variables: Using one-hot encoding or label encoding for categorical
features such as waste category, user profile, etc.

Figure 5.2.1: Data Preprocessing Workflow Diagram
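A minimal sketch of these cleaning steps on toy records (all values illustrative, not project data):

```python
import pandas as pd

# Toy records exhibiting the issues described above
df = pd.DataFrame({
    "waste_type": ["paper", "Paper", "plastic", "plastic", None, "metal"],
    "weight_kg":  [0.5, 0.5, 1.2, 1.2, 0.8, 250.0],   # 250 kg is an outlier
})

# Normalize inconsistent entries, impute missing categories with the mode
df["waste_type"] = df["waste_type"].str.lower()
df["waste_type"] = df["waste_type"].fillna(df["waste_type"].mode()[0])

# Remove exact duplicates
df = df.drop_duplicates()

# IQR-based outlier removal on the numeric column
q1, q3 = df["weight_kg"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["weight_kg"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
df = df[mask]

# One-hot encode the categorical feature
df = pd.get_dummies(df, columns=["waste_type"])
print(sorted(df.columns))
```

The same sequence (normalize, impute, deduplicate, filter outliers, encode) applies to the full dataset regardless of its size.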

5.3 Feature Engineering :

Feature engineering improves the model's accuracy by transforming raw data into valuable
inputs:

Creating new features: Examples include waste-to-recycling ratios, frequency of waste disposal
per user, or geographic distance to the nearest recycling center.

Binning: Converting continuous variables such as user engagement (number of interactions with
the system) into discrete bins (e.g., high, medium, low).

Scaling: Normalizing features like waste volume or recycling frequency to bring them into a
uniform range, especially if distance-based algorithms are used.

Dimensionality reduction: Using techniques like Principal Component Analysis (PCA) or feature
importance analysis from decision trees to eliminate less useful features and reduce model
complexity.

Figure 5.3.1: Feature Selection Overview Diagram
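The binning and scaling steps above can be sketched as follows; the engagement counts and bin edges are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Illustrative per-user engagement counts (interactions with the system)
df = pd.DataFrame({"interactions": [2, 5, 9, 14, 30, 41]})

# Binning: discretize continuous engagement into low/medium/high
df["engagement"] = pd.cut(df["interactions"],
                          bins=[0, 10, 25, np.inf],
                          labels=["low", "medium", "high"])

# Scaling: min-max normalize to [0, 1], useful for distance-based algorithms
col = df["interactions"]
df["interactions_scaled"] = (col - col.min()) / (col.max() - col.min())

print(df["engagement"].tolist())
```

For dimensionality reduction, scikit-learn's `PCA` could be applied to the scaled numeric columns in the same pipeline.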

5.4 Model Training and Evaluation :

For the waste classification and user prediction models, various machine learning
algorithms can be employed. In this project, we focus on Deep Learning Models (e.g.,
Convolutional Neural Networks for image-based waste classification) and Random Forest for the
recycling center location recommendation.

Data Splitting: Dividing the dataset into a training set (80%) and a testing set (20%) for model
evaluation.

Training: The selected deep learning model or Random Forest algorithm is trained using the
training data.

Testing: Predictions are made on the testing set to evaluate how well the model generalizes to
unseen data.

Evaluation Metrics:
Accuracy (for classification tasks)
Precision/Recall/F1 Score (for waste classification)
Mean Absolute Error (MAE) (for regression tasks like distance to nearest recycling
center)
Root Mean Squared Error (RMSE) (for performance evaluation of the model)

Figure 5.4.1: Model Training Flowchart
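The split/train/evaluate cycle described above can be sketched as follows; synthetic tabular features stand in for the real waste data, so the numbers are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for engineered waste features: 200 samples, 4 features
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # separable toy labels

# 80/20 split as described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

pred = model.predict(X_test)
acc = accuracy_score(y_test, pred)
f1 = f1_score(y_test, pred)
print(f"accuracy={acc:.2f} f1={f1:.2f}")
```

The image-classification branch would follow the same pattern with a CNN in place of the Random Forest.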

5.5 Performance Optimization Strategies :

To improve model performance and accuracy, the following strategies are applied:

Hyperparameter Tuning: Techniques such as GridSearchCV or RandomizedSearchCV are used
to optimize model parameters like the number of trees (for Random Forest) or filter size (for CNN).

Cross-validation: k-Fold Cross-Validation is employed to ensure the model's robustness and reduce
overfitting by testing it on different subsets of the dataset.

Ensemble Techniques: Comparison of different models such as Gradient Boosting or XGBoost
for potential improvements in classification and prediction tasks.

Feature Selection: Removing less important features that contribute little to model prediction,
thereby reducing complexity and improving training speed.

Figure 5.5.1: Hyperparameter Tuning and Performance Optimization Diagram
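A minimal GridSearchCV sketch under the same assumptions (synthetic data in place of the project dataset; the small grid is illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = (X[:, 0] > 0).astype(int)  # toy target standing in for waste labels

# Small grid over tree count and depth, scored with 5-fold cross-validation
param_grid = {"n_estimators": [10, 50], "max_depth": [3, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="f1")
search.fit(X, y)

print(search.best_params_, round(search.best_score_, 2))
```

`best_params_` then configures the final model retrained on the full training set.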

5.6 Model Deployment and Integration :

Once the model achieves satisfactory performance, it will be deployed using the following
steps:

Backend Deployment: Deploying the trained model as an API using Flask or Django to interact
with the front-end application.

Frontend Integration: Integrating the model's output (waste classification, recycling
recommendations) into the user interface of the application. This will be a web-based application
that can also support mobile interfaces through Progressive Web Apps (PWA).

Real-Time Prediction: The system will process user inputs (images of waste or voice commands)
in real-time to classify waste and suggest nearby recycling centers.

Model Updates: Continuous monitoring of model performance, and periodic retraining as new
data (such as user feedback or updated recycling center information) becomes available.

Figure 5.6.1: Model Deployment and API Integration Diagram
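A minimal Flask sketch of such an API; the route name and lookup table are illustrative assumptions, and a real deployment would invoke the trained model instead of a dictionary:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical category table; a real deployment would call the trained model here
CATEGORIES = {"banana peel": "biodegradable", "plastic bottle": "recyclable"}

@app.route("/classify", methods=["POST"])
def classify():
    # Expect JSON like {"item": "plastic bottle"}
    item = request.get_json().get("item", "").lower()
    category = CATEGORIES.get(item, "unknown")
    return jsonify({"item": item, "category": category})

# Run locally with:  flask --app <this file> run
```

The front end would POST user input to this endpoint and render the returned category and disposal advice.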

5.7 User Feedback and System Adaptation :

To continuously improve the model and system performance, the following mechanisms
will be implemented:

User Feedback Loop: Collecting user feedback on the waste classification and recycling
recommendations to refine the model.

System Adaptation: The model will be fine-tuned periodically based on new user data, ensuring
that it remains accurate and up-to-date.

CHAPTER 6

RESULTS AND DISCUSSION

6.1 OUTPUT SAMPLES FROM VARIOUS MODULES :

This section summarizes the performance of different functional modules in EnviroMate
during testing. Inputs were given through GUI forms, camera image uploads, and text entries. The
outputs were evaluated based on accuracy, response time, and user experience.

A. Waste Classification (Image-Based Input)

1. Output: The system successfully classified waste into predefined categories such as
biodegradable, non-biodegradable, e-waste, and recyclable.
2. Example: An uploaded image of a banana peel was correctly identified as biodegradable
waste.
3. Accuracy: Between 85% and 92%, depending on lighting and background clarity.
4. Discussion: The CNN model (trained on a custom dataset) performed well in daylight and
controlled environments, with slight confusion between certain visually similar waste
items.

B. Text-Based Waste Queries


1. Output: Users typed waste item names like "plastic bottle" or "vegetable peels", and the
system responded with the correct category and disposal instructions.
2. Example: Input: "plastic bottle" → Output: "Non-biodegradable. Please dispose in the blue
bin or recycle if possible."
3. Accuracy: Over 95% for known keywords.
4. Discussion: The keyword-matching and rule-based logic worked efficiently. For unknown
terms, fallback suggestions were triggered.
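The rule-based logic can be sketched as a simple lookup with a fallback; the entries mirror the examples above, but the function and table names are assumptions:

```python
# Minimal rule-based lookup in the spirit of the text-query module; entries illustrative.
DISPOSAL_RULES = {
    "plastic bottle": ("non-biodegradable",
                       "Please dispose in the blue bin or recycle if possible."),
    "vegetable peels": ("biodegradable", "Compost or use the green bin."),
}

def answer_query(text):
    key = text.strip().lower()
    if key in DISPOSAL_RULES:
        category, tip = DISPOSAL_RULES[key]
        return f"{category.capitalize()}. {tip}"
    # Fallback suggestion for unknown terms
    return "Item not recognized. Try a simpler name, e.g. 'plastic bottle'."

print(answer_query("Plastic Bottle"))
```

Normalizing case and whitespace before the lookup is what makes inputs like "Plastic Bottle" resolve correctly.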

C. Suggestions & Tips Module :

1. Output: The assistant provided eco-friendly disposal tips, waste reduction ideas, and
recycling techniques.
2. Example: Input: "metal can" → Output: "Rinse and recycle. Use yellow bin if available."
3. Additional Feature: For educational purposes, facts about environmental impact were
shown alongside the tips.

D. GUI Interaction with Tkinter :

1. Output: User-friendly interface with real-time response to button clicks and inputs.
2. Discussion: The system maintained consistent frame transitions and feedback messages.
Image upload, dropdown selection, and input validation all worked as expected.

6.2 STRENGTHS OF THE SYSTEM


The EnviroMate project demonstrated several strengths during its testing phase,
especially in terms of modular performance and accessibility.

A. Accurate Waste Identification


• High classification accuracy using a CNN-based model for common waste items.
• Differentiated between organic, inorganic, e-waste, and recyclables.

B. Educational and User-Friendly


• Displayed tips, environmental facts, and bin color guidance, making it suitable for use
in schools, homes, and municipal awareness programs.
• Easy to use even by non-technical users due to its simple Tkinter interface.

C. Lightweight and Offline Support


• Fully functional offline (no need for constant internet).
• Fast image processing and response using lightweight ML models.

D. Environment-Focused Intelligence
• Suggested eco-friendly alternatives and disposal tips based on user input.
• Aimed at behavioral change by promoting sustainable practices.

E. Expandable Architecture
• Can be extended to include:

o QR-code bin mapping
o Waste pickup scheduler
o Voice-based queries using SpeechRecognition
o Mobile version using Kivy or PWA

Conclusion of Results :

The EnviroMate system performed reliably across multiple input types and user
interactions. Its strength lies in its intelligent waste categorization, interactive learning content, and
easy-to-use interface. Future improvements may include expanding the waste dataset, integrating
multilingual support, and deploying the system on cloud or mobile platforms for broader
accessibility.

CHAPTER 7

CHALLENGES AND LIMITATIONS

7.1 Variability in User Input Quality :

One of the primary challenges encountered in EnviroMate was managing the inconsistency
in user-submitted inputs, particularly for image-based classification.
• Blurry or Low-Resolution Images: Images captured with low-quality or shaky cameras
significantly impacted waste type prediction.
• Lighting and Background Issues: Dark lighting, shadows, or cluttered backgrounds made
waste item identification more difficult.
• Unclear Object Positioning: Waste items photographed at odd angles or partially visible
were more likely to be misclassified.
To mitigate these issues, basic preprocessing techniques such as resizing, sharpening, and contrast
enhancement were implemented. However, real-world performance still varies based on user
adherence to input guidelines.
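These preprocessing steps can be sketched with Pillow (already part of the project stack); the 224x224 input size is an assumption about the model, not a confirmed detail:

```python
from PIL import Image, ImageEnhance, ImageFilter

def preprocess(img):
    """Resize, sharpen, and boost contrast before classification."""
    img = img.resize((224, 224))                   # model input size (assumed)
    img = img.filter(ImageFilter.SHARPEN)          # counteract mild blur
    img = ImageEnhance.Contrast(img).enhance(1.3)  # gentle contrast boost
    return img

# Demo on a synthetic grey image standing in for a user photo
sample = Image.new("RGB", (640, 480), (120, 120, 120))
out = preprocess(sample)
print(out.size)  # (224, 224)
```

Applying the same transform at training and inference time keeps the model's input distribution consistent.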

7.2 Classification Accuracy for Diverse Waste Types :

While the model handled common waste items (like paper, food scraps, or plastic) with high
confidence, it faced limitations with:
• Visually Similar Objects: Confusion between recyclable plastics and non-recyclables
(e.g., plastic toys vs. bottles).
• E-Waste & Complex Items: Items like batteries, circuit boards, or mixed-material
packaging often caused misclassification.
• Unlabeled or Custom Waste: Rare or region-specific items not seen during training were
either rejected or wrongly tagged.
This limitation can be addressed in the future by expanding the training dataset to include more
real-world samples and rare waste categories.

7.3 Limited Dataset and Overfitting Risk :

Due to constraints in project scope and timeline, the training dataset was relatively small and
manually curated.
• Insufficient Class Balance: Some waste categories were underrepresented (e.g., metal
waste, e-waste).
• Risk of Overfitting: The model tended to memorize training images rather than generalize
to unseen examples.
• Lack of Diversity: The absence of images taken in varied lighting conditions, angles, or
environments reduced the model's robustness.
Future work should include the use of large-scale, open-source waste datasets and synthetic data
augmentation to boost generalization.
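A minimal augmentation sketch using NumPy only; the transforms shown are generic examples, not the project's actual pipeline:

```python
import numpy as np

def augment(img):
    """Return simple variants of one image array: flip, rotate, brighten."""
    return [
        img,
        np.fliplr(img),               # horizontal flip
        np.rot90(img),                # 90-degree rotation
        np.clip(img * 1.2, 0, 255),   # brighter copy
    ]

# Tiny synthetic 8x8 RGB image (float dtype to avoid uint8 overflow when brightening)
sample = np.full((8, 8, 3), 100.0)
batch = augment(sample)
print(len(batch))  # 4
```

Each original image yields several training samples, which helps balance underrepresented classes and reduce overfitting.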

7.4 GUI Design Constraints :

The use of Tkinter ensured quick development and simplicity, but it also introduced design
limitations:
• Lack of Responsive Layout: The UI doesn’t automatically scale well across different
screen sizes or resolutions.
• Limited Multimedia Support: Integration of features like drag-and-drop, voice input, or
camera feed was restricted.
• Basic Error Feedback: Input errors (like unsupported file formats) had to be handled with
simple message boxes.
Switching to a more advanced GUI framework (e.g., PyQt, Kivy, or web-based interfaces) would
allow for more dynamic, user-friendly features.

7.5 Offline Performance and Speed:

While EnviroMate works offline (an intentional design choice), this created constraints in
terms of model size and speed.
• Lightweight Model Requirement: Heavy models could not be deployed, leading to some
trade-offs in accuracy.
• Processing Delay: Image classification could take a few seconds on older machines due to
limited optimization.
• No Cloud Support: Without internet connectivity, users couldn’t benefit from scalable
cloud-based inference or data logging.
Future improvements may include an optional online mode for enhanced performance on
supported devices.

7.6 Text Query and Knowledge Base Limitations:

The assistant’s response system for text queries is currently based on rule-based logic and
simple keyword matching.
• Limited Language Understanding: Complex or grammatically incorrect queries may be
misinterpreted.
• No Semantic Search: The assistant lacks deeper NLP understanding or paraphrasing
capabilities.
• Restricted Tip Database: Suggestions and eco-tips are pre-written and do not dynamically
adapt to regional practices or updates.
Integrating NLP models like BERT or using a cloud-based FAQ knowledge base could overcome
these limitations.
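One lightweight step in that direction, shown purely as an illustration (not the project's implementation), is token-overlap matching, which tolerates extra words around a known item name:

```python
# Jaccard token overlap: a small step beyond exact keyword matching.
def jaccard(a, b):
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Known item names (illustrative subset of the tip database)
KNOWN = ["plastic bottle", "vegetable peels", "metal can"]

def best_match(query, threshold=0.3):
    """Return the closest known item, or None if nothing is similar enough."""
    score, key = max((jaccard(query, k), k) for k in KNOWN)
    return key if score >= threshold else None

print(best_match("an old plastic water bottle"))  # plastic bottle
```

A true semantic search (e.g., BERT sentence embeddings) would replace `jaccard` with cosine similarity over embedding vectors.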

7.7 Scalability and Dataset Maintenance :

As new types of waste emerge or classification rules change, keeping the system updated is an
ongoing challenge.
• Manual Dataset Updates: Adding new waste types or re-training models requires
developer intervention.
• Static Output Responses: Without an adaptive backend, all suggestions remain fixed and
need manual revisions.
• Version Control: Changes in model versions or tips database must be carefully logged to
avoid inconsistencies in advice.
Developing a modular, update-ready backend with versioning and a semi-automated training
pipeline would enhance scalability.

7.8 Educational and Behavioral Limitations:

While the project aims to promote environmental awareness, influencing actual behavior
remains difficult.
• Limited Motivation Triggers: Users may ignore suggestions or not follow disposal
instructions correctly.

• No Feedback or Progress Tracking: Users receive tips but cannot see the environmental
impact of their actions.
• Language and Accessibility Gaps: The system currently supports only English and basic
text display, which may not suit all user groups.
Integrating gamification, multilingual content, and accessibility features (like text-to-speech or
image-to-audio) could increase impact.

7.9 Security and Privacy Considerations:

Though no personal data is stored or transmitted, basic security precautions are still important.
• File Handling Risks: Uploaded images must be checked to prevent malicious formats or
code injection.
• Log Access: If future versions include data storage, access control and encryption will be
essential.
• User Privacy: Even non-personal waste images can reveal household habits or location if
misused.

CHAPTER 8

APPENDICES

8.1 SOURCE CODE

import cv2
import time
import sqlite3
import webbrowser
from tkinter import *
from tkinter import messagebox
from tkinter import ttk
from tkinter.ttk import Progressbar

from cvzone.ClassificationModule import Classifier
from PIL import Image, ImageTk

# Load the trained Keras model and its label file
classifier = Classifier("Model/keras_model.h5", "Model/labels.txt")

# Local database for user data (bin history, settings, etc.)
con = sqlite3.connect('enviromate.db')
cursor = con.cursor()

# Read the class names from the labels file (each line: "<index> <name>")
with open('Model/labels.txt') as fh:
    classes = [line.split()[-1] for line in fh if line.strip()]

win = Tk()
win.geometry("1300x700")
win.title("Start")
win.config(bg='#6BD275')

# GUI image assets (the exit/scan/footprint/bin file names were not listed in the
# original source and are assumed here)
page1 = PhotoImage(file='Image/page1.png')
page2 = PhotoImage(file='Image/page2.png')
strt_img = PhotoImage(file='Image/start.png')
sett_img = PhotoImage(file='Image/settings.png')
exit_img = PhotoImage(file='Image/exit.png')
scan_img = PhotoImage(file='Image/scan.png')
carbFoot_img = PhotoImage(file='Image/footprint.png')
bin_img = PhotoImage(file='Image/bin.png')

win1 = Frame(win, height=700, width=1300)
win1.pack()
img1 = Label(win1, image=page1)
img1.pack()

def image_tk(img):
    # Helper to load an image file as a Tkinter PhotoImage
    return PhotoImage(file=img)

def back_to_menu(window, cam=False):
    # Close the current page; release the camera if it was open
    window.destroy()
    if cam:
        cam[0].release()
        cam[1].destroy()
    start()

def map_locate(link):
    # Open a recycling center location in the default browser
    webbrowser.open(link)

def start():
    win1.destroy()
    win.title("Menu")
    win2 = Frame(win, height=700, width=1300)
    win2.pack()
    img2 = Label(win2, image=page2)
    img2.pack()

    def Scan():
        win2.destroy()
        win.title("Scanner")
        cap = cv2.VideoCapture(0)
        cap.set(3, 640)   # frame width
        cap.set(4, 480)   # frame height
        predictions = []

        def update():
            # Grab a frame and show it in the video label; reschedules itself
            # until the capture is released
            ret, frame = cap.read()
            if ret:
                global frame_res
                frame_res = cv2.resize(frame, (422, 377))
                rgb_img = cv2.cvtColor(frame_res, cv2.COLOR_BGR2RGB)
                pil_img = Image.fromarray(rgb_img)
                tk_img = ImageTk.PhotoImage(pil_img)
                vid.config(image=tk_img)
                vid.image = tk_img
                vid.after(10, update)

        def Scanner():
            global frame_res
            predicts = []
            dict_pred = {}
            prog_lab = Label(win, bg='#A1F2A5', text="",
                             font=("Times New Roman", 23), fg="#6BD275")
            prog_lab.place(relx=0.63, rely=0.558)

            # Capture ten frames while animating an "Analyzing..." progress bar
            prog_var = IntVar()
            prog = Progressbar(win, orient=HORIZONTAL, length=300,
                               mode='determinate', variable=prog_var)
            prog.place(relx=0.585, rely=0.678)
            for i in range(1, 11):
                ret, frame = cap.read()
                if ret:
                    frame_res = cv2.resize(frame, (422, 377))
                predictions.append(frame_res)
                prog_lab.config(text="Analyzing" + '.' * i)
                for j in range(100):
                    prog_var.set(j)
                    win.update_idletasks()
                    time.sleep(0.008)
            prog_var.set(100)
            prog.destroy()
            cap.release()
            vid.destroy()

            def information(category):
                # Show the result page for the detected waste category
                win3.destroy()
                fram.destroy()
                scan.destroy()
                prog_lab.destroy()
                win.title(category.capitalize())
                win4 = Frame(win, height=700, width=1300)

            # Classify every captured frame; getPrediction returns the
            # probability vector and the index of the predicted class
            for img in predictions:
                prediction, index = classifier.getPrediction(img)
                predicts.append(classes[index])
            for name in set(predicts):
                dict_pred[name] = predicts.count(name)

            if dict_pred:
                votes = max(dict_pred.values())
                if votes > 5:   # a clear majority of the ten frames agree
                    for name, count in dict_pred.items():
                        if count == votes:
                            prog_lab.config(text=name)
                            information(name)
                            break
                else:
                    messagebox.showerror("ERROR", "TRY AGAIN !")
            else:
                messagebox.showerror("ERROR", "Please restart and try !")

        pag = PhotoImage(file='Image/page3.png')
        win3 = Label(win, image=pag)
        win3.pack()

        fram = Frame(win, height=430, width=480, bg='#BFF2B7')
        fram.place(x=183, y=176)
        vid = Label(fram, bg='#BFF2B7')
        vid.pack()

        back = ttk.Button(text='Back',
                          command=lambda: back_to_menu(win3, [cap, fram]))
        back.place(x=0, y=0)
        scan = Button(win, command=Scanner, image=scan_img, bd=0, relief=FLAT,
                      bg="#A1F2A5", fg="#A1F2A5")
        scan.place(relx=0.58, rely=0.409)
        update()

    def footprint():
        pass   # carbon-footprint tracker (planned feature)

    def yourBin():
        pass   # personal bin management (planned feature)

    scan = Button(win2, command=Scan, image=scan_img, bd=0, relief=FLAT,
                  bg="#A1F2A5", fg="#A1F2A5")
    scan.place(relx=0.3815, rely=0.353)
    footPrint = Button(win2, command=footprint, image=carbFoot_img, bd=0,
                       relief=FLAT, bg="#A1F2A5", fg="#A1F2A5")
    footPrint.place(relx=0.3789, rely=0.4885)
    Bin = Button(win2, command=yourBin, image=bin_img, bd=0, relief=FLAT,
                 bg="#A1F2A5", fg="#A1F2A5")
    Bin.place(relx=0.38, rely=0.624)

def settings():
    pass   # settings page (planned feature)

strt = Button(win1, command=start, image=strt_img, bd=0, relief=FLAT,
              bg="#A1F2A5", fg="#A1F2A5")
strt.place(relx=0.3787, rely=0.353)
setng = Button(win1, command=settings, image=sett_img, bd=0, relief=FLAT,
               bg="#A1F2A5", fg="#A1F2A5")
setng.place(relx=0.3787, rely=0.484)
Exit = Button(win1, command=quit, image=exit_img, bd=0, relief=FLAT,
              bg="#A1F2A5", fg="#A1F2A5")
Exit.place(relx=0.3787, rely=0.6195)

win.mainloop()

8.2 SCREENSHOTS

Figure.No:8.1 Home page interface (frontend UI)

Figure.No:8.2 Object scanning interface

Figure.No:8.3 Classification of the detected objects

Figure.No:8.4 Locating the nearby recycle shops

CHAPTER 9

CONCLUSION AND FUTURE ENHANCEMENTS

9.1 Conclusion
In this project, we developed EnviroMate, a smart waste management assistant that
combines image classification, text-based interaction, and an intuitive Tkinter-based GUI to
educate users on proper waste disposal. The system uses a lightweight machine learning model to
classify waste items into biodegradable, recyclable, or hazardous categories and offers eco-friendly
suggestions to encourage responsible behavior.

The core objective was to create a user-friendly, offline-capable tool that helps bridge the gap
between household waste disposal and environmental awareness. The assistant allows users to
either upload an image of waste or enter a text query to receive disposal advice and sustainability
tips. This dual-input approach improves accessibility and serves both visual and verbal
communication preferences.

Our implementation successfully balances technical functionality and environmental education,
making it suitable for schools, households, and small communities seeking to build eco-conscious
habits. The simplicity of Tkinter ensures ease of deployment across different systems, while the
modular design allows room for future upgrades.

9.2 Future Enhancements


To improve the usability, accuracy, and scalability of EnviroMate, the following future
enhancements are proposed:

Multilingual Support
Integrate support for multiple languages (e.g., Tamil, Hindi) to ensure accessibility for non-English
speakers and increase adoption in diverse regions.

Larger and More Diverse Training Dataset


Expand the waste image dataset to include more real-world scenarios, such as complex or mixed
waste types, e-waste, and regional waste materials.

Real-Time Camera Integration
Allow users to use a device camera to capture and classify waste items instantly, eliminating the
need to upload saved images.

Gamification and Progress Tracking


Introduce interactive features such as eco-scores, badges, or daily tips to motivate consistent use
and promote behavioral change through gamification.

Text-to-Speech (TTS) Support


Add audio feedback to make the assistant accessible for visually impaired users and for educational
settings like classrooms.

Cloud Integration for Data Logging


Enable optional cloud connectivity to store user queries and feedback, which can help improve
recommendations over time through feedback-driven model refinement.

Mobile App Deployment


Port the system into a lightweight mobile application for broader access and real-time usage,
especially in rural or on-the-go settings.

Final Note:
While the current version of EnviroMate offers a promising step toward AI-assisted waste
classification and education, these enhancements will further transform it into a scalable, inclusive,
and intelligent platform for sustainable living.

CHAPTER 10

REFERENCES

1. Rad, M. R., von Kaenel, P., Minter, J., Mazzolini, A., Zhang, Y., Soleimani, E., ...
& Mojsilovic, A. (2017). A computer vision system to localize and classify wastes
on the streets. In 2017 IEEE Winter Conference on Applications of Computer
Vision (WACV) (pp. 1037-1045). IEEE.

2. Mittal, A., Bhardwaj, A., & Raj, R. (2016). IoT based smart waste management
system: A research paper. International Journal of Computer Applications,
162(3), 6-9.

3. Yang, Y., Nguyen, T., San, P. P., Jin, Y., & See, S. (2016). Deep learning for
practical image recognition: Case study on waste classification. In 2016 IEEE
International Conference on Big Data and Smart Computing (BigComp) (pp. 313-
316). IEEE.

4. Saha, M., & Bhattacharya, J. (2021). Solid waste management using digital twin
technology. Environmental Science and Pollution Research, 28(16), 20428–
20440.

