Project Report
Submitted by
AKHILESH J (311523243007)
PRAVEEN R (311523243046)
DEPARTMENT OF ARTIFICIAL INTELLIGENCE AND DATA SCIENCE
Submitted for the end semester project review of Machine Learning in the
Department of ________________________ held on .
ACKNOWLEDGEMENT
We express our sincere thanks to the management of our college (An Autonomous Institution) for supporting and motivating us during the course of study. We wish to express our deep sense of gratitude to Mr. N. Sreekanth, M.S., Secretary of the college, Chennai, for providing an excellent academic environment and facilities for pursuing our B.Tech program. We extend our heartfelt gratitude and sincere thanks to Mrs. N. Mathangi, M.E., (Ph.D.), Head of the Department, AI&DS.
We owe our wholehearted thanks and appreciation to the entire staff of the AI&DS department for their assistance during the course of our study. We hope to build upon the experience and knowledge gained in this course to make a valuable contribution. We would also like to thank our family and friends who motivated us during the course of this project.
ABSTRACT
EnviroMate is an intelligent mobile application designed to promote responsible
environmental practices. The system uses deep learning models for intelligent waste
classification through live camera input, identifying categories such as paper, plastic, e-waste,
glass, and organic matter with high precision. Upon detection, the application dynamically
locates and recommends the nearest recycling or disposal facilities using integrated geo-tagging
and mapping APIs. In addition to core functionalities, the system offers educational insights on
waste impact and eco-friendly practices to raise environmental awareness. Featuring a user-
centric interface and adaptive design, EnviroMate empowers individuals across homes, schools,
and workplaces. By combining machine learning, geolocation services, and environmental data,
the project delivers an end-to-end sustainable waste management solution.
Keywords: Deep Learning, Waste Classification, Geo-Tagging
TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
1 INTRODUCTION
1.1 BACKGROUND
1.3 OBJECTIVES
2 LITERATURE REVIEW
3 SYSTEM DESIGN
4 SYSTEM REQUIREMENTS
4.1 SOFTWARE REQUIREMENTS
5 METHODOLOGY
8 APPENDICES
8.2 SCREENSHOTS
10 REFERENCES
LIST OF FIGURES
FIGURE NO.  FIGURE NAME
8.1         Home page interface (frontend UI)
8.2         Object scanning interface
8.3         Classification of the detected objects
CHAPTER 1
INTRODUCTION
1.1 BACKGROUND
The rapid growth of urbanization and consumerism has led to an unprecedented increase in
waste generation, posing serious environmental and public health challenges worldwide.
Traditional waste management systems, often reliant on manual sorting and limited public
awareness, struggle to keep pace with the growing complexity and volume of waste streams.
Moreover, many individuals lack the knowledge or motivation to dispose of waste responsibly,
resulting in improper recycling practices and increased environmental degradation.
One of the key barriers to effective waste management is the general population’s limited
understanding of material classification and disposal methods. Conventional recycling programs
often require users to sort waste accurately—yet without the proper tools or guidance, this process
becomes error-prone and inefficient. Furthermore, the absence of real-time information on local
recycling options discourages user participation in sustainable practices.
To bridge this gap, the proposed project introduces EnviroMate, an intelligent mobile application
that empowers users to participate in environmentally responsible behavior through a seamless,
AI-driven experience. By utilizing real-time image recognition, EnviroMate can instantly identify
waste types—ranging from paper and plastics to electronic and organic materials—via a
smartphone camera. The system then provides immediate feedback and navigational support by
directing users to the nearest appropriate recycling or disposal facility, fostering timely and correct
waste handling.
In addition to its core classification and navigation features, EnviroMate promotes environmental
awareness by delivering educational insights on the long-term impact of various waste materials.
Its user-friendly design ensures accessibility across age groups and technical proficiencies, making
it an ideal tool for households, schools, and workplaces. Through the integration of machine
learning, computer vision, and geospatial services, this solution paves the way for smarter waste
management and a more eco-conscious society.
1.2 PROBLEM STATEMENT
1.3 OBJECTIVES
This project aims to develop an intelligent, AI-powered waste management assistant that
empowers individuals to identify, sort, and dispose of waste responsibly through real-time image
recognition and location-aware services. The following objectives define the core features
driving this sustainable innovation:
Dual-Mode Accessibility: To build a two-way input system that supports both voice (via the
Web Speech API) and sign language gestures (via computer vision), enabling inclusive
participation from both regular users and those with speech or hearing disabilities.
1.4 SCOPE OF THE PROJECT
The scope of this project focuses on the design and development of EnviroMate, an AI-
powered mobile application aimed at promoting responsible waste management through real-time
waste recognition and intelligent recycling guidance. This solution targets individual users—
whether at home, school, or in public settings—who seek a simple, reliable way to classify waste
and locate nearby disposal or recycling facilities.
EnviroMate leverages deep learning models for accurate image-based classification of common
waste types, including paper, plastic, metal, glass, e-waste, and organic materials. Users interact
with the system through their smartphone camera, enabling intuitive and touch-free identification
of discarded items. Once classified, the system uses geolocation services and mapping APIs to
direct users to the closest appropriate recycling center or drop-off point.
Beyond core identification and navigation functions, the application also includes an educational
component that provides users with brief but impactful eco-insights. These insights explain the
long-term environmental effects of various waste types and suggest best practices for sustainable
disposal, encouraging environmentally conscious habits.
The project is designed to operate across major mobile platforms (Android and iOS), ensuring a
seamless and responsive user experience. The application will include a clean, user-friendly
interface that supports quick interactions and visual feedback.
This project serves as a practical and scalable entry point into smart, tech-enabled environmental
solutions, laying the groundwork for future expansions involving advanced sorting, broader
material databases, multilingual support, and integration with municipal waste systems.
1.5 SIGNIFICANCE OF STUDY
This study holds substantial significance in the context of environmental sustainability and the
practical application of artificial intelligence for public benefit. Improper waste segregation and
disposal continue to pose critical challenges to ecosystems and urban infrastructure worldwide.
While numerous awareness campaigns and regulations exist, the lack of real-time, user-friendly
tools remains a barrier to individual participation in responsible waste management.
The project is especially impactful for educational institutions, workplaces, and households
seeking to cultivate green habits. Its user-centric design ensures accessibility across age groups and
technical skill levels, making it a scalable tool for widespread adoption. Furthermore, the inclusion
of eco-insights provides users with contextual knowledge about the long-term environmental
effects of different waste materials, bridging the gap between awareness and behavior change.
From a technological standpoint, the study contributes to the evolving field of AI for environmental
applications. It showcases the use of deep learning in real-time object classification and reinforces
the importance of intuitive design in encouraging public engagement with sustainability efforts.
The system serves as a model for future smart-city initiatives, where AI-powered tools assist
citizens in making eco-conscious decisions.
By aligning with the principles of the United Nations Sustainable Development Goal 11
(Sustainable Cities and Communities) and Goal 13 (Climate Action), this project exemplifies
how digital innovation can drive positive environmental impact at both the individual and
community levels.
CHAPTER 2
LITERATURE REVIEW
• Convolutional Neural Networks (CNNs): CNNs are widely used in image classification
tasks, including waste type recognition. Models like MobileNet, ResNet, and Inception
have been trained on waste datasets to categorize items such as plastic, glass, paper, and
organic materials. These models offer high accuracy in identifying visual patterns in real-
world waste images.
• YOLO (You Only Look Once): An object detection model that enables real-time
identification of multiple objects within a single frame. YOLO has been applied in smart
bin and waste-sorting applications to detect and classify different types of trash quickly and
efficiently.
• Geolocation and Mapping APIs: Technologies such as Google Maps API and
OpenStreetMap have been used in apps to help users locate nearby recycling centers or
disposal facilities. These tools enhance user convenience and encourage proper waste
segregation by minimizing the effort involved in finding the right location.
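To make the mapping idea concrete, the following minimal offline sketch computes the nearest facility that accepts a given waste type using the haversine distance. The facility table, coordinates, and function names are hypothetical illustrations; a deployed app would query the Google Maps or OpenStreetMap APIs instead of a hard-coded list.

from math import radians, sin, cos, asin, sqrt

# Hypothetical facility table; a real app would fetch this from a
# mapping or recycling-directory API instead of hard-coding it.
FACILITIES = [
    {"name": "City Recycling Hub", "lat": 13.0418, "lon": 80.2337,
     "accepts": {"paper", "plastic", "glass"}},
    {"name": "E-Waste Drop-Off Point", "lat": 13.0680, "lon": 80.2400,
     "accepts": {"e-waste"}},
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_facility(lat, lon, waste_type):
    """Closest facility that accepts the classified waste type, or None."""
    options = [f for f in FACILITIES if waste_type in f["accepts"]]
    return min(options, default=None,
               key=lambda f: haversine_km(lat, lon, f["lat"], f["lon"]))

print(nearest_facility(13.05, 80.23, "plastic")["name"])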
2.2 COMPARATIVE ANALYSIS OF EXISTING WASTE MANAGEMENT SYSTEMS
The current landscape of digital waste management tools includes a variety of applications and
platforms, each attempting to address environmental challenges through different technical and
user-focused strategies. While these systems provide varying degrees of support for proper waste
segregation and disposal, they differ significantly in terms of functionality, accessibility, and
technological integration. A comparative analysis of notable existing solutions is presented below:
1. Manual Search-Based Recycling Directories (e.g., Earth911, Recycling Near You)
These platforms allow users to search for items manually and receive recycling instructions or the
location of nearby facilities.
• Strengths: Informative and location-aware; useful for users already knowledgeable about
waste types.
• Limitations: Depend heavily on user input; no image recognition; lack of real-time
automation.
2. Smart Bin Systems with Sensors and AI (e.g., Bin-e, CleanRobotics)
These are physical waste bins integrated with AI and sensor-based systems for automated sorting
and disposal.
• Strengths: High precision in waste identification; suitable for public infrastructure and
high-traffic areas.
• Limitations: High implementation cost; limited accessibility for individual users; not
mobile or widely scalable.
3. Mobile Applications with Basic AI Features (e.g., Recyclica, TrashOut)
These apps incorporate limited AI features, sometimes using barcode scanning or database
matching to categorize waste.
• Strengths: More interactive than manual systems; offer educational content.
• Limitations: Lack image-based recognition; limited to structured input; do not personalize
content delivery.
4. Educational Platforms Promoting Sustainability
Some platforms focus on spreading environmental awareness through games, videos, or content
recommendations.
• Strengths: Help build eco-conscious behavior, especially among youth.
2.3 CONTRIBUTIONS AND DISTINCTIONS OF ENVIROMATE IN WASTE
MANAGEMENT SYSTEMS
EnviroMate introduces a transformative approach to waste identification and recycling
facilitation by bridging the gap between environmental responsibility and smart technology. Unlike
conventional tools that rely on manual entry or offer limited automation, EnviroMate distinguishes
itself through its unique blend of artificial intelligence, user engagement, and accessibility.
Key Contributions of EnviroMate:
1. AI-Powered Visual Waste Classification:
o Leverages advanced computer vision models to recognize and categorize waste
(e.g., paper, plastic, metal, organic, e-waste) in real time using a smartphone
camera.
o Enhances accuracy and speed, removing the need for manual sorting or item search.
2. Integrated Smart Recycling Hub Locator:
o Connects users instantly to the nearest relevant recycling or waste disposal center
using GPS and mapping services.
o Promotes timely and proper disposal, reducing the likelihood of environmental
harm.
3. Eco-Education and Behavioral Awareness:
o Provides personalized insights about the environmental impact of each waste type
and suggests sustainable alternatives.
o Encourages behavioral change through learning, not just action.
4. User-Centric, Mobile-First Design:
o Designed for accessibility across age groups and educational levels, with an
intuitive interface suitable for individuals, families, and institutions.
o Operates across devices (smartphones, tablets), increasing its reach and
adaptability.
5. Support for a Greener Ecosystem:
o Fosters eco-responsibility through intelligent feedback loops, visual cues, and
location-based nudges.
o Contributes to broader sustainability goals by empowering users to take immediate,
informed action.
CHAPTER 3
SYSTEM DESIGN
3.1 LIMITATIONS OF EXISTING SYSTEMS
Manual Sorting Methods: Most systems still rely on manual sorting by waste collection services
or individuals, which can be inaccurate and time-consuming, especially for non-experts.
Lack of Real-Time Waste Identification: Many waste management systems do not provide real-
time feedback on how to sort different materials, which leads to improper recycling practices.
Limited Educational Support: Existing systems offer minimal educational content about
recycling practices, failing to raise awareness about environmental impacts and best practices for
waste disposal.
Non-Interactive User Experience: Most waste management systems have static interfaces that
do not adapt to user needs, leading to a lack of engagement and understanding.
Due to these challenges, there is a clear need for a more intelligent, AI-powered system that offers
real-time waste identification, location-based disposal recommendations, and dynamic educational
support.
3.2 PROPOSED SYSTEM ARCHITECTURE
Educational Insights & Feedback Layer:
User Guidance: Provides feedback about waste sorting, along with educational content such as the
environmental impact of incorrect disposal and tips for sustainable practices.
Visual & Textual Information: Integrates text-based information and images to explain recycling
practices.
User Interface:
Mobile and Web App: A user-friendly interface that works across devices to provide seamless
interaction with the system.
Real-Time Interaction: Allows users to input images of waste, receive feedback, and locate the
nearest recycling facilities.
The Deep Learning Model used in EnviroMate primarily revolves around Convolutional
Neural Networks (CNN), which are highly effective for image-based classification tasks. Given
the visual nature of the input (images of waste), CNNs are particularly well-suited for accurately
identifying and categorizing different types of waste materials.
Model Description:
Convolutional Neural Network (CNN): The core deep learning model used for image recognition
and classification of waste items. It uses multiple convolutional layers to extract features and
classify the images into predefined categories.
Transfer Learning (e.g., ResNet, VGG): Pre-trained CNN architectures like ResNet or VGG are
used to leverage existing models trained on large datasets like ImageNet, adapting them to the
waste classification task by fine-tuning them on specific waste datasets.
Data Augmentation: Techniques like rotation, flipping, and scaling of images to expand the
training dataset, improving model robustness and accuracy.
Model Training and Optimization: The CNN model is trained using a large dataset of labeled
waste images. Optimization techniques like the Adam optimizer and dropout layers are used to
reduce overfitting and improve model performance.
The CNN-based deep learning model was selected for the final implementation due to its high
accuracy in visual recognition tasks and its ability to generalize well to various types of waste
materials.
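As an illustration of the transfer-learning setup described above, the following sketch builds a MobileNetV2-based classifier with the augmentation, dropout, and Adam optimizer mentioned in this section. The class count, input size, and layer choices are assumptions for illustration, not the project's actual training script.

import tensorflow as tf

NUM_CLASSES = 6  # paper, plastic, metal, glass, e-waste, organic (assumed count)

# Frozen ImageNet backbone with a new classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze for the initial fine-tuning pass

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    # Data augmentation: rotation, flipping, and zoom, as described above.
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),  # dropout layer to reduce overfitting
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Adam optimizer, as noted in the training-and-optimization step above.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()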
Effective data analysis is vital for building a reliable waste management system. The
following steps were taken during data analysis:
Waste Classification Accuracy: Metrics such as accuracy, precision, recall, and F1-score were
computed to evaluate the classifier.
Confusion Matrix: A confusion matrix was generated to assess the classification accuracy of
each waste category and reveal commonly confused classes.
Feature Importance: Analysis was conducted to understand which waste features (e.g., size,
shape, colour) most influenced the model's predictions.
Recycling Hub Proximity: Visualized the distance from user locations to nearby recycling hubs.
User Interaction Trends: Analyzed user engagement to identify patterns in how users interact
with the system.
These analyses were crucial in optimizing the system for improved waste sorting accuracy, user
engagement, and efficient waste disposal recommendations.
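For reference, the evaluation metrics and confusion matrix described above can be computed with scikit-learn as follows; the label arrays here are illustrative stand-ins for the actual test-set results.

from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Illustrative stand-ins; in practice these come from the held-out test set.
y_true = ["paper", "plastic", "glass", "paper", "organic", "plastic"]
y_pred = ["paper", "plastic", "paper", "paper", "organic", "glass"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, zero_division=0))  # per-class precision/recall/F1
print(confusion_matrix(y_true, y_pred))  # rows: true classes, columns: predictions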
CHAPTER 4
SYSTEM REQUIREMENTS
4.1 SOFTWARE REQUIREMENTS
This project is developed using Python and its standard libraries, with a GUI built in
Tkinter. It integrates image classification, speech recognition, and gesture detection (via image
processing), together with external APIs for location and recycling information.
CHAPTER 5
METHODOLOGY
Public Datasets: Datasets from open data platforms like government waste management websites
and environmental agencies.
Crowdsourced Data: Data from mobile apps and IoT sensors related to waste disposal, recycling
patterns, and waste sorting.
Recycling APIs: APIs from platforms like Earth911 or Recycling Near You to gather information
about recycling centers, accepted materials, and locations.
User-generated Data: Data from the application, such as waste categories submitted by users or
their location data for geolocation-based services.
Once collected, the data is ingested into the system using Python libraries like Pandas, and stored
in structured formats like CSV, Excel, or a relational database for future analysis.
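A minimal ingestion sketch using Pandas might look like the following; the file names are hypothetical placeholders for the sources listed above.

import pandas as pd

# Hypothetical file names standing in for the data sources described above.
waste_df = pd.read_csv("data/waste_records.csv")
centers_df = pd.read_excel("data/recycling_centers.xlsx")

# Persist a consolidated copy for the later analysis steps.
waste_df.to_csv("data/processed/waste_records_clean.csv", index=False)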
Waste classification and user data often contain the following challenges:
Missing values: Certain waste type categories or user locations might be missing.
Inconsistent entries: User input might have typos or multiple representations for the same item
(e.g., "paper" vs. "recycled paper").
Outliers: Unusual or erroneous data points, like a very high number of waste items submitted in a
single transaction.
Redundant features: Unnecessary data, like extra metadata or irrelevant features, that could affect
model performance.
Handling missing values: Imputing missing data through methods like mean, median, or using
domain-specific filling techniques (e.g., location imputation based on neighboring areas).
Removing duplicates: Ensuring that each data record is unique, eliminating any redundant entries.
Outlier detection: Identifying and removing anomalies using statistical methods such as
Interquartile Range (IQR) or Z-score analysis.
Data type conversion: Ensuring that all categorical features (e.g., waste type, user location) and
numerical data (e.g., amount of waste) are correctly identified and processed.
Encoding categorical variables: Using one-hot encoding or label encoding for categorical
features such as waste category, user profile, etc.
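The cleaning steps above can be combined into a single preprocessing routine. The sketch below assumes hypothetical column names such as amount_kg and waste_category; the real schema would differ.

import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Impute missing numeric values with the median of each column.
    num_cols = df.select_dtypes("number").columns
    df[num_cols] = df[num_cols].fillna(df[num_cols].median())

    # Drop exact duplicate records.
    df = df.drop_duplicates()

    # Remove outliers in waste amount using the IQR rule.
    q1, q3 = df["amount_kg"].quantile([0.25, 0.75])
    iqr = q3 - q1
    df = df[df["amount_kg"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

    # One-hot encode categorical features such as the waste category.
    return pd.get_dummies(df, columns=["waste_category"])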
Feature engineering improves the model's accuracy by transforming raw data into valuable
inputs:
Creating new features: Examples include waste-to-recycling ratios, frequency of waste disposal
per user, or geographic distance to the nearest recycling center.
Binning: Converting continuous variables such as user engagement (number of interactions with
the system) into discrete bins (e.g., high, medium, low).
Scaling: Normalizing features like waste volume or recycling frequency to bring them into a
uniform range, especially if distance-based algorithms are used.
Dimensionality reduction: Using techniques like Principal Component Analysis (PCA) or feature
importance analysis from decision trees to eliminate less useful features and reduce model
complexity.
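A short illustration of these feature-engineering steps follows; the data frame and column names are assumptions for demonstration.

import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Small illustrative frame; column names are assumptions.
df = pd.DataFrame({
    "interactions": [2, 8, 25, 4, 30],
    "waste_volume": [1.2, 3.4, 0.5, 2.2, 5.0],
    "recycle_freq": [1, 4, 2, 3, 5],
})

# Binning: discretize engagement counts into low/medium/high.
df["engagement_level"] = pd.cut(
    df["interactions"], bins=[0, 5, 20, float("inf")],
    labels=["low", "medium", "high"])

# Scaling: normalize numeric features into a uniform [0, 1] range.
df[["waste_volume", "recycle_freq"]] = MinMaxScaler().fit_transform(
    df[["waste_volume", "recycle_freq"]])

# Dimensionality reduction: keep components covering 95% of the variance.
reduced = PCA(n_components=0.95).fit_transform(df[["waste_volume", "recycle_freq"]])
print(df, reduced.shape)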
For the waste classification and user prediction models, various machine learning
algorithms can be employed. In this project, we focus on Deep Learning Models (e.g.,
Convolutional Neural Networks for image-based waste classification) and Random Forest for the
recycling center location recommendation.
Data Splitting: Dividing the dataset into a training set (80%) and a testing set (20%) for model
evaluation.
Training: The selected deep learning model or Random Forest algorithm is trained using the
training data.
Testing: Predictions are made on the testing set to evaluate how well the model generalizes to
unseen data.
Evaluation Metrics:
Accuracy (for classification tasks)
Precision/Recall/F1 Score (for waste classification)
Mean Absolute Error (MAE) (for regression tasks such as distance to the nearest recycling center)
Root Mean Squared Error (RMSE) (for overall performance evaluation of the model)
To improve model performance and accuracy, the following strategies are applied:
Cross-validation: k-Fold Cross-Validation is employed to ensure the model's robustness and reduce
overfitting by testing it on different subsets of the dataset.
Feature Selection: Removing less important features that contribute little to model prediction,
thereby reducing complexity and improving training speed.
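The splitting, training, and cross-validation workflow described in this chapter can be sketched as follows. X and y are synthetic stand-ins for the prepared feature matrix and labels.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic stand-ins for the prepared feature matrix and labels.
rng = np.random.default_rng(42)
X = rng.random((200, 5))
y = rng.integers(0, 3, 200)

# 80/20 split, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)

# 5-fold cross-validation on the training portion to gauge robustness.
scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))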
Once the model achieves satisfactory performance, it will be deployed using the following
steps:
Backend Deployment: Deploying the trained model as an API using Flask or Django to interact
with the front-end application (a minimal sketch follows this list).
Frontend Integration: Integrating the model's outputs (waste classification and recycling
recommendations) into the user interface of the application. This will be a web-based application
that can also support mobile interfaces through Progressive Web Apps (PWA).
Real-Time Prediction: The system will process user inputs (images of waste or voice commands)
in real-time to classify waste and suggest nearby recycling centers.
Model Updates: Continuous monitoring of model performance, and periodic retraining as new
data (such as user feedback or updated recycling center information) becomes available.
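A minimal sketch of the backend deployment step using Flask follows; the model path, endpoint name, class list, and preprocessing are assumptions rather than the project's actual service code.

import io

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("waste_classifier.h5")  # assumed model path
CLASSES = ["paper", "plastic", "metal", "glass", "e-waste", "organic"]

@app.route("/classify", methods=["POST"])
def classify():
    # Expect a multipart upload with the waste photo under the "image" key.
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    x = np.array(img.resize((224, 224)), dtype="float32")[None] / 255.0
    probs = model.predict(x)[0]
    idx = int(probs.argmax())
    return jsonify({"category": CLASSES[idx], "confidence": float(probs[idx])})

if __name__ == "__main__":
    app.run(port=5000)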
To continuously improve the model and system performance, the following mechanisms
will be implemented:
User Feedback Loop: Collecting user feedback on the waste classification and recycling
recommendations to refine the model.
System Adaptation: The model will be fine-tuned periodically based on new user data, ensuring
that it remains accurate and up-to-date.
CHAPTER 6
RESULTS AND DISCUSSION
A. Waste Classification Output
1. Output: The system successfully classified waste into predefined categories such as
biodegradable, non-biodegradable, e-waste, and recyclable.
2. Example: An uploaded image of a banana peel was correctly identified as biodegradable
waste.
3. Accuracy: Between 85% and 92%, depending on lighting and background clarity.
4. Discussion: The CNN model (trained on a custom dataset) performed well in daylight and
controlled environments, with slight confusion between certain visually similar waste
items.
B. Eco-Tips and Educational Guidance
1. Output: The assistant provided eco-friendly disposal tips, waste reduction ideas, and
recycling techniques.
2. Example: Input: "metal can" → Output: "Rinse and recycle. Use yellow bin if available."
3. Additional Feature: For educational purposes, facts about environmental impact were
shown alongside the tips.
C. User Interface and Interaction
1. Output: User-friendly interface with real-time response to button clicks and inputs.
2. Discussion: The system maintained consistent frame transitions and feedback messages.
Image upload, dropdown selection, and input validation all worked as expected.
D. Environment-Focused Intelligence
• Suggested eco-friendly alternatives and disposal tips based on user input.
• Aimed at behavioral change by promoting sustainable practices.
E. Expandable Architecture
• Can be extended to include:
o QR-code bin mapping
o Waste pickup scheduler
o Voice-based queries using SpeechRecognition
o Mobile version using Kivy or PWA
Conclusion of Results:
The EnviroMate system performed reliably across multiple input types and user
interactions. Its strength lies in its intelligent waste categorization, interactive learning content, and
easy-to-use interface. Future improvements may include expanding the waste dataset, integrating
multilingual support, and deploying the system on cloud or mobile platforms for broader
accessibility.
CHAPTER 7
CHALLENGES AND LIMITATIONS
One of the primary challenges encountered in EnviroMate was managing the inconsistency
in user-submitted inputs, particularly for image-based classification.
• Blurry or Low-Resolution Images: Images captured with low-quality or shaky cameras
significantly impacted waste type prediction.
• Lighting and Background Issues: Dark lighting, shadows, or cluttered backgrounds made
waste item identification more difficult.
• Unclear Object Positioning: Waste items photographed at odd angles or partially visible
were more likely to be misclassified.
To mitigate these issues, basic preprocessing techniques such as resizing, sharpening, and contrast
enhancement were implemented. However, real-world performance still varies based on user
adherence to input guidelines.
While the model handled common waste items (like paper, food scraps, or plastic) with high
confidence, it faced limitations with:
• Visually Similar Objects: Confusion between recyclable plastics and non-recyclables
(e.g., plastic toys vs. bottles).
• E-Waste & Complex Items: Items like batteries, circuit boards, or mixed-material
packaging often caused misclassification.
• Unlabeled or Custom Waste: Rare or region-specific items not seen during training were
either rejected or wrongly tagged.
This limitation can be addressed in the future by expanding the training dataset to include more
real-world samples and rare waste categories.
Due to constraints in project scope and timeline, the training dataset was relatively small and
manually curated.
• Insufficient Class Balance: Some waste categories were underrepresented (e.g., metal
waste, e-waste).
• Risk of Overfitting: The model tended to memorize training images rather than generalize
to unseen examples.
• Lack of Diversity: The absence of images taken in varied lighting conditions, angles, or
environments reduced the model's robustness.
Future work should include the use of large-scale, open-source waste datasets and synthetic data
augmentation to boost generalization.
The use of Tkinter ensured quick development and simplicity, but it also introduced design
limitations:
• Lack of Responsive Layout: The UI doesn’t automatically scale well across different
screen sizes or resolutions.
• Limited Multimedia Support: Integration of features like drag-and-drop, voice input, or
camera feed was restricted.
• Basic Error Feedback: Input errors (like unsupported file formats) had to be handled with
simple message boxes.
Switching to a more advanced GUI framework (e.g., PyQt, Kivy, or web-based interfaces) would
allow for more dynamic, user-friendly features.
While EnviroMate works offline (an intentional design choice), this created constraints in
terms of model size and speed.
• Lightweight Model Requirement: Heavy models could not be deployed, leading to some
trade-offs in accuracy.
• Processing Delay: Image classification could take a few seconds on older machines due to
limited optimization.
• No Cloud Support: Without internet connectivity, users couldn’t benefit from scalable
cloud-based inference or data logging.
Future improvements may include an optional online mode for enhanced performance on
supported devices.
The assistant’s response system for text queries is currently based on rule-based logic and
simple keyword matching (a minimal sketch appears after the list below).
• Limited Language Understanding: Complex or grammatically incorrect queries may be
misinterpreted.
• No Semantic Search: The assistant lacks deeper NLP understanding or paraphrasing
capabilities.
• Restricted Tip Database: Suggestions and eco-tips are pre-written and do not dynamically
adapt to regional practices or updates.
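For reference, the current keyword-matching behaviour can be approximated by a simple dictionary lookup. The tip strings below are hypothetical examples in the spirit of the "metal can" output shown in Chapter 6.

# Hypothetical tip table; the real database is larger and pre-written.
TIPS = {
    "metal": "Rinse and recycle. Use the yellow bin if available.",
    "plastic": "Check the resin code; rinse bottles before recycling.",
    "battery": "Do not bin. Take to an e-waste collection point.",
}

def get_tip(query: str) -> str:
    """Return the first tip whose keyword appears in the user's query."""
    q = query.lower()
    for keyword, tip in TIPS.items():
        if keyword in q:
            return tip
    return "Sorry, no tip found. Try a simpler keyword such as 'plastic'."

print(get_tip("How do I throw away a metal can?"))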
Integrating NLP models like BERT or using a cloud-based FAQ knowledge base could overcome
these limitations.
As new types of waste emerge or classification rules change, keeping the system updated is an
ongoing challenge.
• Manual Dataset Updates: Adding new waste types or re-training models requires
developer intervention.
• Static Output Responses: Without an adaptive backend, all suggestions remain fixed and
need manual revisions.
• Version Control: Changes in model versions or tips database must be carefully logged to
avoid inconsistencies in advice.
Developing a modular, update-ready backend with versioning and a semi-automated training
pipeline would enhance scalability.
While the project aims to promote environmental awareness, influencing actual behavior
remains difficult.
• Limited Motivation Triggers: Users may ignore suggestions or not follow disposal
instructions correctly.
• No Feedback or Progress Tracking: Users receive tips but cannot see the environmental
impact of their actions.
• Language and Accessibility Gaps: The system currently supports only English and basic
text display, which may not suit all user groups.
Integrating gamification, multilingual content, and accessibility features (like text-to-speech or
image-to-audio) could increase impact.
Though no personal data is stored or transmitted, basic security precautions are still important.
• File Handling Risks: Uploaded images must be checked to prevent malicious formats or
code injection (see the sketch after this list).
• Log Access: If future versions include data storage, access control and encryption will be
essential.
• User Privacy: Even non-personal waste images can reveal household habits or location if
misused.
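A minimal sketch of the image-validation check mentioned above, using Pillow, might look like this; the allowed-format set is an assumption.

from PIL import Image

ALLOWED = {"JPEG", "PNG"}  # assumed whitelist of accepted upload formats

def is_safe_image(path: str) -> bool:
    """Reject files that are not genuine JPEG/PNG images."""
    try:
        with Image.open(path) as img:
            img.verify()  # raises if the file is corrupt or not an image
            return img.format in ALLOWED
    except Exception:
        return False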
CHAPTER 8
APPENDICES
8.1 SOURCE CODE
import cv2
import time
import sqlite3          # retained from the original listing; used outside this excerpt
import webbrowser
from io import BytesIO  # retained from the original listing
from tkinter import *
from tkinter import messagebox
from tkinter.ttk import Progressbar
from tkinter import ttk
from cvzone.ClassificationModule import Classifier
from PIL import Image, ImageTk

# Load the class labels exported with the trained model.
with open('Model/labels.txt') as fh:
    classes = [line.split()[-1] for line in fh.readlines()]

# Classifier wrapping the trained Keras model. The instantiation line was
# missing from the extracted listing; the path follows the cvzone convention.
classifier = Classifier('Model/keras_model.h5', 'Model/labels.txt')

win = Tk()
win.geometry("1300x700")
win.title("Start")
win.config(bg='#6BD275')

page1 = PhotoImage(file='Image/page1.png')
page2 = PhotoImage(file='Image/page2.png')
strt_img = PhotoImage(file='Image/start.png')
sett_img = PhotoImage(file='Image/settings.png')
scan_img = PhotoImage(file='Image/scan.png')  # asset assumed; missing from the excerpt


def image_tk(img):
    return PhotoImage(file=img)


def map_locate(link):
    # Open a recycling-centre map link in the default browser.
    webbrowser.open(link)


def back_to_menu(page, resources):
    # Release the camera and destroy scanner widgets before leaving the page.
    # (The full implementation was truncated in the extracted listing.)
    for res in resources:
        if hasattr(res, 'release'):
            res.release()
        else:
            res.destroy()
    page.destroy()


def start():
    # Leave the start page and build the main menu.
    win1.destroy()
    win.title("Menu")
    win2 = Frame(win, height=700, width=1300)
    win2.pack()
    img2 = Label(win2, image=page2)
    img2.pack()

    def Scan():
        # Replace the menu with the live scanner view.
        win2.destroy()
        win.title("Scanner")
        cap = cv2.VideoCapture(0)
        cap.set(3, 640)   # capture width
        cap.set(4, 480)   # capture height
        predictions = []  # frames sampled for classification

        fram = Frame(win)  # container for the camera preview
        fram.place(relx=0.05, rely=0.2)
        vid = Label(fram)
        vid.pack()

        def update():
            # Stream webcam frames into the preview label every 10 ms.
            ok, frame = cap.read()
            if ok:
                global frame_res
                frame_res = cv2.resize(frame, (422, 377))
                rgb_img = cv2.cvtColor(frame_res, cv2.COLOR_BGR2RGB)
                tk_img = ImageTk.PhotoImage(Image.fromarray(rgb_img))
                vid.config(image=tk_img)
                vid.image = tk_img
                vid.after(10, update)

        def Scanner():
            # Sample ten frames with a progress animation, classify each frame,
            # and accept a category only on a clear majority vote.
            prog_lab = Label(win, bg='#A1F2A5', text="",
                             font=("Times New Roman", 23), fg="#6BD275")
            prog_lab.place(relx=0.63, rely=0.558)
            prog_var = IntVar()
            prog = Progressbar(win, orient=HORIZONTAL, length=300,
                               mode='determinate', variable=prog_var)
            prog.place(relx=0.585, rely=0.678)
            for i in range(1, 11):
                update()
                predictions.append(frame_res)
                prog_lab.config(text="Analyzing" + '.' * i)
                for j in range(100):
                    prog_var.set(j)
                    win.update_idletasks()
                    time.sleep(0.008)
                prog_var.set(100)
            prog.destroy()
            cap.release()
            vid.destroy()

            def information(category):
                # Show category-specific disposal guidance on a new page.
                win3.destroy()
                fram.destroy()
                scan.destroy()
                prog_lab.destroy()
                win.title(category.capitalize())
                win4 = Frame(win, height=700, width=1300)
                win4.pack()

            if predictions:
                predicts, dict_pred = [], {}
                for img in predictions:
                    # getPrediction returns (scores, index of the top class).
                    _, index = classifier.getPrediction(img)
                    predicts.append(classes[index])
                for name in dict.fromkeys(predicts):
                    dict_pred[name] = predicts.count(name)
                best = max(dict_pred.values())
                if best > 5:  # majority of the ten sampled frames agree
                    for name, count in dict_pred.items():
                        if count == best:
                            prog_lab.config(text=name)
                            information(name)
                            break
                else:
                    messagebox.showerror("ERROR", "TRY AGAIN !")
            else:
                messagebox.showerror("ERROR", "Please restart and try !")

        pag = PhotoImage(file='Image/page3.png')
        win3 = Label(win, image=pag)
        win3.image = pag  # keep a reference so the image is not garbage-collected
        win3.pack()
        back = ttk.Button(win, text='Back',
                          command=lambda: back_to_menu(win3, [cap, fram]))
        back.place(x=0, y=0)
        scan = Button(win, command=Scanner, image=scan_img, bd=0, relief=FLAT,
                      bg="#A1F2A5", fg="#A1F2A5")
        scan.place(relx=0.58, rely=0.409)
        update()

    # Menu wiring assumed; the button definitions were missing from the excerpt.
    Button(win2, text="Scan Waste", command=Scan, bd=0).place(relx=0.45, rely=0.6)


def footprint():
    pass  # carbon-footprint view: placeholder in this version


def yourBin():
    pass  # personal bin history: placeholder in this version


def settings():
    pass  # settings page: placeholder in this version


# Start page (construction truncated in the extracted listing; a minimal
# version is assumed here so the script runs end-to-end).
win1 = Frame(win, height=700, width=1300)
win1.pack()
Label(win1, image=page1).pack()
Button(win1, image=strt_img, command=start, bd=0).place(relx=0.45, rely=0.75)

win.mainloop()
8.2 SCREENSHOTS
Figure 8.1: Home page interface (frontend UI)
Figure 8.2: Object scanning interface
Figure 8.3: Classification of the detected objects
CHAPTER 9
CONCLUSION AND FUTURE ENHANCEMENTS
9.1 CONCLUSION
In this project, we developed EnviroMate, a smart waste management assistant that
combines image classification, text-based interaction, and an intuitive Tkinter-based GUI to
educate users on proper waste disposal. The system uses a lightweight machine learning model to
classify waste items into biodegradable, recyclable, or hazardous categories and offers eco-friendly
suggestions to encourage responsible behavior.
The core objective was to create a user-friendly, offline-capable tool that helps bridge the gap
between household waste disposal and environmental awareness. The assistant allows users to
either upload an image of waste or enter a text query to receive disposal advice and sustainability
tips. This dual-input approach improves accessibility and serves both visual and verbal
communication preferences.
9.2 FUTURE ENHANCEMENTS
Multilingual Support
Integrate support for multiple languages (e.g., Tamil, Hindi) to ensure accessibility for non-English
speakers and increase adoption in diverse regions.
Real-Time Camera Integration
Allow users to use a device camera to capture and classify waste items instantly, eliminating the
need to upload saved images.
Final Note:
While the current version of EnviroMate offers a promising step toward AI-assisted waste
classification and education, these enhancements will further transform it into a scalable, inclusive,
and intelligent platform for sustainable living.
CHAPTER 10
REFERENCES
1. Rad, M. R., von Kaenel, P., Minter, J., Mazzolini, A., Zhang, Y., Soleimani, E., ...
& Mojsilovic, A. (2017). A computer vision system to localize and classify wastes
on the streets. In 2017 IEEE Winter Conference on Applications of Computer
Vision (WACV) (pp. 1037-1045). IEEE.
2. Mittal, A., Bhardwaj, A., & Raj, R. (2016). IoT based smart waste management
system: A research paper. International Journal of Computer Applications,
162(3), 6-9.
3. Yang, Y., Nguyen, T., San, P. P., Jin, Y., & See, S. (2016). Deep learning for
practical image recognition: Case study on waste classification. In 2016 IEEE
International Conference on Big Data and Smart Computing (BigComp) (pp. 313-
316). IEEE.
4. Saha, M., & Bhattacharya, J. (2021). Solid waste management using digital twin
technology. Environmental Science and Pollution Research, 28(16), 20428-20440.