Fulldoc - Avc MSC - Trash Classification

Uploaded by siva vimal

ABSTRACT

Trash classification is an important step in the waste management process, as it helps
identify the types of waste and how they should be handled. Traditional waste classification
methods are typically manual and time-consuming, which can result in errors and
inconsistencies. With the increasing amount of waste being generated globally, there is a
need for more efficient and accurate methods for waste classification. Machine learning
techniques, such as deep learning algorithms, have shown promising results in automating
waste classification. Among these algorithms, the VGG architecture has been widely used for
image classification tasks and has achieved state-of-the-art performance on several
benchmarks. The VGG architecture consists of several convolutional layers and pooling
layers, followed by several fully connected layers, and has the ability to learn complex image
features. In this project, we propose a method for smart waste classification using the VGG
(Visual Geometry Group) algorithm. The proposed method involves training a deep
Convolutional Neural Network (CNN) based on the VGG architecture to classify waste
images into different categories, such as paper, plastic, glass, metal, and organic. The CNN
model is trained on a large dataset of waste images, which is pre-processed and augmented to
improve the model's accuracy. The proposed method is evaluated on a test dataset and
compared with other state-of-the-art methods, demonstrating its effectiveness in smart
waste classification. The results indicate that the proposed method can accurately classify
waste images, which can help improve waste management practices and reduce
environmental pollution.

CHAPTER 1
INTRODUCTION

1.1 BACKGROUND

EXISTING SYSTEM

The traditional method of waste classification involves manual sorting and visual
inspection by human workers. This method is often prone to errors, inconsistencies, and is
time-consuming, labor-intensive, and not scalable. Therefore, there is a need for more
efficient and accurate methods for waste classification. Machine learning and computer
vision-based approaches have been proposed as a solution to these challenges. The existing
system implements a support vector machine algorithm to classify waste images. Support
Vector Machines (SVMs) are a widely used machine learning algorithm that can be used for
waste classification tasks. SVMs work by finding the hyperplane that best separates the data
points in a high-dimensional space. In waste classification, the SVM algorithm can be trained
to classify different types of waste based on their composition, texture, and other features. To
use SVMs for waste classification, waste images are first pre-processed to extract relevant
features such as color, texture, and shape. These features are then fed into the SVM
algorithm, which learns to classify the waste images into different categories based on the
extracted features. SVMs can achieve high accuracy in waste classification tasks and can
handle both binary and multi-class classification problems.
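The SVM pipeline described above can be sketched in a few lines of scikit-learn. This is an illustrative example only: the three feature values stand in for extracted colour, texture and shape features, and the data is synthetic rather than real waste imagery.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for extracted features (e.g. colour, texture, shape)
rng = np.random.default_rng(0)
X_paper = rng.normal(loc=0.0, scale=0.5, size=(50, 3))
X_plastic = rng.normal(loc=3.0, scale=0.5, size=(50, 3))
X = np.vstack([X_paper, X_plastic])
y = np.array(["paper"] * 50 + ["plastic"] * 50)

# Scale the features, then fit an SVM with an RBF kernel;
# SVC also handles multi-class problems via one-vs-one voting
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X, y)

# Classify a new feature vector
print(model.predict([[2.9, 3.1, 3.0]])[0])  # plastic
```

The scaler matters here: SVMs are sensitive to feature magnitudes, so extracted features are normalized before the hyperplane is fitted.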

DISADVANTAGES OF EXISTING SYSTEM

The disadvantages of the existing system are:

 They are prone to errors and inconsistencies due to human subjectivity and
judgment.
 They are time-consuming and labour-intensive, which can lead to high costs
and low efficiency.
 They are not scalable, which means they cannot handle large volumes of waste
materials.
 They are not adaptable to changes in waste composition and material, which
can result in inaccurate classification.

PROPOSED SYSTEM

The proposed system for smart waste classification using VGG16 CNN involves
training a deep learning model using the VGG16 architecture to classify different types of
waste based on images. The VGG16 architecture is a popular CNN architecture that has been
shown to achieve high accuracy in image classification tasks. The system involves several
steps, including data collection, pre-processing, model training, and evaluation. The data
collection process involves collecting a large dataset of waste images, including images of
different types of waste such as paper, plastic, glass, and metal. The dataset is then pre-
processed to resize the images and normalize the pixel values. The pre-processed dataset is
then split into training and testing sets, with a portion of the dataset used for training the
VGG16 CNN model. During the training process, the VGG16 model learns to identify
patterns and features in the waste images that are specific to different types of waste. The
trained model is then evaluated on the testing set to determine its accuracy and performance.
Once the model is trained and evaluated, it can be used for smart waste classification in real-
world scenarios. This can be done by taking an image of a piece of waste and passing it
through the trained model to determine the type of waste. The system can be deployed in
waste management facilities or in public spaces such as parks or streets to automatically sort
waste into different categories, making waste management more efficient and
environmentally friendly.
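The pipeline above can be sketched with Keras as follows. This is a minimal illustration, not the exact project code: the four category names are assumptions, and `weights=None` is used here only so the sketch runs without downloading pre-trained weights; in practice `weights="imagenet"` would be used for transfer learning.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4  # assumed categories: paper, plastic, glass, metal

# VGG16 convolutional base; weights=None avoids a download in this sketch
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the feature extractor during training

# Small classification head on top of the VGG16 features
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# One normalized dummy image passes through the network end to end
dummy = np.random.rand(1, 224, 224, 3).astype("float32")
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # one probability per waste category: (1, 4)
```

Training would then call `model.fit` on the pre-processed, augmented image batches and evaluate on the held-out test split.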

ADVANTAGES OF PROPOSED SYSTEM

The major advantages are improved accuracy, automation, and scalability:

 Improved accuracy: The use of deep learning and the VGG16 architecture can
improve the accuracy of waste classification compared to traditional methods.
This can result in better waste management practices and reduced
environmental impact.
 Automation: The system can be automated, allowing for efficient waste
sorting and classification in waste management facilities or public spaces. This
can save time and resources compared to manual sorting methods.
 Scalability: The system can be scaled up to handle large volumes of waste,
making it suitable for use in industrial waste management.

1.2 OBJECTIVES

The project aims to develop an efficient trash classification system using
Convolutional Neural Networks (CNNs) to automatically categorize different types of waste
items. This technology-driven solution seeks to improve recycling efforts and reduce
environmental impact by accurately identifying and sorting items like plastic, paper, metal,
and glass. The project involves collecting a diverse dataset of waste images, preprocessing
them for uniformity, and labelling them accordingly. The selected CNN architecture will be
trained and fine-tuned for optimal performance in accuracy, speed, and resource efficiency.
The trained model will be evaluated using a validation dataset, and its decisions will be
interpreted for better understanding. The system will be deployed with a user-friendly
interface, integrated with existing waste management processes, and continuously optimized
for better performance and scalability. Overall, the project aims to make a significant impact
on waste management practices by enhancing recycling capabilities and reducing
environmental pollution.

1.3 PURPOSE, SCOPE AND APPLICABILITY

The purpose of the project on trash classification using CNNs is to address the
pressing issue of waste management by developing a technology-driven solution that can
automatically classify different types of trash. By accurately categorizing items such as
plastic, paper, metal, and glass, the project aims to improve recycling efforts and reduce
environmental impact.

The scope of the project includes data collection and preprocessing, model selection
and training, evaluation and validation, model interpretability, deployment and integration,
performance optimization, and impact assessment. It involves gathering a diverse dataset of
waste images, training a CNN model for classification, evaluating its performance,
interpreting its decisions, deploying it with a user-friendly interface, and continuously
refining it for better accuracy and efficiency.

The applicability of the project lies in its delivery as a deployable system that can be
integrated with existing waste management processes. It will be scalable and adaptable to
different environments and scenarios, allowing for easy integration with new waste
management systems or expansion to other types of waste classification tasks. The project's
impact will be assessed based on its effectiveness in increasing recycling rates, reducing
waste contamination, and improving resource utilization efficiency, ultimately contributing to
a more sustainable approach to waste management.

1.4 ACHIEVEMENTS

Through its innovative approach and tangible results, the project has made a
significant contribution to promoting a more sustainable approach to waste management.

1.5 ORGANIZATION OF REPORT

This dissertation mainly provides the newly generated idea, the concepts that have
been applied and finally the output has been shaped. The dissertation contains six chapters.

 Chapter 1 explains the introduction of the project, including the existing
system, the proposed system, and the objectives of the project.
 Chapter 2 consists of a survey of technologies, including the front-end and
back-end software used for project implementation.
 Chapter 3 presents the project implementation work with various hardware
and software requirements and data flow diagrams.
 Chapter 4 presents the implementation of modules and description of each
module with database designs.
 Chapter 5 presents the coding and testing process for each module.
 Chapter 6 describes the result and description of the project with output
screens.
 Chapter 7 describes the conclusion of the proposed work, along with the
limitations of the project implementation and the future scope of the
system.

CHAPTER 2
SURVEY OF TECHNOLOGIES
2.1 FRONT END: PYTHON

Python is a general-purpose interpreted, interactive, object-oriented, high-level
programming language. It was created by Guido van Rossum between 1985 and 1990. Like
Perl, Python source code is available under the GNU General Public License (GPL). Python
is designed to be highly readable: it uses English keywords frequently where other languages
use punctuation, and it has fewer syntactical constructions than many other languages.

Python is currently one of the most widely used multi-purpose, high-level programming
languages. It allows programming in both object-oriented and procedural paradigms. Python
programs are generally shorter than equivalent programs in languages such as Java;
programmers have to type relatively little, and the language's indentation requirement keeps
code readable. Python is used by almost all major technology companies, including Google,
Amazon, Facebook, Instagram, Dropbox, and Uber. One of Python's greatest strengths is its
huge collection of standard and third-party libraries, which can be used for the following:
 Machine Learning
 GUI Applications (like Kivy, Tkinter, PyQt etc.)
 Web frameworks like Django (used by YouTube, Instagram, Dropbox)
 Image processing (like OpenCV, Pillow)
 Web scraping (like Scrapy, BeautifulSoup, Selenium)
 Test frameworks
 Multimedia
 Scientific computing
 Text processing and many more.
TensorFlow
TensorFlow is an end-to-end open-source platform for machine learning. It has a
comprehensive, flexible ecosystem of tools, libraries, and community resources that lets

researchers push the state-of-the-art in ML, and gives developers the ability to easily build
and deploy ML-powered applications.
TensorFlow provides a collection of workflows with intuitive, high-level APIs for both
beginners and experts to create machine learning models in numerous languages. Developers
have the option to deploy models on a number of platforms such as on servers, in the cloud,
on mobile and edge devices, in browsers, and on many other JavaScript platforms. This
enables developers to go from model building and training to deployment much more easily.
Keras
Keras is a deep learning API written in Python, running on top of the machine
learning platform TensorFlow. It was developed with a focus on enabling fast
experimentation.

 Allows the same code to run on CPU or on GPU, seamlessly.


 User-friendly API which makes it easy to quickly prototype deep learning
models.
 Built-in support for convolutional networks (for computer vision), recurrent
networks (for sequence processing), and any combination of both.
 Supports arbitrary network architectures: multi-input or multi-output models,
layer sharing, model sharing, etc. This means that Keras is appropriate for
building essentially any deep learning model, from a memory network to a
neural Turing machine.
Pandas
pandas is a fast, powerful, flexible and easy to use open source data analysis and
manipulation tool, built on top of the Python programming language. Pandas is a Python
package that provides fast, flexible, and expressive data structures designed to make working
with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-
level building block for doing practical, real world data analysis in Python. Pandas is mainly
used for data analysis and associated manipulation of tabular data in Data frames. Pandas
allows importing data from various file formats such as comma-separated values, JSON,
Parquet, SQL database tables or queries, and Microsoft Excel. Pandas allows various data
manipulation operations such as merging, reshaping, selecting, as well as data cleaning, and
data wrangling features. The development of pandas introduced into Python many features
for working with data frames comparable to those established in the R programming
language. The pandas library is built upon another library, NumPy, which is oriented toward
efficient operations on arrays rather than data frames.
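As a brief illustration of the tabular-data handling described above (the filenames and labels below are hypothetical, not part of the project's dataset):

```python
import pandas as pd

# Hypothetical manifest of waste images and their labels
df = pd.DataFrame({
    "filename": ["img_001.jpg", "img_002.jpg", "img_003.jpg", "img_004.jpg"],
    "category": ["paper", "plastic", "paper", "glass"],
})

# Count images per waste category
counts = df["category"].value_counts()
print(counts["paper"])  # 2

# Select only the rows labelled "plastic"
plastic = df[df["category"] == "plastic"]
print(len(plastic))  # 1
```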
NumPy
NumPy, which stands for Numerical Python, is a library consisting of
multidimensional array objects and a collection of routines for processing those arrays. Using
NumPy, mathematical and logical operations on arrays can be performed. NumPy is a
general-purpose array-processing package. It provides a high-performance multidimensional
array object, and tools for working with these arrays.
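A short example of the element-wise mathematical and logical array operations mentioned above, using a tiny made-up "image":

```python
import numpy as np

# A 2x2 "image" with pixel values in the 0..255 range
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Mathematical operation applied element-wise: scale to [0, 1]
normalized = img.astype(np.float32) / 255.0
print(normalized.max())   # 1.0

# Logical operation: boolean mask of bright pixels
bright = img > 100
print(int(bright.sum()))  # 2
```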

Matplotlib
Matplotlib is a comprehensive library for creating static, animated, and interactive
visualizations in Python. Matplotlib makes easy things easy and hard things possible.
Matplotlib is a plotting library for the Python programming language and its numerical
mathematics extension NumPy. It provides an object-oriented API for embedding plots into
applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK.
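A minimal sketch of the object-oriented API described above. The per-epoch accuracy values are invented for illustration, and the non-interactive Agg backend is selected so the script runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # render to a file, no GUI toolkit required
import matplotlib.pyplot as plt

# Hypothetical per-epoch accuracy values for a training run
epochs = [1, 2, 3, 4, 5]
accuracy = [0.60, 0.72, 0.81, 0.86, 0.89]

fig, ax = plt.subplots()
ax.plot(epochs, accuracy, marker="o")
ax.set_xlabel("Epoch")
ax.set_ylabel("Accuracy")
ax.set_title("Training accuracy per epoch")
fig.savefig("accuracy.png")  # write the plot to an image file
```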
Scikit Learn
Scikit-learn is a Python module for machine learning built on top of SciPy and is
distributed under the 3-Clause BSD license. Scikit-learn is a free software machine learning
library for the Python programming language. It features various classification,
regression and clustering algorithms including support-vector machines, random forests,
gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python
numerical and scientific libraries NumPy and SciPy.
Pillow
Pillow is the friendly PIL fork by Alex Clark and Contributors. PIL is the Python
Imaging Library by Fredrik Lundh and Contributors. The Pillow library provides an Image
class that is used to load and display images. The Image module within the Pillow package
includes built-in functions for loading existing images, creating new images, and more.
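A small illustrative sketch of the Image class in use. The image is created in memory rather than loaded from a file, and the 224x224 target size is simply the fixed input size a CNN such as VGG16 expects:

```python
from PIL import Image

# Create a small RGB image in memory (no file needed for this sketch)
img = Image.new("RGB", (640, 480), color=(200, 30, 30))

# Resize to the fixed input size the CNN expects
resized = img.resize((224, 224))
print(resized.size)   # (224, 224)

# Convert to grayscale if colour information is not required
gray = resized.convert("L")
print(gray.mode)      # "L"
```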
OpenCV
OpenCV is an open-source library for computer vision. It gives machines the
ability to recognize faces and objects. In OpenCV, "CV" is an abbreviation of computer
vision, which is defined as a field of study that helps computers understand the content of
digital images such as photographs and videos.

2.2 BACK END: MySQL
MySQL is a relational database management system based on the Structured Query
Language (SQL), the most popular language for accessing and managing records in a
database. MySQL is open-source, free software under the GNU license, and is supported by
Oracle. Typical queries include inserting records, updating records, deleting records,
selecting records, creating tables, and dropping tables.

MySQL is currently among the most popular database management systems used for
managing relational databases. It is a fast, scalable, and easy-to-use database management
system in comparison with Microsoft SQL Server and Oracle Database. It is commonly used
in conjunction with PHP scripts for creating powerful and dynamic server-side or web-based
enterprise applications. MySQL was originally developed, marketed, and supported by
MySQL AB, a Swedish company, and is written in the C and C++ programming languages.
The official pronunciation of MySQL is "My Ess Que Ell", not "My Sequel". Many small
and large companies use MySQL, and it supports many operating systems, such as Windows,
Linux, and macOS, with languages including C, C++, and Java.
Interfaces

MySQL is primarily an RDBMS and ships with no GUI tools to administer MySQL
databases or manage data contained within the databases. Users may use the included
command line tools, or use MySQL "front-ends", desktop software and web applications that
create and manage MySQL databases, build database structures, back up data, inspect status,
and work with data records. The official set of MySQL front-end tools, MySQL Workbench
is actively developed by Oracle, and is freely available for use.

RDBMS Terminology

To explain the MySQL database system, let us review a few definitions related to databases:

 Database − A database is a collection of tables with related data.


 Table − A table is a matrix with data. A table in a database looks like a
simple spreadsheet.

 Column − One column (data element) contains data of one and the same
kind, for example the column postcode.
 Row − A row (= tuple, entry or record) is a group of related data, for example
the data of one subscription.
 Redundancy − Storing data twice, redundantly to make the system faster.
 Primary Key − A primary key is unique. A key value cannot occur twice in
one table. With a key, you can only find one row.
 Foreign Key − A foreign key is the linking pin between two tables.
 Compound Key − A compound key (composite key) is a key that consists of
multiple columns, because one column is not sufficiently unique.
 Index − An index in a database resembles an index at the back of a book.
 Referential Integrity − Referential Integrity makes sure that a foreign key
value always points to an existing row.

MySQL Database

MySQL is a fast, easy-to-use RDBMS being used for many small and big businesses.
MySQL is developed, marketed and supported by MySQL AB, which is a Swedish
company. MySQL is becoming so popular because of many good reasons −

 MySQL is released under an open-source license, so you have nothing to pay
to use it.
 MySQL is a very powerful program in its own right. It handles a large subset
of the functionality of the most expensive and powerful database packages.
 MySQL uses a standard form of the well-known SQL data language.
 MySQL works on many operating systems and with many languages
including PHP, PERL, C, C++, JAVA, etc.
 MySQL works very quickly and works well even with large data sets.
 MySQL is very friendly to PHP, the most appreciated language for web
development.
 MySQL supports large databases, up to 50 million rows or more in a table.
The default file size limit for a table is 4GB, but you can increase this (if your
operating system can handle it) to a theoretical limit of 8 million terabytes
(TB).

 MySQL is customizable. The open-source GPL license allows programmers
to modify the MySQL software to fit their own specific environments.

HTML

HTML is a markup language for describing web documents (web pages).

 Hyper is the opposite of linear. Early computer programs had to move in a linear
fashion: this before this, this before this, and so on. HTML does not hold to that
pattern; it allows the person viewing a World Wide Web page to go anywhere,
at any time.
 Text is what you write: real, plain-English letters.
 Markup is what you do with the text: you write in plain English and then mark
up what you wrote.
 Language means that HTML has rules and a grammar of its own, even though
the underlying text is plain English.

HTML stands for HyperText Markup Language. HTML is a simple text-formatting
language used to create hypertext documents. It is a platform-independent language, unlike
most programming languages, and can be used on many platforms and desktops. It is this
feature of HTML that makes it popular as a standard on the WWW.

This versatile language permits the creation of hypertext links, also known as
hyperlinks. These hyperlinks can be used to connect documents on different machines, on the
same network or on a different network, or can even point to a section of text within the same
document.

HTML is used for creating documents where the emphasis is on the appearance of the
document; it is also used for desktop publishing (DTP). Documents created using HTML can
have text with different sizes, weights and colours, and can also contain graphics to make the
document more effective.

2.3 WAMPSERVER
WampServer is a Windows web development environment. It allows you
to create web applications with Apache2, PHP and a MySQL database. Alongside,
phpMyAdmin allows you to easily manage your databases.

WampServer is a reliable web development software program that lets you create web
applications with a MySQL database, PHP and Apache2. With an intuitive interface and
numerous features, it is a preferred choice of developers around the world. The software is
free to use and doesn't require a payment or subscription.

CHAPTER 3
REQUIREMENTS AND ANALYSIS
3.1 PROBLEM DEFINITION

Outdoor trash detection is a pioneering application that leverages the capabilities of
artificial intelligence (AI) and sensor technologies to tackle the pressing challenges
associated with waste management in urban and public spaces. As urbanization continues to
rise and populations grow, effective monitoring and management of outdoor litter have
become paramount for maintaining clean and sustainable environments. This innovative
approach involves deploying smart sensor systems and high-resolution cameras strategically
in outdoor locations. These systems work collaboratively with AI algorithms to identify and
classify various types of waste in real time, ranging from common litter to larger items. The
AI algorithms analyse the captured data, enabling automatic detection and providing valuable
insights into the types and quantities of waste present. The system's real-time monitoring
allows for prompt responses to changing conditions, alerting authorities immediately when a
buildup of waste is detected. The benefits of outdoor trash detection include more efficient
waste management, environmental conservation by preventing pollution, data-driven
decision-making for public awareness campaigns, waste collection schedules, and cost
savings through optimized resource allocation. In summary, outdoor trash detection
represents a technological advancement that has the potential to revolutionize how we
manage and mitigate outdoor litter, contributing to cleaner and more sustainable urban
environments. In an era marked by technological advancements and a growing concern for
environmental sustainability, the need for innovative solutions to address waste management
has never been more pressing. One such solution that holds great promise is outdoor trash
detection powered by artificial intelligence (AI). The proliferation of urban areas has given
rise to increased volumes of outdoor waste, posing challenges for efficient disposal and
environmental preservation. In this context, the integration of AI into outdoor trash detection
systems emerges as a transformative approach. The conventional methods of waste
monitoring often fall short in effectively managing the expanding scale of outdoor litter.
Traditional surveillance systems struggle to provide real-time insights, leading to delayed
responses and inadequate waste management. AI, with its ability to process vast amounts of
data swiftly and accurately, offers a paradigm shift in how we identify, monitor, and address
outdoor trash-related challenges.

3.2 REQUIREMENTS SPECIFICATION

This project has been implemented using Python with MySQL Server.

3.3 SOFTWARE AND HARDWARE REQUIREMENTS

HARDWARE REQUIREMENTS

Processor : Intel Core i5
RAM : 4 GB
Hard disk : 500 GB
Keyboard : Standard keyboard
Monitor : 19-inch LED monitor

SOFTWARE REQUIREMENTS

Operating System : Windows 11
Front End : Python
Back End : MySQL Server
Application : Web Application
IDE : PyCharm

3.4 CONCEPTUAL MODELS

A data flow diagram is a two-dimensional diagram that explains how data is
processed and transferred in a system. The graphical depiction identifies each source of data
and how it interacts with other data sources to reach a common output. Individuals seeking to
draft a data flow diagram must identify external inputs and outputs, determine how the inputs
and outputs relate to each other, and explain with graphics how these connections relate and
what they result in. This type of diagram helps business development and design teams
visualize how data is processed and identify or improve certain aspects.

Data Flow Symbols:

 Entity − a source of data or a destination for data.
 Process − a process or task that is performed by the system.
 Data store − a place where data is held between processes.
 Data flow − the movement of data between entities, processes and data stores.

LEVEL 0

The Level 0 DFD shows how the system is divided into 'sub-systems' (processes),
each of which deals with one or more of the data flows to or from an external agent, and
which together provide all of the functionality of the system as a whole. It also identifies
internal data stores that must be present in order for the system to do its job, and shows the
flow of data between the various parts of the system.

LEVEL 1

The next stage is to create the Level 1 Data Flow Diagram. This highlights the main
functions carried out by the system. As a rule, a system is described using between two and
seven functions: two for a simple system and seven for a complicated one. This keeps the
model manageable on screen or paper.

LEVEL 2

A Data Flow Diagram (DFD) tracks processes and their data paths within the business
or system boundary under investigation. A DFD defines each domain boundary and
illustrates the logical movement and transformation of data within the defined boundary. The
diagram shows 'what' input data enters the domain, 'what' logical processes the domain
applies to that data, and 'what' output data leaves the domain. Essentially, a DFD is one of
the oldest tools for process modelling.

CHAPTER 4
SYSTEM DESIGN
4.1 BASIC MODULES

 Dataset Collection
 Preprocessing
 Features Extraction
 Model Training
 Waste Classification

MODULE DESCRIPTION

Dataset Collection
Dataset acquisition refers to the process of obtaining data for use in various
applications, such as machine learning, data analysis, and research. In this module, we input
the waste datasets collected from KAGGLE web sources, which contain details of various
kinds of waste in image format. There are several publicly available datasets for waste
classification, such as the UCI waste classification dataset, the TrashNet dataset, and the
DUST dataset. These datasets can be accessed online and used for training and testing the
model. It is important to ensure that the datasets used for training and testing are diverse and
representative of the waste that the system will classify in real-world scenarios. This helps
ensure that the model is able to accurately classify waste under a range of different
conditions.

Preprocessing

In smart waste classification using VGG16 CNN, image preprocessing is a crucial
step in preparing the data for training and testing the model. The first step in image
preprocessing is to load the images from the dataset using a Python library like OpenCV or
PIL. Once the images are loaded, they may need to be resized to a specific dimension before
being used in the model. This is important to ensure that all the images have the same size,
which is required for the VGG16 CNN architecture. Additionally, it's common to apply
image augmentation techniques like rotation, flipping, and zooming to increase the diversity
of the dataset and improve the robustness of the model. Other common preprocessing
techniques include normalizing the pixel values to a range between 0 and 1, and converting
the images to grayscale if colour information is not required. These preprocessing steps help
improve the quality and consistency of the data used for classification.
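The preprocessing steps above can be sketched with NumPy alone. The random array stands in for one loaded waste image, and the horizontal flip is just one of the augmentation techniques mentioned:

```python
import numpy as np

# Stand-in for one loaded 224x224 RGB waste image (values 0..255)
rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Normalize pixel values to the [0, 1] range expected during training
normalized = image.astype(np.float32) / 255.0

# Simple augmentation: a horizontal flip doubles the effective dataset
flipped = np.flip(normalized, axis=1)

# Grayscale conversion (standard luminance weights) if colour is unneeded
gray = normalized @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
print(gray.shape)  # (224, 224)
```

In the actual pipeline, loading and resizing would be handled by OpenCV or PIL before these array operations are applied.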

Features Extraction

In smart waste classification using VGG16 CNN, feature extraction is the process of
extracting meaningful features from the pre-processed images. This is achieved by using the
convolutional layers of the VGG16 CNN model, which are designed to identify patterns and
features within the images. Feature extraction is a critical step in training the VGG16 CNN
model for smart waste classification. By using the pre-trained convolutional layers of the
VGG16 CNN model, we can leverage the power of transfer learning to extract meaningful
features from the images without having to train the model from scratch. This helps to
improve the accuracy of the model and reduce the amount of time and resources required to
train it.
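To illustrate what the convolutional layers compute, the following toy sketch implements a single convolution, ReLU activation and 2x2 max-pooling step in NumPy. It is a conceptual illustration of the operations inside a VGG block, not the VGG16 implementation itself; the 6x6 image and edge-detecting kernel are invented for the example:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D 'valid' convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(x):
    """2x2 max pooling with stride 2, as used between VGG blocks."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Toy 6x6 "image": dark left half, bright right half
image = np.tile(np.array([0, 0, 0, 1, 1, 1], dtype=np.float32), (6, 1))
# Kernel that responds to dark-to-bright vertical edges
kernel = np.array([[-1, 1], [-1, 1]], dtype=np.float32)

features = np.maximum(conv2d_valid(image, kernel), 0)  # ReLU activation
pooled = max_pool2(features)
print(pooled.shape)  # (2, 2)
```

The feature map responds only where the vertical edge lies, which is exactly the kind of localized pattern the pre-trained VGG16 filters detect at much larger scale.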

Model Training

Once the preprocessed images have been passed through the VGG16 CNN model for
feature extraction, the next step is to train the model to accurately classify the images into
their respective waste categories. This is done using a technique called supervised learning,
where the model is trained on a labeled dataset consisting of images and their corresponding
waste categories. The training process is typically repeated for several epochs until the model
has converged and achieved the desired level of accuracy. Once the model has been trained, it
can be used to classify new images into their respective waste categories.
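The supervised training described above can be illustrated with a small NumPy sketch: a softmax classifier trained by gradient descent on synthetic stand-ins for the extracted feature vectors. The data, learning rate and epoch count are illustrative assumptions, not the project's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted feature vectors, two waste categories
X0 = rng.normal(-1.0, 0.3, size=(40, 4))
X1 = rng.normal(+1.0, 0.3, size=(40, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)
Y = np.eye(2)[y]                       # one-hot labels

W = np.zeros((4, 2))
b = np.zeros(2)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Repeat for several epochs until the loss converges
for epoch in range(200):
    probs = softmax(X @ W + b)
    grad = probs - Y                   # gradient of cross-entropy loss
    W -= 0.1 * (X.T @ grad) / len(X)
    b -= 0.1 * grad.mean(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(accuracy)  # should reach 1.0 on this separable toy data
```

In the real system, the VGG16 convolutional base replaces the synthetic features and Keras performs the equivalent gradient updates over the labelled waste images.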

Waste Classification

Waste classification is the process of categorizing waste into different types based on
their characteristics, composition, and potential risks to the environment and public health.
Proper waste classification is important for effective waste management, as it enables the
identification of appropriate disposal methods and the implementation of measures to
minimize the environmental impact of waste. In this module, the waste is classified using the
CNN framework.
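At inference time, classification reduces to taking the arg-max of the network's per-category probabilities, as the sketch below shows (the probability vector is illustrative):

```python
import numpy as np

# The six categories used by the project, in training order.
class_names = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]

# Illustrative softmax output for one image; the largest value wins.
probs = np.array([[0.02, 0.05, 0.80, 0.08, 0.03, 0.02]])
idx = int(np.argmax(probs, axis=1)[0])
print(class_names[idx])  # metal
```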

4.2 DATA DESIGN

Database design is the process of creating a structured representation of the data to be
stored in a database. It involves defining the entities, attributes, relationships, and constraints
of the data, and organizing them into a logical model that can be easily implemented in a
database management system (DBMS). A well-designed database is essential for ensuring the
efficient and effective storage, retrieval, and management of data. It provides a foundation for
data integrity, security, and scalability, and can support a wide range of applications and
business processes.

The process of database design typically involves several steps, including
requirements gathering, conceptual modeling, logical modeling, and physical modeling.
During each of these steps, the database designer works with stakeholders to identify the data
requirements, create a conceptual model of the data, translate the conceptual model into a
logical model, and then implement the logical model in a specific DBMS. The goal of
database design is to create a data model that accurately reflects the real-world entities and
relationships of the data, while also providing efficient and effective storage, retrieval, and
management. It requires a balance between the needs of the application or system, the
requirements of the stakeholders, and the capabilities of the DBMS. Overall, effective
database design is critical to the success of any data-driven application or system, and
requires careful planning, analysis, and implementation. It is a complex and iterative process
that requires collaboration between stakeholders, designers, and developers, and must be
continuously evaluated and updated to ensure the ongoing integrity and effectiveness of the
data.

4.3 USER INTERFACE DESIGN

The user interface is the front-end application view with which the user interacts in order
to use the software. The user can manipulate and control the software, as well as the
hardware, by means of the user interface. Today, user interfaces are found almost everywhere
digital technology exists: computers, mobile phones, cars, music players, airplanes,
ships, etc.

The user interface is the part of the software that is designed in such a way that it is
expected to give the user insight into the software. The UI provides the fundamental platform
for human-computer interaction.

The software becomes more popular if its user interface is:

 Attractive
 Simple to use
 Responsive in short time
 Clear to understand
 Consistent on all interfacing screens

UI is broadly divided into two categories:

 Command Line Interface


 Graphical User Interface

4.4 SECURITY ISSUES

Some of the security issues in trash detection technology are:

 The use of waste recognition technology raises significant privacy concerns.


 Individuals may not consent to having their trash scanned and stored in
databases, yet they may still be subject to surveillance without their
knowledge or consent.

 Waste recognition systems may produce false positives or false negatives,
leading to incorrect sorting decisions and undermining public trust in the
recycling process.

4.5 TEST CASE DESIGN

In the Python-based automatic trash detection project implementation, errors are found
easily and rectified with the help of error handling.
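As a sketch of that idea (the helper and file name below are hypothetical), risky steps such as reading an uploaded image can be wrapped in try/except so a failure is reported instead of crashing the application:

```python
def load_image_safely(path):
    """Return the file's bytes, or None if the file cannot be read."""
    try:
        with open(path, "rb") as f:
            return f.read()
    except (FileNotFoundError, PermissionError) as err:
        print("Could not load image:", err)
        return None

result = load_image_safely("no_such_image.jpg")  # hypothetical missing file
print(result is None)  # True
```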

CHAPTER 5

IMPLEMENTATION AND TESTING

5.1 IMPLEMENTATION APPROACHES

Implementation is the stage in the project where the theoretical design is turned into a
working system. It is the most critical stage in achieving a successful system and in giving
users confidence that the new system will work efficiently and effectively. It involves careful
planning, investigation of the current system and its constraints on implementation, and the
design of methods to achieve the changeover. The implementation process begins with
preparing a plan for the implementation of the system. According to this plan, the activities
are carried out; discussions are held regarding the equipment, resources, and test activities.
The coding step translates a detailed design representation into a programming language.

Realization: Programming languages are vehicles for communication between humans and
computers. Programming language characteristics and coding style can profoundly affect
software quality and maintainability. The coding is done with the following characteristics in
mind.
 Ease of design to code translation.
 Code efficiency.
 Memory efficiency.
 Maintainability.

5.2 CODING DETAILS AND CODE EFFICIENCY

import numpy as np
import matplotlib.pyplot as plt
import os
import random
import glob            # to find files
import seaborn as sns  # Seaborn library for bar chart

# Libraries for TensorFlow
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing import image
from tensorflow.keras import models, layers

# Library for Transfer Learning
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

print("Importing libraries completed.")

train_folder = "Data/"

# Variables for image size
img_width = 100
img_height = 100

# Variables for the model
batch_size = 32
epochs = 10

print("Variable declaration completed.")

# Listing the folders (class names) containing the training images
train_class_names = os.listdir(train_folder)
print("Train class names: %s" % (train_class_names))
print("\nDataset class name listing completed.")

# Preparing the training data
x = []  # to store the array values of the images
y = []  # to store the labels of the images

for folder in os.listdir(train_folder):
    image_list = os.listdir(train_folder + "/" + folder)
    for img_name in image_list:
        # Loading the image at the target size
        img = image.load_img(train_folder + "/" + folder + "/" + img_name,
                             target_size=(img_width, img_height))
        # Converting to array
        img = image.img_to_array(img)
        # Transfer Learning: apply the VGG16 preprocessing to our images
        # before passing them to VGG16
        img = preprocess_input(img)
        x.append(img)                              # appending image array
        y.append(train_class_names.index(folder))  # appending class index

print("Preparing Training Dataset Completed.")

# Training Dataset
print("Training Dataset")
x = np.array(x)        # converting to an np array to pass to the model
print(x.shape)
y = to_categorical(y)  # one-hot encoding of the labels
print(y.shape)

print("Summary of default VGG16 model.\n")

# VGG16 is used for transfer learning here
model_vgg16 = VGG16(weights='imagenet')
model_vgg16.summary()

print("Summary of Custom VGG16 model.\n")

input_layer = layers.Input(shape=(img_width, img_height, 3))
model_vgg16 = VGG16(weights='imagenet', input_tensor=input_layer, include_top=False)
model_vgg16.summary()

last_layer = model_vgg16.output
flatten = layers.Flatten()(last_layer)
output_layer = layers.Dense(6, activation='softmax')(flatten)
model = models.Model(inputs=input_layer, outputs=output_layer)
model.summary()

# Freeze all layers except the final Dense classifier
for layer in model.layers[:-1]:
    layer.trainable = False

model.summary()

from sklearn.model_selection import train_test_split

xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.2, random_state=0)
print("Splitting data for train and test completed.")

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print("Model compilation completed.")

# Validate on the held-out test split, not on the training data
history2 = model.fit(xtrain, ytrain, epochs=epochs, batch_size=batch_size,
                     verbose=True, validation_data=(xtest, ytest))
print("Fitting the model completed.")

model.save("Vggmodel.h5")

acc = history2.history['accuracy']
val_acc = history2.history['val_accuracy']
epochs_range = range(len(acc))  # avoid shadowing the integer `epochs`

plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.grid(True)
plt.show()

# Plot Model Loss
loss_train = history2.history['loss']
loss_val = history2.history['val_loss']

plt.plot(epochs_range, loss_train, label='Training Loss')
plt.plot(epochs_range, loss_val, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.grid(True)
plt.show()

# Evaluate on the held-out test set
y_pred = model.predict(xtest)
y_pred = np.argmax(y_pred, axis=1)
print(y_pred)

y_test = np.argmax(ytest, axis=1)

from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

print(classification_report(y_test, y_pred))

cm = confusion_matrix(y_test, y_pred)
print(cm)

sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted Label')
plt.ylabel('True Label')
plt.title('Confusion Matrix of Vgg16')
plt.show()

from flask import Flask, render_template, flash, request, session
import warnings
import os
import mysql.connector

app = Flask(__name__)
app.config.from_object(__name__)
app.config['SECRET_KEY'] = '7d441f27d441f27567d441f2b6176a'

@app.route("/")
def homepage():
    return render_template('index.html')

@app.route("/Prediction")
def Prediction():
    return render_template('Prediction.html')

@app.route("/predict", methods=['GET', 'POST'])
def predict():
    if request.method == 'POST':
        import tensorflow as tf
        import numpy as np
        import cv2
        from keras.preprocessing import image

        file = request.files['file']
        file.save('static/upload/Test.jpg')
        org = 'static/upload/Test.jpg'

        # Denoise the uploaded image
        img1 = cv2.imread('static/upload/Test.jpg')
        dst = cv2.fastNlMeansDenoisingColored(img1, None, 10, 10, 7, 21)
        noi = 'static/upload/noi.jpg'
        cv2.imwrite(noi, dst)

        classifierLoad = tf.keras.models.load_model('Vggmodel.h5')

        test_image = image.load_img('static/upload/Test.jpg', target_size=(100, 100))
        test_image = image.img_to_array(test_image)  # convert PIL image to array
        test_image = np.expand_dims(test_image, axis=0)

        result = classifierLoad.predict(test_image)
        print(result)
        result = np.argmax(result, axis=1)
        print(result)

        out = ''
        pre = ''
        if result[0] == 0:
            print("cardboard")
            out = "cardboard"
            pre = "Degradable"
        elif result[0] == 1:
            print("glass")
            out = "glass"
            pre = "Non-Degradable"
        elif result[0] == 2:
            print("metal")
            out = "metal"
            pre = "Non-Degradable"
        elif result[0] == 3:
            print("paper")
            out = "paper"
            pre = "Degradable"
        elif result[0] == 4:
            print("plastic")
            out = "plastic"
            pre = "Non-Degradable"
        elif result[0] == 5:
            print("trash")
            out = "trash"
            pre = "Degradable"

        sendmsg("9486365535", 'Prediction Result : ' + out + ' ' + pre)
        return render_template('Result.html', res=out, pre=pre, org=org, noi=noi)

def sendmsg(targetno, message):
    import requests
    requests.post(
        "http://sms.creativepoint.in/api/push.json?apikey=6555c521622c1&route=transsms&sender=FSSMSS&mobileno="
        + str(targetno) + "&text=Dear customer your msg is " + message + " Sent By FSMSG FSSMSS")

@app.route("/Camera")
def Camera():
    import warnings
    import cv2
    import numpy as np
    import os
    import time

    args = {"confidence": 0.5, "threshold": 0.3}
    flag = False

    # Load the COCO class labels the YOLO model was trained on
    labelsPath = "./yolo-coco/coco.names"
    LABELS = open(labelsPath).read().strip().split("\n")
    final_classes = ['bottle', 'wine glass', 'cup', 'cell phone', 'book']

    np.random.seed(42)
    COLORS = np.random.randint(0, 255, size=(len(LABELS), 3), dtype="uint8")

    weightsPath = os.path.abspath("./yolo-coco/yolov3-tiny.weights")
    configPath = os.path.abspath("./yolo-coco/yolov3-tiny.cfg")

    net = cv2.dnn.readNetFromDarknet(configPath, weightsPath)
    ln = net.getLayerNames()
    # note: newer OpenCV versions return a flat array; use ln[i - 1] there
    ln = [ln[i[0] - 1] for i in net.getUnconnectedOutLayers()]

    vs = cv2.VideoCapture(0)
    writer = None
    (W, H) = (None, None)
    flag = True
    flagg = 0

    while True:
        # read the next frame from the stream
        (grabbed, frame) = vs.read()
        # if the frame was not grabbed, we have reached the end of the stream
        if not grabbed:
            break
        # if the frame dimensions are empty, grab them
        if W is None or H is None:
            (H, W) = frame.shape[:2]

        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        start = time.time()
        layerOutputs = net.forward(ln)
        end = time.time()

        # initialize our lists of detected bounding boxes, confidences,
        # and class IDs, respectively
        boxes = []
        confidences = []
        classIDs = []

        # loop over each of the layer outputs
        for output in layerOutputs:
            # loop over each of the detections
            for detection in output:
                # extract the class ID and confidence (i.e., probability)
                # of the current object detection
                scores = detection[5:]
                classID = np.argmax(scores)
                confidence = scores[classID]
                # filter out weak predictions by ensuring the detected
                # probability is greater than the minimum probability
                if confidence > args["confidence"]:
                    # scale the bounding box coordinates back relative to the
                    # size of the image; YOLO returns the center (x, y) of the
                    # bounding box followed by the box width and height
                    box = detection[0:4] * np.array([W, H, W, H])
                    (centerX, centerY, width, height) = box.astype("int")
                    # use the center (x, y)-coordinates to derive the top
                    # and left corner of the bounding box
                    x = int(centerX - (width / 2))
                    y = int(centerY - (height / 2))
                    # update our lists of bounding box coordinates,
                    # confidences, and class IDs
                    boxes.append([x, y, int(width), int(height)])
                    confidences.append(float(confidence))
                    classIDs.append(classID)

        # apply non-maxima suppression to suppress weak, overlapping boxes
        idxs = cv2.dnn.NMSBoxes(boxes, confidences, args["confidence"],
                                args["threshold"])

        # ensure at least one detection exists
        if len(idxs) > 0:
            # loop over the indexes we are keeping
            for i in idxs.flatten():
                # extract the bounding box coordinates
                (x, y) = (boxes[i][0], boxes[i][1])
                (w, h) = (boxes[i][2], boxes[i][3])
                if LABELS[classIDs[i]] in final_classes:
                    flagg += 1
                    color = [int(c) for c in COLORS[classIDs[i]]]
                    cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
                    text = "{}: {:.4f}".format(LABELS[classIDs[i]],
                                               confidences[i])
                    cv2.putText(frame, text, (x, y - 5),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
                    if flagg == 40:
                        flagg = 0
                        out = LABELS[classIDs[i]]
                        pre = ''
                        if out == "bottle":
                            pre = "Non-Degradable"
                        elif out == "wine glass":
                            pre = "Non-Degradable"
                        elif out == "cup":
                            pre = "Degradable"
                        elif out == "cell phone":
                            pre = "Non-Degradable"
                        elif out == "book":
                            pre = "Degradable"
                        sendmsg("9486365535", 'Prediction Result : ' + out + ' ' + pre)
                else:
                    flag = True

        cv2.imshow("Output", frame)
        if cv2.waitKey(1) == ord('q'):
            break

    # release the webcam and destroy all active windows
    vs.release()
    cv2.destroyAllWindows()
    return render_template('index.html')

if __name__ == '__main__':
    # app.run(host='0.0.0.0', debug=True, port=5000)
    app.run(debug=True, use_reloader=True)

5.3 TESTING APPROACH

TESTING

Software testing is a method of assessing the functionality of a software program.
There are many different types of software testing, but the two main categories are dynamic
testing and static testing. Dynamic testing is an assessment that is conducted while the
program is executed; static testing, on the other hand, is an examination of the program's code
and associated documentation. Dynamic and static methods are often used together.

Testing is a systematic activity that can be planned and conducted methodically. Testing
begins at the module level and works towards the integration of the entire computer-based
system. Nothing is complete without testing, as it is vital to the success of the system.

Testing Objectives:

There are several rules that can serve as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an undiscovered error.
3. A successful test is one that uncovers an undiscovered error.

If testing is conducted successfully according to the objectives stated above, it will
uncover errors in the software. Testing also demonstrates that the software functions appear
to be working according to the specification, and that the performance requirements appear to
have been met.
There are three ways to test a program
1. For Correctness
2. For Implementation efficiency
3. For Computational Complexity.

Tests for correctness are supposed to verify that a program does exactly what it was
designed to do. This is much more difficult than it may at first appear, especially for large
programs.

Testing for implementation efficiency attempts to find ways to make a correct
program faster or use less storage. It is a code-refining process, which re-examines the
implementation phase of algorithm development. Tests for computational complexity amount
to an experimental analysis of the complexity of an algorithm or an experimental comparison
of two or more algorithms that solve the same problem.

The testing is carried in the following steps,

1. Unit Testing
2. Integration Testing
3. System Testing
4. Validation Testing
5. Acceptance Testing
6. Functional Testing

1. Unit Testing

Unit testing refers to testing all of the individual programs. This is sometimes called
program testing. This test should be carried out during the programming stage in order to find
errors in the coding and logic of each program in each module. Unit testing focuses
verification effort on the smallest unit of software design, the module. In this project, the user
must fill in each field; otherwise the user is prompted to enter the required values.

 Reduces defects in newly developed features and reduces bugs when changing
existing functionality.
 Reduces the cost of testing, as defects are captured in a very early phase.
 Improves design and allows better refactoring of code.
 Unit tests, when integrated with the build, indicate the quality of the build as well.
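For illustration, a unit test for this project might exercise a small helper in isolation; the `classify_index` function below is a hypothetical stand-in for the label-mapping logic used in the prediction module:

```python
def classify_index(idx):
    """Map a predicted class index to its label and degradability category."""
    labels = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]
    degradable = {"cardboard", "paper", "trash"}
    name = labels[idx]
    return name, "Degradable" if name in degradable else "Non-Degradable"

def test_classify_index():
    assert classify_index(0) == ("cardboard", "Degradable")
    assert classify_index(4) == ("plastic", "Non-Degradable")

test_classify_index()
print("unit tests passed")
```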

2. Integration Testing

Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects. The task of the integration test is to check that components or software applications,
e.g. components in a software system or, one step up, software applications at the company
level, interact without error.

3. System Testing

System testing is used to test the entire system (Integration of all the modules). It also
tests to find the discrepancies between the system and the original objective, current
specification and system documentation. The entire system is checked to correct deviation
to achieve correctness.

4. Validation Testing

Valid and invalid data should be created, and the program should be made to process
this data to catch errors. The user of each module enters the page through the login page
using a user id and password. If the user gives a wrong password or user id, the user is
informed with a message such as "you must enter user id and password". Here the inputs
given by the user are validated: password validation, correct date format, and textbox
validation. Changes are made based on the results of this testing.

5. Acceptance Testing

Acceptance testing can be defined in many ways, but a simple definition is that it
succeeds when the software functions in a manner that can be reasonably expected by the
customer. After the acceptance test has been conducted, one of two possible conditions
exists: the function or performance characteristics conform to specification and are accepted,
or a deviation from specification is uncovered and a deficiency list is created. This test also
checks whether the inputs are accepted by the database, along with other validations. For
example, accept only numbers in numeric fields, date-format data in date fields, and null
checks for not-null fields. If any error occurs, the corresponding error messages are shown.
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

6. Functional Testing

Functional test can be defined as testing two or more modules together with the
intent of finding defects, demonstrating that defects are not present, verifying that the module
performs its intended functions as stated in the specification and establishing confidence that
a program does what it is supposed to do.

CHAPTER 6
RESULT AND DISCUSSION

6.1 TEST REPORTS

HOME PAGE

MODEL BUILD

M:\Project-2024\BE\KU\Hlaf\WasteClassificationNewPy\venv\Scripts\python.exe
M:/Project-2024/BE/KU/Hlaf/WasteClassificationNewPy/VggModel.py

2024-03-07 20:59:20.787413: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found

2024-03-07 20:59:20.788114: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.

Importing libraries completed.

Variable declaration completed.

Train class names: ['cardboard', 'glass', 'metal', 'paper', 'plastic', 'trash']

Dataset class name listing completed.

Preparing Training Dataset Completed.

Training Dataset

(2527, 100, 100, 3)

(2527, 6)

Summary of default VGG16 model.

2024-03-07 20:59:32.932845: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'nvcuda.dll'; dlerror: nvcuda.dll not found

2024-03-07 20:59:32.934668: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)

2024-03-07 20:59:32.950200: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-9BF8NUN

2024-03-07 20:59:32.950538: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-9BF8NUN

2024-03-07 20:59:32.956161: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2

To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

2024-03-07 20:59:33.458594: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 411041792 exceeds 10% of free system memory.

2024-03-07 20:59:34.463299: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 411041792 exceeds 10% of free system memory.

2024-03-07 20:59:35.174664: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 411041792 exceeds 10% of free system memory.

2024-03-07 20:59:36.361364: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 67108864 exceeds 10% of free system memory.

2024-03-07 20:59:36.481056: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 67108864 exceeds 10% of free system memory.

Model: "vgg16"

_________________________________________________________________

Layer (type) Output Shape Param #

=================================================================

input_1 (InputLayer) [(None, 224, 224, 3)] 0

block1_conv1 (Conv2D) (None, 224, 224, 64) 1792

block1_conv2 (Conv2D) (None, 224, 224, 64) 36928

block1_pool (MaxPooling2D) (None, 112, 112, 64) 0

block2_conv1 (Conv2D) (None, 112, 112, 128) 73856

block2_conv2 (Conv2D) (None, 112, 112, 128) 147584

block2_pool (MaxPooling2D) (None, 56, 56, 128) 0

block3_conv1 (Conv2D) (None, 56, 56, 256) 295168

block3_conv2 (Conv2D) (None, 56, 56, 256) 590080

block3_conv3 (Conv2D) (None, 56, 56, 256) 590080

block3_pool (MaxPooling2D) (None, 28, 28, 256) 0

block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160

block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808

block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808

block4_pool (MaxPooling2D) (None, 14, 14, 512) 0

block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808

block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808

block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808

block5_pool (MaxPooling2D) (None, 7, 7, 512) 0

flatten (Flatten) (None, 25088) 0

fc1 (Dense) (None, 4096) 102764544

fc2 (Dense) (None, 4096) 16781312

predictions (Dense) (None, 1000) 4097000

=================================================================

Total params: 138,357,544

Trainable params: 138,357,544

Non-trainable params: 0

_________________________________________________________________

Summary of Custom VGG16 model.

Model: "vgg16"

_________________________________________________________________

Layer (type) Output Shape Param #

=================================================================

input_2 (InputLayer) [(None, 100, 100, 3)] 0

block1_conv1 (Conv2D) (None, 100, 100, 64) 1792

block1_conv2 (Conv2D) (None, 100, 100, 64) 36928

block1_pool (MaxPooling2D) (None, 50, 50, 64) 0

block2_conv1 (Conv2D) (None, 50, 50, 128) 73856

block2_conv2 (Conv2D) (None, 50, 50, 128) 147584

block2_pool (MaxPooling2D) (None, 25, 25, 128) 0

block3_conv1 (Conv2D) (None, 25, 25, 256) 295168

block3_conv2 (Conv2D) (None, 25, 25, 256) 590080

block3_conv3 (Conv2D) (None, 25, 25, 256) 590080

block3_pool (MaxPooling2D) (None, 12, 12, 256) 0

block4_conv1 (Conv2D) (None, 12, 12, 512) 1180160

block4_conv2 (Conv2D) (None, 12, 12, 512) 2359808

block4_conv3 (Conv2D) (None, 12, 12, 512) 2359808

block4_pool (MaxPooling2D) (None, 6, 6, 512) 0

block5_conv1 (Conv2D) (None, 6, 6, 512) 2359808

block5_conv2 (Conv2D) (None, 6, 6, 512) 2359808

block5_conv3 (Conv2D) (None, 6, 6, 512) 2359808

block5_pool (MaxPooling2D) (None, 3, 3, 512) 0

=================================================================

Total params: 14,714,688

Trainable params: 14,714,688

Non-trainable params: 0

_________________________________________________________________

Model: "model"

_________________________________________________________________

Layer (type) Output Shape Param #

=================================================================

input_2 (InputLayer) [(None, 100, 100, 3)] 0

block1_conv1 (Conv2D) (None, 100, 100, 64) 1792

block1_conv2 (Conv2D) (None, 100, 100, 64) 36928

block1_pool (MaxPooling2D) (None, 50, 50, 64) 0

block2_conv1 (Conv2D) (None, 50, 50, 128) 73856

block2_conv2 (Conv2D) (None, 50, 50, 128) 147584

block2_pool (MaxPooling2D) (None, 25, 25, 128) 0

block3_conv1 (Conv2D) (None, 25, 25, 256) 295168

block3_conv2 (Conv2D) (None, 25, 25, 256) 590080

block3_conv3 (Conv2D) (None, 25, 25, 256) 590080

block3_pool (MaxPooling2D) (None, 12, 12, 256) 0

block4_conv1 (Conv2D) (None, 12, 12, 512) 1180160

block4_conv2 (Conv2D) (None, 12, 12, 512) 2359808

block4_conv3 (Conv2D) (None, 12, 12, 512) 2359808

block4_pool (MaxPooling2D) (None, 6, 6, 512) 0

block5_conv1 (Conv2D) (None, 6, 6, 512) 2359808

block5_conv2 (Conv2D) (None, 6, 6, 512) 2359808

block5_conv3 (Conv2D) (None, 6, 6, 512) 2359808

block5_pool (MaxPooling2D) (None, 3, 3, 512) 0

flatten (Flatten) (None, 4608) 0

dense (Dense) (None, 6) 27654

=================================================================

Total params: 14,742,342

Trainable params: 14,742,342

Non-trainable params: 0

_________________________________________________________________

Model: "model"

_________________________________________________________________

Layer (type) Output Shape Param #

=================================================================

input_2 (InputLayer) [(None, 100, 100, 3)] 0

block1_conv1 (Conv2D) (None, 100, 100, 64) 1792

block1_conv2 (Conv2D) (None, 100, 100, 64) 36928

block1_pool (MaxPooling2D) (None, 50, 50, 64) 0

block2_conv1 (Conv2D) (None, 50, 50, 128) 73856

block2_conv2 (Conv2D) (None, 50, 50, 128) 147584

block2_pool (MaxPooling2D) (None, 25, 25, 128) 0

block3_conv1 (Conv2D) (None, 25, 25, 256) 295168

block3_conv2 (Conv2D) (None, 25, 25, 256) 590080

block3_conv3 (Conv2D) (None, 25, 25, 256) 590080

block3_pool (MaxPooling2D) (None, 12, 12, 256) 0

block4_conv1 (Conv2D) (None, 12, 12, 512) 1180160

block4_conv2 (Conv2D) (None, 12, 12, 512) 2359808

block4_conv3 (Conv2D) (None, 12, 12, 512) 2359808

block4_pool (MaxPooling2D) (None, 6, 6, 512) 0

block5_conv1 (Conv2D) (None, 6, 6, 512) 2359808

block5_conv2 (Conv2D) (None, 6, 6, 512) 2359808

block5_conv3 (Conv2D) (None, 6, 6, 512) 2359808

block5_pool (MaxPooling2D) (None, 3, 3, 512) 0

flatten (Flatten) (None, 4608) 0

dense (Dense) (None, 6) 27654

=================================================================

Total params: 14,742,342

Trainable params: 27,654

Non-trainable params: 14,714,688

_________________________________________________________________

Splitting data for train and test completed.

Model compilation completed.

Epoch 1/10
64/64 [==============================] - 92s 1s/step - loss: 5.9744 - accuracy: 0.6032 - val_loss: 1.0710 - val_accuracy: 0.8615
Epoch 2/10
64/64 [==============================] - 89s 1s/step - loss: 1.1482 - accuracy: 0.8605 - val_loss: 0.3089 - val_accuracy: 0.9317
Epoch 3/10
64/64 [==============================] - 90s 1s/step - loss: 0.3918 - accuracy: 0.9228 - val_loss: 0.1518 - val_accuracy: 0.9609
Epoch 4/10
64/64 [==============================] - 88s 1s/step - loss: 0.2000 - accuracy: 0.9604 - val_loss: 0.1320 - val_accuracy: 0.9619
Epoch 5/10
64/64 [==============================] - 89s 1s/step - loss: 0.0789 - accuracy: 0.9787 - val_loss: 0.0990 - val_accuracy: 0.9758
Epoch 6/10
64/64 [==============================] - 95s 1s/step - loss: 0.1212 - accuracy: 0.9748 - val_loss: 0.0663 - val_accuracy: 0.9842
Epoch 7/10
64/64 [==============================] - 94s 1s/step - loss: 0.0828 - accuracy: 0.9807 - val_loss: 0.0296 - val_accuracy: 0.9926
Epoch 8/10
64/64 [==============================] - 95s 1s/step - loss: 0.0714 - accuracy: 0.9857 - val_loss: 0.0924 - val_accuracy: 0.9812
Epoch 9/10
64/64 [==============================] - 99s 2s/step - loss: 0.0549 - accuracy: 0.9926 - val_loss: 0.0352 - val_accuracy: 0.9946
Epoch 10/10
64/64 [==============================] - 103s 2s/step - loss: 0.0455 - accuracy: 0.9931 - val_loss: 0.0311 - val_accuracy: 0.9965

Fitting the model completed.

TRAINING ACCURACY

TRAINING LOSS

CLASSIFICATION REPORT

precision recall f1-score support

0 1.00 1.00 1.00 333

1 0.99 1.00 0.99 399

2 1.00 0.98 0.99 323

3 1.00 1.00 1.00 465

4 1.00 1.00 1.00 394

5 1.00 1.00 1.00 107

accuracy 1.00 2021

macro avg 1.00 1.00 1.00 2021

weighted avg 1.00 1.00 1.00 2021

CONFUSION MATRIX

PREDICTION

UPLOAD TRASH IMAGE

TRASH CLASSIFICATION

PREPROCESSING

TRASH CLASSIFICATION

REAL TIME INTERFACE-BASED TRASH DETECTION

6.2 USER DOCUMENTATION

Users of a system are not all the same. The producer of documentation must structure it to
cater for different user tasks and different levels of expertise and experience. It is particularly
important to distinguish between end-users and system administrators:

1. End-users use the software to assist with some task. This may be flying an aircraft,
managing insurance policies, writing a book, etc. They want to know how the software
can help them. They are not interested in computer or administration details.
2. System administrators are responsible for managing the software used by end-users.
This may involve acting as an operator if the system is a large mainframe system, as a
network manager if the system involves a network of workstations, or as a technical
guru who fixes end-users' software problems and who liaises between users and the
software supplier.

CHAPTER 7
CONCLUSIONS
7.1 CONCLUSION

The Trash Classification Using Deep Learning Algorithm system using VGG16 CNN
is an efficient approach towards automatic waste classification using deep learning
techniques. The proposed system aims to solve the issue of improper waste management by
classifying waste materials into different categories. The VGG16 architecture has been used
for the proposed system as it is a powerful and widely used architecture in image
classification. The system pre-processes the input images to enhance their quality. The
VGG16 CNN model is then trained on these images, and the learned features are used to
perform waste classification. The proposed system has various advantages
such as high accuracy, reduced human intervention, and better waste management. The
system can handle large datasets and can classify the waste into different categories with high
accuracy, which helps in waste management and recycling. Compared to existing waste
classification algorithms, the proposed VGG16-based system showed better accuracy and
robustness. The VGG16 architecture, with its deep layers and ability to learn complex
features, proved to be a powerful tool in image classification tasks.
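One plausible way to assemble the VGG16-based classifier described above is sketched below. This is an illustration under assumptions, not the authors' exact code: the dense-head sizes are assumed, and `weights=None` is used only to avoid the ImageNet weight download, whereas a real transfer-learning run would typically use `weights="imagenet"` with the convolutional base frozen.

```python
# Sketch: VGG16 convolutional base as feature extractor, plus a small
# dense head for the 6-way waste classification.
from tensorflow import keras
from tensorflow.keras.applications import VGG16

base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the feature extractor

model = keras.Sequential([
    base,
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(6, activation="softmax"),  # 6 waste categories
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```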

7.2 LIMITATIONS OF THE SYSTEM

 The system's accuracy and robustness may be influenced by factors such as
lighting conditions, image quality, and variations in waste appearance.
 The effectiveness of the system heavily relies on the diversity and
representativeness of the training dataset.
 Limited access to high-performance computing infrastructure may pose
challenges for scalability and real-time processing, particularly in resource-
constrained environments.

7.3 FUTURE SCOPE OF THE PROJECT

With further research and development, the proposed system can be integrated with a
smart waste collection system to provide real-time waste classification and segregation.
Another possible area for improvement is the integration of real-time waste detection using
sensors and cameras. This would allow for more accurate and efficient waste classification,
as well as the ability to monitor and analyse waste trends over time. Additionally, the
implementation of automated waste sorting systems could be explored, where the waste is
sorted into appropriate categories using robotics and artificial intelligence.
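The real-time pipeline suggested above could be sketched roughly as follows. Everything here is a stand-in: random arrays replace camera frames, a uniform dummy predictor replaces the trained model, and the assumed class names are illustrative. In practice, OpenCV's `cv2.VideoCapture(0)` would supply real frames and `cv2.resize` would handle resizing.

```python
# Sketch: classify a stream of camera frames one by one.
import numpy as np

def classify_frame(frame, predict_fn, class_names):
    """Crop/scale one frame and return the predicted class name."""
    small = frame[:224, :224].astype("float32") / 255.0  # stand-in for cv2.resize
    probs = predict_fn(small[np.newaxis])
    return class_names[int(np.argmax(probs))]

class_names = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]
dummy_predict = lambda batch: np.full((1, 6), 1.0 / 6)  # stand-in model
for _ in range(3):  # simulated frame stream
    frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    print(classify_frame(frame, dummy_predict, class_names))
```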

REFERENCES
BOOKS

 Heinold, Brian. "A Practical Introduction to Python Programming." (2021).
 Kneusel, Ronald T. Practical Deep Learning: A Python-Based Introduction.
No Starch Press, 2021.
 Dhruv, Akshit J., Reema Patel, and Nishant Doshi. "Python: The Most
Advanced Programming Language for Computer Science Applications." Science
and Technology Publications, Lda (2021): 292-299.
 Sundnes, Joakim. Introduction to Scientific Programming with Python.
Springer Nature, 2020.
 Hill, Christian. Learning Scientific Programming with Python. Cambridge
University Press, 2020.

WEBSITES

https://docs.python.org/3/tutorial/
https://www.w3schools.com/python/
https://www.tutorialspoint.com/python/index.htm
https://www.programiz.com/python-programming
