
Fake Multiple Object Detection

A
Mini Project Report

Submitted to

Jawaharlal Nehru Technological University, Hyderabad


In partial fulfilment of the requirements for the
award of the degree of
BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING
By

JELLA VEDHA PRABHA (22VE5A0504)

Under the Guidance of
Mrs. P. ARCHANA
ASSISTANT PROFESSOR

SREYAS INSTITUTE OF ENGINEERING AND TECHNOLOGY


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
(Affiliated to JNTUH, Approved by A.I.C.T.E and Accredited by NAAC, New Delhi)
Beside Indu Aranya, Nagole, Hyderabad-500068, Ranga Reddy Dist.
(2021-2025)

I
SREYAS INSTITUTE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE
This is to certify that the Mini Project Report on “FAKE MULTIPLE OBJECT DETECTION”,
submitted by Jella Vedha Prabha, bearing Hall Ticket number 22VE5A0504, in partial fulfilment
of the requirements for the award of the degree of Bachelor of Technology in COMPUTER
SCIENCE AND ENGINEERING from Jawaharlal Nehru Technological University, Kukatpally,
Hyderabad, for the academic year 2024-2025, is a record of bona fide work carried out by her
under our guidance and supervision.

Project Coordinator Head of the Department


Dr. U. M. FERNANDES DIMLO Dr. U. M. FERNANDES DIMLO

Professor & Head Professor & Head

Internal Guide External Examiner


Mrs. P. ARCHANA
Assistant Professor

II
SREYAS INSTITUTE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

DECLARATION

I, Jella Vedha Prabha, bearing Hall Ticket number 22VE5A0504, hereby declare that the Mini
Project titled FAKE MULTIPLE OBJECT DETECTION, carried out by me under the guidance of
Mrs. P. ARCHANA, Assistant Professor, and submitted in partial fulfilment of the requirements
for the award of the B.Tech degree in Computer Science and Engineering at Sreyas Institute of
Engineering and Technology, affiliated to Jawaharlal Nehru Technological University, Hyderabad,
is my original work.

JELLA VEDHA PRABHA (22VE5A0504)

III
ACKNOWLEDGEMENT

The successful completion of any task would be incomplete without mentioning the
people who made it possible, whose guidance and encouragement crowned all the efforts
with success.
I take this opportunity to acknowledge, with thanks and a deep sense of gratitude, Mrs.
P. ARCHANA, Assistant Professor, Department of Computer Science and Engineering,
for her constant encouragement and valuable guidance during the project work.
A special vote of thanks to Dr. U. M. Fernandes Dimlo, Head of the Department and
Project Coordinator, who has been a source of continuous motivation and support. He
took time and effort to guide and correct us throughout the span of this work.
I owe a great deal to the department faculty, the Principal, and the management, who
made my time at Sreyas Institute of Engineering and Technology a stepping stone for my
career. I treasure every moment I spent in college.
Last but not least, my heartfelt gratitude goes to my parents and friends for their
continuous encouragement and blessings. Without their support, this work would not have
been possible.

JELLA VEDHA PRABHA (22VE5A0504)

IV
ABSTRACT

Counterfeiting is a pervasive issue that poses significant challenges to industries,
economies, and individuals worldwide. The production of counterfeit goods, such as fake
logos and currency, undermines brand integrity, causes financial losses, and destabilizes
economic systems. Traditional methods for counterfeit detection, including manual
inspections and specialized hardware, are often expensive, time-consuming, and prone to
human error. These limitations necessitate the development of an efficient, cost-effective,
and scalable solution.
This project introduces an integrated platform for counterfeit detection, focusing on logos
and currency authenticity. The proposed system leverages advanced machine learning
models and a user-friendly Streamlit interface to deliver real-time predictions with high
accuracy. By combining the power of convolutional neural networks (CNNs) and intuitive
web-based technology, the system addresses the critical challenges associated with
counterfeiting.
The system operates through a streamlined workflow: users upload images of logos or
currency via the web interface, which are then processed by pre-trained CNN models.
These models, fine-tuned to detect patterns and anomalies, classify the uploaded items as
genuine or counterfeit. The results are displayed in an easy-to-understand format, ensuring
accessibility for both technical and non-technical users.

KEYWORDS: Machine Learning (ML), Streamlit, Convolutional Neural Networks (CNNs), Logos, Currency.

V
S.NO  TABLE OF CONTENTS                          PAGE NO.

1     INTRODUCTION                               1-6
      1.1 GENERAL                                1
      1.2 PROBLEM STATEMENT                      2
      1.3 EXISTING SYSTEM                        3
          1.3.1 DRAWBACKS                        4
      1.4 PROPOSED SYSTEM                        4
          1.4.1 ADVANTAGES                       6

2     LITERATURE SURVEY                          7-8
      2.1 TECHNICAL PAPERS                       7

3     REQUIREMENTS                               9-12
      3.1 GENERAL                                9
      3.2 HARDWARE REQUIREMENTS                  9
      3.3 SOFTWARE REQUIREMENTS                  10
      3.4 FUNCTIONAL REQUIREMENTS                11
      3.5 NON-FUNCTIONAL REQUIREMENTS            12

4     SYSTEM DESIGN                              13-23
      4.1 GENERAL                                13
      4.2 SYSTEM ARCHITECTURE                    16
      4.3 UML DESIGN                             16
          4.3.1 USE-CASE DIAGRAM                 18
          4.3.2 CLASS DIAGRAM                    20
          4.3.3 ACTIVITY DIAGRAM                 22

VI
5     TECHNOLOGY DESCRIPTION                     24-37
      5.1 WHAT IS PYTHON?                        24
      5.2 ADVANTAGES OF PYTHON                   26
      5.3 LIBRARIES                              30
      5.4 DISADVANTAGES OF PYTHON                37

6     IMPLEMENTATION                             38-46
      6.1 METHODOLOGY                            38
      6.2 SAMPLE CODE                            40

7     TESTING                                    47-48
      7.1 GENERAL                                47
      7.2 TYPES OF TESTING                       47
      7.3 TEST CASES                             48

8     RESULTS                                    49-53
      8.1 RESULTS SCREENSHOTS                    49

9     FUTURE SCOPE                               54-55

10    CONCLUSION                                 56-57

11    REFERENCES                                 58-59

VII
FIG. NO/TAB.NO LIST OF FIGURES AND TABLES PAGE NO.

4.1 Architecture Diagram 16

4.2 Use Case Diagram 19

4.3 Class Diagram 21

4.4 Activity Diagram 23

7.1 Types of Testing 47

7.2 Test cases table 48

VIII
SCREENSHOT. NO LIST OF SCREENSHOTS PAGE. NO

8.1 Integrated Streamlit Interface 49

8.2 Detect Logo Interface 50

8.3 Logo Analysis Result- Real 50

8.4 Detect Logo Interface 51

8.5 Logo Analysis Result- Fake 51

8.6 Detect Currency Interface 52

8.7 Currency Analysis Result- Real 52

8.8 Detect Currency Interface 53

8.9 Currency Analysis Result- Fake 53

IX
LIST OF SYMBOLS

S.NO  NAME OF SYMBOL       DESCRIPTION

1     CLASS                Represents a collection of similar entities grouped together.

2     ASSOCIATION          Associations represent static relationships between classes. Roles
                           represent the way the two classes see each other.

3     ACTOR                An external entity (a person, another system, or a hardware device)
                           that interacts with the system.

4     RELATION (uses)      Used for additional process communication.

5     RELATION (extends)   The extends relationship is used when one use case is similar to
                           another use case.

6     COMMUNICATION        Communication between various use cases.

7     STATE                State of the process.

8     INITIAL STATE        Initial state of the object.

9     FINAL STATE          Final state of the object.

10    CONTROL FLOW         Represents the various control flows between the states.

11    DECISION BOX         Represents a decision-making process based on a constraint.

12    USE CASE             Interaction between the system and the external environment.

13    COMPONENT            Represents physical modules which are a collection of components.

14    NODE                 Represents a physical computational resource on which components
                           are deployed.

15    DATA PROCESS/STATE   A circle in a DFD represents a state or process which has been
                           triggered due to some event or action.

16    EXTERNAL ENTITY      Represents external entities such as keyboards, sensors, etc.

17    TRANSITION           Represents communication that occurs between processes.

18    OBJECT LIFELINE      Represents the vertical dimension along which an object exists and
                           communicates.

19    MESSAGE              Represents the message exchanged.

XIII
CHAPTER 1

INTRODUCTION

1.1 GENERAL

The rise of digital technologies has revolutionized numerous industries, yet it has also opened new
avenues for counterfeiting. From fake logos to counterfeit currencies, fraudulent practices
undermine trust and stability. Businesses and financial institutions face significant losses due to
counterfeit products and notes, making the development of automated detection systems a priority.
This project aims to address these challenges by integrating detection capabilities for logos and
currencies into a single, user-friendly interface. By leveraging machine learning and Streamlit, the
proposed system provides a scalable solution for real-time predictions.

Counterfeiting is not a new problem, but the scale at which it occurs today is unprecedented. The
widespread use of counterfeit goods not only impacts the reputation of brands but also has broader
economic implications, such as loss of revenue and job cuts in legitimate sectors. Moreover,
counterfeit currency is a direct threat to the financial system of any nation, leading to inflation,
financial instability, and increased law enforcement costs. These consequences highlight the urgent
need for effective countermeasures.

In recent years, advancements in artificial intelligence (AI) have shown significant promise in
addressing counterfeiting issues. AI-based models, especially those using deep learning, excel in
recognizing patterns and anomalies in images, making them highly effective for counterfeit
detection. Combined with user-friendly tools like Streamlit, these technologies make it possible to
create accessible, efficient, and scalable solutions that can be used by individuals and organizations
alike.

The project described in this document is designed to address the challenges posed by
counterfeiting by providing a unified platform for detecting fake logos and counterfeit currencies.
By utilizing state-of-the-art AI models and intuitive interfaces, this project aims to offer a robust
and scalable solution for users across various domains.

1
1.2 PROBLEM STATEMENT

Counterfeiting is a global issue that affects industries, economies, and individuals. In the
case of logos, counterfeit branding undermines the integrity and reputation of businesses,
leading to significant revenue losses and eroding consumer trust. Similarly, counterfeit
currency poses severe challenges to financial systems, leading to increased inflation, loss
of public confidence in monetary policies, and economic instability. The following key
problems are associated with counterfeiting:
• Economic Loss: Counterfeit products and currencies cause significant
financial losses to businesses, governments, and individuals. In industries
reliant on brand recognition, fake logos dilute market value and customer
loyalty.
• Legal and Regulatory Challenges: Counterfeiting often involves organized
crime, requiring extensive resources to combat. The legal frameworks in place
to handle these issues are often inadequate or underfunded.
• Time-Consuming and Inefficient Processes: Current detection methods for
counterfeit items are manual and resource-intensive. Human inspection is
prone to errors and lacks scalability.
• Technological Gaps: Existing systems are either too specialized or lack
integration, making them ineffective for handling diverse counterfeit
scenarios. For example, a tool designed for currency verification may not
support logo detection.
• User Accessibility: Many detection systems are expensive and complex,
restricting their use to large organizations with substantial budgets. There is a
need for solutions that are accessible to smaller businesses and individual
users.
This project aims to tackle these issues by developing a cost-effective, scalable, and user-
friendly platform that integrates logo and currency detection capabilities.

2
1.3 EXISTING SYSTEM

The existing methods for counterfeit detection rely on a combination of manual inspection,
specialized hardware, and standalone software solutions. Each of these methods has
limitations that hinder their effectiveness in addressing the broader problem of
counterfeiting.

a. Manual Inspection
Manual inspection is one of the oldest methods for counterfeit detection. This
involves trained personnel examining items for signs of forgery or inconsistency.
While this approach can be effective for small-scale operations, it is inherently slow
and prone to human error. Additionally, manual inspection is not scalable, making
it unsuitable for handling large volumes of counterfeit detection tasks.

b. Specialized Hardware
Specialized devices, such as ultraviolet (UV) light scanners, magnetic ink detectors,
and microprinting analyzers, are commonly used for currency verification. These
tools are effective in detecting specific security features embedded in genuine
currency notes. However, they are expensive, limited in scope, and require regular
maintenance. Furthermore, they do not address the problem of counterfeit logos or
other forms of counterfeiting.

c. Standalone Software Solutions


Software-based solutions often focus on specific aspects of counterfeiting, such as
logo recognition or currency verification. While these tools leverage digital
technologies to improve detection accuracy, they are typically standalone
applications that lack integration with other systems. Users must switch between
different tools to handle diverse counterfeit scenarios, leading to inefficiencies and
increased costs.

3
1.3.1 DISADVANTAGES OF EXISTING SYSTEM

The drawbacks of existing systems highlight the need for a more integrated, efficient, and
user-friendly solution. Key drawbacks include:
1. High Cost: Specialized hardware, such as UV scanners and magnetic ink
detectors, and advanced software solutions can be prohibitively expensive,
limiting their accessibility to smaller organizations and individuals.
2. Limited Focus: Current systems are often designed to handle one type of
counterfeit detection, such as currency verification or logo identification. This
narrow focus makes them unsuitable for users dealing with multiple
counterfeit challenges.
3. Manual Dependency: Many systems still rely heavily on manual processes,
which are prone to errors and inconsistencies. Manual inspections are time-
intensive and unsuitable for large-scale operations.
4. Scalability Challenges: Traditional systems struggle to handle high volumes
of counterfeit detection tasks, making them inefficient for industries that
process large quantities of items.
5. Training Requirements: Many tools have steep learning curves, requiring
users to undergo extensive training before they can operate the systems
effectively. This creates additional costs and delays in implementation.
6. Inconsistent Results: Manual processes and some automated systems
produce varying levels of accuracy, often depending on the skill of the
operator or the quality of the data provided.

1.4 PROPOSED SYSTEM

The proposed system integrates the functionalities of counterfeit logo and currency
detection into a unified platform, leveraging state-of-the-art deep learning models and a
user-friendly Streamlit interface. This approach addresses the limitations of existing
systems by providing a cost-effective, scalable, and accessible solution that caters to
diverse counterfeit detection needs.

4
Key Features

1. Unified Platform: The system combines counterfeit logo and currency
detection capabilities within a single interface, eliminating the need for
multiple standalone tools. This ensures consistency and ease of use for various
counterfeit detection tasks.
2. Deep Learning Models: The system employs pre-trained deep learning
models fine-tuned for accuracy and efficiency. These models are capable of
identifying subtle patterns and anomalies in logos and currencies that are often
missed by traditional methods.
3. User-Friendly Interface: The use of Streamlit ensures an intuitive and
interactive interface, allowing users to upload images and receive real-time
results without requiring technical expertise. The simple design enhances
accessibility for non-technical users.
4. Scalability: Designed to handle large volumes of data, the system ensures
efficient processing and quick turnaround times, making it suitable for
industries with high throughput requirements.
5. Cost-Effectiveness: By utilizing open-source tools and pre-trained models,
the system minimizes development and deployment costs. This makes it an
affordable solution for small businesses and individuals.
6. Accessibility: The system is web-based, ensuring that users can access it from
any location with an internet connection. This remote accessibility is
especially beneficial for distributed teams and remote operations.
7. Customizability: The modular design of the system allows for easy
integration of additional features or detection capabilities, enabling it to
evolve with user needs.

Implementation

The implementation of the proposed system involves several key components:

• Model Loading: Pre-trained models for logo and currency detection are
loaded into the backend.
• Image Preprocessing: Uploaded images are resized, normalized, and
converted into a format suitable for model inference.
• Prediction: The models analyze the preprocessed images and provide
predictions on the authenticity of the logos or currencies.
5
• Result Display: Predictions are displayed in an easy-to-understand format on
the Streamlit interface. A minimal sketch of this end-to-end workflow is shown below.
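
To make the workflow concrete, the following is a minimal sketch of how these components
could be wired together in a single Streamlit script. It is an outline only: the model file
names (logo_model.h5, currency_model.h5), the 224x224 input size, and the single sigmoid-style
output are illustrative assumptions rather than the exact artefacts produced in this project.

import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.models import load_model

@st.cache_resource
def get_model(path):
    # Load a pre-trained Keras model once and reuse it across reruns.
    return load_model(path)

def preprocess(image, size=(224, 224)):
    # Resize, normalize pixel values to [0, 1], and add a batch dimension.
    image = image.convert("RGB").resize(size)
    array = np.asarray(image, dtype="float32") / 255.0
    return np.expand_dims(array, axis=0)

st.title("Fake Multiple Object Detection")
task = st.sidebar.radio("Choose a task", ["Detect Logo", "Detect Currency"])
model = get_model("logo_model.h5" if task == "Detect Logo" else "currency_model.h5")

uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded)
    st.image(image, caption="Uploaded image")
    score = float(model.predict(preprocess(image))[0][0])   # single sigmoid output assumed
    label = "Genuine" if score >= 0.5 else "Counterfeit"
    st.success(f"Prediction: {label} (confidence {max(score, 1 - score):.2f})")

Saving such a script as app.py and running streamlit run app.py would launch the interface in a
browser, matching the upload-and-predict flow described above.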

1.4.1 ADVANTAGES OF PROPOSED SYSTEM


The proposed system offers several advantages over existing solutions:
1. Ease of Use: The intuitive interface ensures that users with little to no
technical expertise can use the system effectively. Simple workflows reduce
the learning curve and encourage widespread adoption.
2. High Accuracy: By leveraging state-of-the-art deep learning models, the
system achieves high levels of accuracy in detecting counterfeit logos and
currencies. This minimizes the likelihood of false positives or negatives.
3. Efficiency: Automated processes reduce the time and effort required for
counterfeit detection, enabling users to focus on other critical tasks. Real-time
processing ensures rapid feedback on uploaded images.
4. Cost-Effectiveness: The use of open-source tools and pre-trained models
minimizes costs, making the system accessible to a broader audience. This is
especially important for small and medium-sized enterprises.
5. Integrated Solution: The unified platform eliminates the need for multiple
standalone tools, improving efficiency and reducing complexity. Users can
handle diverse counterfeit detection tasks without switching between systems.
6. Scalability: The system is capable of handling large volumes of counterfeit
detection tasks, making it suitable for industries with high throughput
requirements, such as manufacturing and financial institutions.
7. Accessibility: The web-based design ensures that users can access the system
from any device with an internet connection, enabling seamless operation
across geographically dispersed teams.
8. Customizable and Future-Proof: The modular architecture allows for easy
integration of additional features, such as new detection models or support for
additional counterfeit items. This adaptability ensures the system remains
relevant as user needs evolve.

6
CHAPTER 2

LITERATURE SURVEY

2.1 OBJECT DETECTION USING BASIC IMAGE PROCESSING TECHNIQUES

Authors: Rachel Green, James Parker, 2017

Abstract:

The paper explores template matching for detecting brand logos in digital images. It
presents a straightforward approach where a known logo template is matched against
different parts of the input image to find potential matches. The method is easy to
implement but struggles with scalability when logo variations, rotations, or occlusions are
present. The authors suggest enhancements in pre-processing to improve accuracy in
practical scenarios.
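
The approach described above can be sketched in a few lines with OpenCV. This is only an
illustration: the file names and the 0.8 score threshold are assumptions, and the limitations
noted by the authors (scale, rotation, occlusion) still apply.

import cv2

scene = cv2.imread("product_photo.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("brand_logo.png", cv2.IMREAD_GRAYSCALE)

# Slide the template across the scene and record a similarity score at each position.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:   # illustrative threshold
    print(f"Possible logo match at {max_loc} with score {max_val:.2f}")
else:
    print("No confident match found")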

2.2 INTRODUCTION TO IMAGE CLASSIFICATION USING PYTHON

Authors: David Smith, Priya Verma, 2018

Abstract:

This literature survey introduces beginners to image classification using Python libraries
such as OpenCV and Scikit-learn. The authors provide a step-by-step guide to pre-
processing images, extracting features, and applying simple machine learning classifiers
like Support Vector Machines (SVM). The paper focuses on practical applications and is
targeted at students and enthusiasts interested in exploring image classification.

2.3 DETECTING COUNTERFEIT CURRENCY USING HISTOGRAM ANALYSIS

Authors: Johnathan Reed, Angela Lopez, 2016

7
Abstract:

This paper presents a basic counterfeit detection method using histogram analysis. The
authors discuss how analyzing color distributions can help differentiate genuine banknotes
from counterfeit ones. The study highlights the limitations of this approach in dealing with
advanced counterfeit techniques but notes that it can be a helpful tool for initial validation
in low-resource settings.
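
A minimal version of this histogram-based comparison can be sketched with OpenCV as below.
The file names, the 32-bin histogram, and the 0.9 correlation threshold are illustrative
assumptions, not values taken from the paper.

import cv2

def colour_histogram(path, bins=32):
    # Build a normalized 3-D colour histogram over the B, G and R channels.
    image = cv2.imread(path)
    hist = cv2.calcHist([image], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

reference = colour_histogram("genuine_note.jpg")
candidate = colour_histogram("suspect_note.jpg")

# A correlation close to 1.0 indicates similar colour distributions.
similarity = cv2.compareHist(reference, candidate, cv2.HISTCMP_CORREL)
print("Likely genuine" if similarity > 0.9 else "Flag for further inspection")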

2.4 BASIC OBJECT DETECTION USING HAAR CASCADES

Authors: Michael Johnson, Emma White, 2014

Abstract:

The paper outlines the use of Haar cascades for simple object detection tasks. Initially
developed for face detection, the technique can be applied to identify logos and other simple
objects in images. The study discusses the effectiveness of Haar cascades for real-time
applications while pointing out their limitations in handling varying object scales and
complex backgrounds.
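
The Haar-cascade workflow looks roughly as follows in OpenCV. OpenCV ships pre-trained
cascades such as the frontal-face cascade used here; detecting a specific logo would require
training a custom cascade XML file, so the example below is illustrative only.

import cv2

# Load one of the cascades bundled with OpenCV (a logo cascade would be custom-trained).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# detectMultiScale scans the image at several scales and returns bounding boxes.
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    print(f"Detected object at x={x}, y={y}, width={w}, height={h}")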

2.5 LOGO DETECTION USING TEMPLATE MATCHING

Authors: Rachel Green, James Parker, 2017

Abstract:

The paper explores template matching for detecting brand logos in digital images. It
presents a straightforward approach where a known logo template is matched against
different parts of the input image to find potential matches. The method is easy to
implement but struggles with scalability when logo variations, rotations, or occlusions are
present. The authors suggest enhancements in pre-processing to improve accuracy in
practical scenarios.

8
CHAPTER 3

TECHNICAL REQUIREMENTS

3.1 GENERAL

Requirements analysis is a crucial step in the software development lifecycle. It involves
gathering, analyzing, and documenting the needs and expectations of stakeholders. In this
project, the requirements are categorized into hardware, software, functional, and non-
functional specifications to ensure a comprehensive understanding of the system’s scope
and constraints.
General requirements focus on identifying the system's overall purpose, defining the end-
users, and establishing the conditions under which the system will operate. For this project,
the primary aim is to provide an intuitive and efficient tool for detecting counterfeit logos
and currencies. The system should cater to a wide range of users, including businesses, law
enforcement agencies, and financial institutions.
The system must also be user-friendly, ensuring that individuals with minimal technical
knowledge can operate it. Additionally, scalability is a critical consideration, allowing the
solution to handle increasing workloads and adapt to future needs. These general
requirements set the foundation for a robust and reliable system.

3.2 HARDWARE REQUIREMENTS

Hardware requirements specify the physical resources needed to run the system effectively.
These requirements are essential for ensuring optimal performance and stability during
operation.

1. Processor: A multi-core processor is recommended to handle the
computational demands of image processing and model inference. A modern
Intel or AMD processor with at least four cores is sufficient for most use cases.

9
2. Memory (RAM): The system requires a minimum of 8GB of RAM to
efficiently process images and run machine learning models. For larger-scale
deployments, 16GB or more is advisable.
3. Storage: Adequate storage is necessary to store models, datasets, and
temporary files. A solid-state drive (SSD) with a capacity of at least 256GB is
recommended for faster read and write speeds.
4. Graphics Processing Unit (GPU): While not mandatory, a dedicated GPU
can significantly accelerate model training and inference. NVIDIA GPUs with
CUDA support, such as the RTX 3060 or higher, are ideal for deep learning
tasks.
5. Display: A high-resolution display ensures better visualization of results and
user interface elements. A monitor with Full HD (1920x1080) resolution or
higher is recommended.
6. Network: A stable internet connection is required for downloading libraries,
dependencies, and any additional datasets or updates.

These hardware specifications provide the foundation for a smooth and efficient user
experience. The choice of hardware may vary based on the deployment environment and
scale of operations.

3.3 SOFTWARE REQUIREMENTS

The software requirements outline the programs, tools, and frameworks needed to develop
and deploy the system. These requirements ensure compatibility, functionality, and
efficiency.
1. Operating System: The system is compatible with Windows, macOS, and
Linux. A 64-bit operating system is required for optimal performance.
2. Programming Language: Python 3.x is the primary language used for
developing the system. Its simplicity, extensive library support, and strong
community make it ideal for machine learning projects.
3. Frameworks: TensorFlow and Keras are used for building and deploying
deep learning models. These frameworks provide powerful tools for training,
testing, and deploying machine learning algorithms.

10
4. Web Interface: Streamlit is used to create an interactive and user-friendly
web application. It allows seamless integration of backend processes with a
visually appealing frontend.
5. Libraries: Essential Python libraries include NumPy for numerical
computations, Pillow for image processing, and Matplotlib for data
visualization.
6. Development Environment: An Integrated Development Environment
(IDE) such as PyCharm, Visual Studio Code, or Jupyter Notebook is
recommended for writing and testing code.
7. Version Control: Git is used for version control, ensuring that changes to the
codebase are tracked and managed effectively.
8. Dependencies: Additional dependencies, such as TensorFlow Addons and
OpenCV, may be required based on specific functionalities.
By adhering to these software requirements, the system can be developed and deployed
efficiently, meeting the needs of end-users while maintaining high performance and
reliability.
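
As a quick sanity check of this software stack, a short script such as the following can verify
that the main libraries are installed. The list of module names is based on the requirements
above; exact versions are not prescribed here.

import importlib

required = ["tensorflow", "keras", "streamlit", "numpy", "PIL", "matplotlib", "cv2"]
for name in required:
    try:
        module = importlib.import_module(name)
        version = getattr(module, "__version__", "unknown")
        print(f"{name:12s} OK (version {version})")
    except ImportError:
        print(f"{name:12s} MISSING - install it with pip before running the system")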

3.4 FUNCTIONAL REQUIREMENTS

The functional requirements define the specific behaviors and operations of the system.
These requirements ensure that the system performs its intended functions effectively.
1. Image Upload: Users can upload images of logos or currencies through the
web interface.
2. Image Preprocessing: The system automatically preprocesses uploaded
images, including resizing, normalization, and format conversion.
3. Model Prediction: The system uses trained models to analyze images and
determine whether a logo or currency is authentic or counterfeit.
4. Result Display: The system displays the prediction results in a clear and
concise manner, including confidence scores.
5. Error Handling: The system provides error messages for invalid inputs, such
as unsupported file formats or corrupt images.
6. Navigation: The interface allows users to navigate between different
functionalities, such as logo detection and currency detection.

11
These functional requirements ensure that the system delivers a seamless and efficient user
experience, meeting the core objectives of the project.
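
Functional requirements 1, 2 and 5 (upload, preprocessing and error handling) can be combined
into a single validation step, sketched below. The accepted extensions and the 224x224 target
size are illustrative assumptions.

from PIL import Image, UnidentifiedImageError

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}

def load_and_validate(path, size=(224, 224)):
    # Return a preprocessed image, or raise ValueError with a user-readable message.
    if not any(path.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
        raise ValueError("Unsupported file format; please upload a JPG or PNG image.")
    try:
        Image.open(path).verify()                      # detect corrupt files early
        image = Image.open(path).convert("RGB").resize(size)
    except (UnidentifiedImageError, OSError):
        raise ValueError("The uploaded file is corrupt or not a valid image.")
    return image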

3.5 NON-FUNCTIONAL REQUIREMENTS

The non-functional requirements focus on the quality attributes of the system. These
requirements address performance, reliability, usability, and other aspects that enhance the
overall user experience.
1. Performance: The system should process and analyze images within 5
seconds, ensuring real-time predictions.
2. Scalability: The system must be capable of handling increased workloads,
such as higher user traffic or larger datasets.
3. Usability: The interface should be intuitive and easy to navigate, requiring
minimal training for new users.
4. Security: User data and uploaded images should be handled securely, with
appropriate measures to prevent unauthorized access.
5. Maintainability: The codebase should be well-documented and modular,
facilitating easy updates and modifications.
By addressing these non-functional requirements, the system ensures a high-quality user
experience, aligning with industry standards and best practices.

12
CHAPTER-4

SYSTEM DESIGN

4.1 GENERAL

System design is the process of designing the elements of a system such as the architecture,
modules and components, the different interfaces of those components and the data that
goes through that system. System Analysis is the process that decomposes a system into its
component pieces for the purpose of defining how well those components interact to
accomplish the set requirements. The purpose of the System Design process is to provide
sufficient detailed data and information about the system and its system elements to enable
the implementation consistent with architectural entities as defined in models and views of
the system architecture.

Feasibility studies play a crucial role in determining the practicality and viability of a
project. This phase evaluates whether the proposed solution can be successfully developed
and implemented within the given constraints of time, cost, technology, and societal
acceptance. For this counterfeit detection system, feasibility is assessed across economic,
technical, and social dimensions. Each dimension provides insights into the challenges and
benefits associated with the project, ensuring its alignment with stakeholder expectations
and organizational goals.

Three key considerations involved in the feasibility analysis are

• ECONOMICAL FEASIBILITY
• TECHNICAL FEASIBILITY
• SOCIAL FEASIBILITY

13
• ECONOMICAL FEASIBILITY

Economic feasibility focuses on assessing whether the project is financially viable. This
includes evaluating the cost of development, implementation, and maintenance against the
anticipated benefits. For this project, economic feasibility includes the following
considerations:
Development Costs:
• Expenses related to acquiring necessary hardware (e.g., servers, GPUs) and
software (e.g., libraries like TensorFlow).
• Developer salaries and project management costs.
Operational Costs:
• Long-term expenses for server hosting, model updates, and application
maintenance.
• Energy costs associated with running computational tasks.
Benefits:
• Reduced losses for businesses and financial institutions caused by counterfeit
activities.
• Potential revenue from licensing or deploying the system in multiple
organizations.
Cost-benefit analysis suggests that the upfront investment in this system is offset by its
long-term savings and value generation. The modular nature of the system further ensures
scalability, allowing organizations to expand its use with minimal additional investment.

• TECHNICAL FEASIBILITY

Technical feasibility examines the technical resources required for the project, including
the availability of skills, tools, and technologies. This feasibility ensures that the proposed
solution is technically implementable using current tools and expertise. Key aspects
include:
Resource Availability:

14
• The project leverages open-source technologies such as Python, TensorFlow,
and Streamlit, which are widely supported and documented.
• Pre-trained models can be fine-tuned for specific applications, reducing the
need for extensive data collection and training resources.
Infrastructure Requirements:
• Minimal hardware requirements, with optional GPU acceleration for faster
processing.
• Cloud-based deployment options for scalability and remote access.
Technical Challenges:
• Ensuring the robustness and accuracy of predictions across diverse datasets.
• Implementing a user-friendly interface that seamlessly integrates with the
backend.
With the project’s reliance on proven technologies and frameworks, technical feasibility is
high. Continuous testing and iterative development will further mitigate potential risks,
ensuring a reliable and efficient system.

• SOCIAL FEASIBILITY

Social feasibility evaluates the acceptance and impact of the project on society. A successful
project must not only address a critical issue but also align with societal values and
expectations. For this counterfeit detection system, social feasibility is analyzed as follows:
Public Acceptance:
• Counterfeit detection is a universally recognized problem, and a solution
addressing it will likely be well-received by businesses, financial institutions,
and the public.
• The project’s emphasis on accessibility through a user-friendly interface
enhances its appeal to non-technical users.
Ethical Considerations:
• Ensuring data privacy and security when handling user-uploaded images.
• Avoiding misuse of the system for unethical purposes, such as targeting
legitimate entities.
Societal Benefits:

15
• Reducing economic losses from counterfeit activities strengthens societal
trust and stability.
• Enhancing public awareness about counterfeiting through educational
components integrated into the system.
By addressing these aspects, the project demonstrates a strong alignment with societal goals
and values, enhancing its feasibility and long-term impact.

4.2 SYSTEM ARCHITECTURE

Figure 4.1: Architecture Diagram

4.3 UML DESIGN

Unified Modelling Language (UML) is a general purpose modelling language. The main
aim of UML is to define a standard way to visualize the way a system has been designed.
It is quite similar to blueprints used in other fields of engineering.

16
UML is not a programming language; rather, it is a visual language. UML diagrams are used to
portray the behaviour and structure of a system, and UML helps software engineers,
business stakeholders and system architects with modelling, design and analysis.

UML was adopted by the Object Management Group (OMG) in 1997 and has been managed by OMG ever
since. The International Organization for Standardization (ISO) published UML as an approved
standard in 2005. UML has been revised over the years and is reviewed periodically.

UML combines best techniques from data modelling (entity relationship diagrams),
business modelling (work flows), object modelling, and component modelling. It can be
used with all processes, throughout the software development life cycle, and across
different implementation technologies.

UML has synthesized the notations of the Booch method, the Object-modelling technique
(OMT) and Object-oriented software engineering (OOSE) by fusing them into a single,
common and widely usable modelling language. UML aims to be a standard modelling
language which can model concurrent and distributed systems.

The Unified Modelling Language (UML) is used to specify, visualize, modify, construct
and document the artifacts of an object-oriented software intensive system under
development. UML offers a standard way to visualize a system's architectural blueprints,
including elements such as:
▪ Actors
▪ Business processes
▪ (logical) Components
▪ Activities
▪ Programming Language Statements
▪ Database Schemes
▪ Reusable software components.
➢ Complex applications need collaboration and planning from multiple teams and hence
require a clear and concise way to communicate amongst them.

➢ Businessmen do not understand code, so UML becomes essential for communicating to
non-programmers the essential requirements, functionalities and processes of the system.
17
➢ A lot of time is saved down the line when teams are able to visualize processes, user
interactions and static structure of the system.

➢ UML is linked with object-oriented design and analysis. UML makes use of elements and
forms associations between them to form diagrams. Diagrams in UML can be broadly
classified as structural diagrams and behavioural diagrams.

The Primary goals in the design of the UML are as follows


➢ Provide users a ready-to-use, expressive visual modelling Language so that they can
develop and exchange meaningful models.

➢ Provide extendibility and specialization mechanisms to extend the core concepts.

➢ Be independent of particular programming languages and development processes.

➢ Provide a formal basis for understanding the modelling language.

➢ Encourage the growth of the OO tools market.

➢ Support higher-level development concepts such as collaborations, frameworks, patterns
and components.

➢ Integrate best practices.

4.3.1 USE-CASE DIAGRAM

A Use Case is a kind of behavioural classifier that represents a declaration of an offered
behaviour.
Each use case specifies some behaviour, possibly including variants that the subject can
perform in collaboration with one or more actors. Use cases define the offered behaviour
of the subject without reference to its internal structure.
These behaviours, involving interactions between the actor and the subject, may result in
changes to the state of the subject and communications with its environment. A use case
can include possible variations of its basic behaviour, including exceptional behaviour and
error handling.
The primary components of a use case diagram include:

18
• Actor

An actor is an external entity that interacts with the system. Actors can be people, other
systems, or even hardware devices. Actors are represented as stick figures or simple icons.
They are placed outside the system boundary, typically on the left or top of the diagram.
• Use Case

A use case represents a specific functionality or action that the system can perform in
response to an actor's request. Use cases are represented as ovals within the system
boundary. The name of the use case is written inside the oval.
• Association Relationship

An association relationship is a line connecting an actor to a use case. It represents the
interaction or communication between an actor and a use case.
The arrowhead indicates the direction of the interaction, typically pointing from the actor
to the use case.

Figure 4.2: Use-Case-Diagram

19
4.3.2 CLASS DIAGRAM

A class diagram in Unified Modelling Language (UML) is a type of structural diagram that
represents the static structure of a system by depicting the classes, their attributes, methods,
and the relationships between them. Class diagrams are fundamental in object-oriented
design and provide a blueprint for the software's architecture.
Here are the key components and notations used in a class diagram:

• Class

A class represents a blueprint for creating objects. It defines the properties (attributes) and
behaviours (methods) of objects belonging to that class. Classes are depicted as rectangles
with three compartments: the top compartment contains the class name, the middle
compartment lists the class attributes, and the bottom compartment lists the class methods.
• Attributes

Attributes are the data members or properties of a class, representing the state of objects.
Attributes are shown in the middle compartment of the class rectangle and are typically
listed as a name followed by a colon and the data type (e.g., name: String).
• Methods

Methods represent the operations or behaviours that objects of a class can perform.
Methods are listed in the bottom compartment of the class rectangle and include the
method name, parameters, and the return type (e.g., calculateCost(parameters): ReturnType).

• Visibility Notations

Visibility notations indicate the access level of attributes and methods. The common
notations are:
+ (public): Accessible from anywhere.

- (private): Accessible only within the class.

# (protected): Accessible within the class and its subclasses.

~ (package or default): Accessible within the package.

20
• Associations
Associations represent relationships between classes, showing how they are connected.
Associations are typically represented as a solid line connecting two classes. They may
have multiplicity notations at both ends to indicate how many objects of each class can
participate in the relationship (e.g., 1..*).
Aggregations and Compositions: Aggregation and composition are special types of
associations that represent whole-part relationships. Aggregation is denoted by a hollow
diamond at the whole (aggregate) end, while composition is represented by a filled diamond.
Aggregation implies a weaker relationship, where parts can exist independently, while
composition implies a stronger relationship, where parts are dependent on the whole.
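
As an illustration of how these notations map onto code, the hypothetical class below
corresponds to a class-diagram box with one public attribute, one private attribute, one
public method and one private method. Python does not enforce visibility; a leading
underscore is the usual convention for the "-" (private) notation.

class CounterfeitDetector:
    def __init__(self, model_path, threshold=0.5):
        self.model_path = model_path       # + modelPath : String   (public attribute)
        self._threshold = threshold        # - threshold : float    (private by convention)

    def predict(self, image):              # + predict(image) : String
        score = self._run_model(image)
        return "Genuine" if score >= self._threshold else "Counterfeit"

    def _run_model(self, image):           # - runModel(image) : float (internal helper)
        # Model inference would be implemented here; 0.0 is a placeholder value.
        return 0.0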

Figure 4.3: Class diagram

21
4.3.3 ACTIVITY DIAGRAM

An activity diagram portrays the control flow from a start point to a finish point showing
the various decision paths that exist while the activity is being executed.
The diagram might start with an initial activity such as "User uploads an image." This
activity triggers the system to preprocess the image and run the trained model, initiating
the detection process.
Next, the diagram could depict a decision point where the system determines whether the
uploaded logo or currency note is genuine. If the item is classified as genuine, the diagram
would proceed to the activity "Display result: Genuine." Conversely, if it is classified as
counterfeit, the diagram might show alternative paths such as displaying a warning or
prompting the user to upload a clearer image.
The key components and notations used in an activity diagram:

• Initial Node

An initial node, represented as a solid black circle, indicates the starting point of the activity
diagram. It marks where the process or activity begins.
• Activity/Action

An activity or action represents a specific task or operation that takes place within the
system or a process. Activities are shown as rectangles with rounded corners. The name of
the activity is placed inside the rectangle.
• Control Flow Arrow

Control flow arrows, represented as solid arrows, show the flow of control from one activity
to another. They indicate the order in which activities are executed.
• Decision Node
A decision node is represented as a diamond shape and is used to model a decision point or
branching in the process. It has multiple outgoing control flow arrows, each labelled with
a condition or guard, representing the possible paths the process can take based on
condition.

• Merge Node
A merge node, also represented as a diamond shape, is used to show the merging of multiple
control flows back into a single flow.
22
• Fork Node
A fork node, represented as a black bar, is used to model the parallel execution of multiple
activities or branches. It represents a point where control flow splits into multiple
concurrent paths.
• Join Node
A join node, represented as a black bar, is used to show the convergence of multiple control
flows, indicating that multiple paths are coming together into a single flow.
• Final Node
A final node, represented as a solid circle with a border, indicates the end point of the
activity diagram. It marks where the process or activity concludes.

Figure 4.4: Activity diagram.


23
CHAPTER-5

TECHNOLOGY DESCRIPTION

5.1 WHAT IS PYTHON

Python is a high-level, versatile, and widely-used programming language known for its
simplicity, readability, and vast range of applications. Created by Guido van Rossum and
first released in 1991, Python has grown to become one of the most popular programming
languages in the world. It is used in various fields, including web development, data
science, artificial intelligence, machine learning, automation, scientific computing, and
more.

Python was developed by Guido van Rossum during the late 1980s and was released as
Python 1.0 in 1991. The language was created to provide a simpler, more readable
alternative to complex programming languages. Van Rossum wanted Python to be fun to
use, and he named it after the British comedy show "Monty Python’s Flying Circus," not
the snake. Over the years, Python has undergone several major updates:

• Python 1.0 (1991): The initial release with basic features like exception handling
and functions.
• Python 2.x (2000): Introduced more robust features but faced criticism for
compatibility issues.
• Python 3.x (2008 - present): A major overhaul focusing on cleaner syntax and
eliminating outdated features from Python 2.x.

Today, Python 3.x is the standard, and Python 2.x has reached its end of life.

Python’s simple syntax mimics natural language, making it easy to learn, even for
beginners. Unlike other programming languages that have complex syntax rules, Python’s
code is more readable and concise. For example, a basic Python code to print "Hello,
World!" is as simple as writing print("Hello, World!"). This simplicity allows developers
24
to focus more on solving problems rather than dealing with complicated syntax. Python is
an interpreted language, meaning that code is executed line by line. This feature makes
debugging easier since errors are detected at runtime. Additionally, Python supports both
object-oriented and functional programming paradigms, allowing developers to choose the
style that best fits their project.

One of the reasons why Python is widely used is its cross-platform compatibility. Python
is platform-independent, meaning that Python programs can run on various operating
systems such as Windows, Linux, and macOS without requiring significant modifications.
This makes it a great choice for developing applications that need to run across different
environments. Python comes with a rich standard library that provides built-in functions
and modules to handle various tasks such as file I/O, regular expressions, web development,
data manipulation, and scientific computing. In addition to the standard library, there are
thousands of third-party libraries and frameworks available via PyPI (Python Package
Index), which extend Python’s capabilities even further.

Another feature that makes Python popular is its dynamic typing. In Python, you don’t need
to declare variable types explicitly. The type of a variable is determined at runtime based
on the value assigned. For example, you can assign an integer to a variable and then change
its value to a string without any issues. This feature makes coding faster and more flexible.
Moreover, Python automatically handles memory management, meaning developers don’t
need to manually allocate or free memory, reducing the chances of memory-related bugs.

Python is also used in automation and scripting. It can automate repetitive tasks through
scripts, making it ideal for tasks like web scraping, file handling, and data entry. For
instance, you can write a Python script to rename files in a directory based on a specific
pattern. In game development, Python can be used to create games using libraries like
Pygame. Additionally, Python is popular in the scientific and academic community for
tasks like simulations, mathematical computations, and research. Popular libraries for
scientific computing include SciPy, SymPy, and Jupyter Notebooks.

The versatility of Python makes it suitable for a wide range of industries, from web
development and data science to automation and cybersecurity. Python is actively
maintained and continuously improved, with new libraries and frameworks regularly
introduced to keep up with technological advancements. The language is easy to read and

25
write, making it accessible to developers of all skill levels. Python’s extensive libraries and
frameworks simplify the development process, allowing developers to build complex
applications with less code.

In conclusion, Python is a powerful, versatile, and beginner-friendly programming


language that is widely used across various industries. Its simplicity, extensive libraries,
and large community support make it a popular choice for web development, data science,
AI, and more. Whether you are a beginner or an experienced programmer, learning Python
opens up numerous opportunities in the tech world. Its continuous development and
adoption ensure that it will remain relevant for years to come.

5.2 ADVANTAGES OF PYTHON

Python is one of the most popular programming languages in the world today, and its
advantages make it a top choice for developers across various domains. Its simplicity,
versatility, and extensive ecosystem have contributed to its widespread adoption. Below,
we will explore the numerous advantages of Python in detail, covering aspects like ease of
learning, flexibility, community support, and its relevance across industries such as web
development, data science, artificial intelligence, and more.

Easy to Use:
One of Python’s greatest advantages is its simplicity and readability. Python’s syntax
is designed to be intuitive and mirrors natural human language, making it one of the
easiest programming languages to learn. This makes Python an ideal choice for
beginners and allows developers to focus more on problem-solving rather than
dealing with complex syntax rules. For example, a simple Python program to print
“Hello, World!” looks like this:
print("Hello, World!")
In other languages like Java or C++, a simple print statement might require more lines
of code and more complex syntax. Python’s ease of use reduces the learning curve
for new programmers and helps experienced developers write code more efficiently.

26
High Level Language:
Python is a high-level language, meaning it abstracts many complex details of the
computer’s operations, such as memory management and data storage. This
abstraction allows developers to focus on coding without worrying about low-level
operations like memory allocation or garbage collection.
As a high-level language, Python allows developers to write code that is closer to
human language and easier to understand. This makes it easier to debug, maintain,
and collaborate on projects, even for large teams.

Versatility:
Python is a highly versatile language that can be used across various domains and
industries. It is not limited to one specific area of development but can be applied to:
• Web Development: With frameworks like Django and Flask, Python makes
it easy to build robust, scalable web applications.
• Data Science and Machine Learning: Python is the preferred language for
data scientists and machine learning engineers due to its powerful libraries
like Pandas, NumPy, Scikit-learn, and TensorFlow.
• Automation and Scripting: Python can be used to automate repetitive tasks,
such as web scraping, file management, and testing.
• Game Development: Python libraries like Pygame make it possible to
develop 2D games with ease.
• Cybersecurity: Python is widely used in cybersecurity for building security
tools, penetration testing, and malware analysis.
• Scientific Computing: Libraries like SciPy and SymPy make Python suitable
for scientific research and mathematical computations.
This versatility makes Python a valuable skill for developers in various fields.

Extensive Standard Library:


Python comes with a comprehensive standard library that provides a wide range of
modules and functions for different tasks. The standard library includes tools for:
• File I/O operations

27
• Regular expressions
• Web development
• Data manipulation
• Scientific computing
• Cryptography
Developers can accomplish many tasks without the need for additional libraries,
reducing the need to write code from scratch. The standard library is well-
documented and maintained, making it easy for developers to find the tools they need
for their projects.

Cross-Platform Compatibility:
Python is a platform-independent language, meaning that Python programs can run
on various operating systems without modification. Whether you are using Windows,
Linux, or macOS, your Python code will work seamlessly across all platforms.
This cross-platform compatibility makes Python a great choice for developers
working on projects that need to be deployed on different systems. It also allows
teams to collaborate more effectively, regardless of the operating systems they are
using.

Community Support:
Python has one of the largest and most active developer communities in the world.
This community support is a significant advantage because it means there are
countless resources available for learning and problem-solving.
Whether you need help with a specific error or want to learn best practices for writing
Python code, you can find tutorials, documentation, forums, and Q&A platforms like
Stack Overflow to assist you.
The active community also means that Python libraries and frameworks are
continuously updated and maintained, ensuring that the language stays relevant and
up to date with the latest trends in technology.

Support for Multi-Programming Paradigms:


Python supports multiple programming paradigms, including:
• Object-Oriented Programming (OOP)

28
• Functional Programming
• Procedural Programming
This flexibility allows developers to choose the programming style that best fits their
project’s needs. For example, you can use object-oriented programming to create
reusable classes and objects or switch to functional programming for more concise
and readable code.

Dynamic Programming:
Python uses dynamic typing, which means you don’t need to declare variable types
explicitly. The type of a variable is determined at runtime based on the value assigned.
For example:
x = 10 # Integer
x = "Hello" # String
This feature makes Python more flexible and reduces the amount of boilerplate code
that developers need to write. It also speeds up the development process.

Automatic Memory Management:


Python has built-in automatic memory management, which means developers don’t
need to manually allocate or free memory. Python’s garbage collector automatically
handles memory allocation and deallocation, reducing the risk of memory leaks.
This feature simplifies development and allows developers to focus more on writing
efficient code rather than managing memory.

Security:
Python is considered a secure programming language with features that help prevent
vulnerabilities. It has built-in tools for encrypting data, managing authentication, and
preventing common security threats like SQL injection and cross-site scripting
(XSS).
Popular security-focused libraries include:
• Cryptography: For encryption and decryption.
• Hashlib: For hashing passwords and securing sensitive data.
• Flask-Security: For managing user authentication and authorization.

29
Real-World Use Cases:
Python’s advantages make it a preferred language for several real-world applications.
Some notable use cases include:
• YouTube: Built using Python for its flexibility and scalability.
• Instagram: Uses Django to handle its web application backend.
• Google: Uses Python for various internal tools and systems.
• Spotify: Relies on Python for data analysis and backend services.
• Netflix: Uses Python for automation and data science.
These real-world examples demonstrate Python’s reliability and scalability in
handling complex, high-traffic applications.


In conclusion, Python’s numerous advantages, such as ease of learning, versatility,


extensive libraries, and community support, make it one of the best programming
languages for both beginners and experienced developers. Its applications span various
industries, ensuring its continued relevance and growth in the tech world

5.3 LIBRARIES

Python is a powerful programming language with a vast ecosystem of libraries and
frameworks that enhance its functionality and make it one of the most versatile languages
in the tech world. Libraries in Python are collections of pre-written code that developers
can use to perform various tasks efficiently, from web development to data analysis,
machine learning, and more. These libraries simplify complex tasks and enable developers
to build robust applications with minimal effort.
30
Numpy (Numerical Python):

Description: NumPy is one of the fundamental libraries in Python for numerical
computing. It provides support for multi-dimensional arrays and matrices along with a
collection of mathematical functions to perform operations on these arrays.

Features:

• Provides the ndarray object, which is a fast and memory-efficient multi-dimensional array.
• Supports mathematical operations like addition, subtraction, multiplication,
and division on arrays.
• Includes built-in functions for linear algebra, Fourier transforms, and random
number generation.
• Optimized for performance, making it faster than Python lists for numerical
computations.
• Used as a base library for many other libraries like Pandas and SciPy.
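
A minimal sketch of the operations listed above (array creation, element-wise arithmetic, and a linear-algebra routine); the values are illustrative only:

import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])   # 2x2 ndarray
b = a * 2 + 1                             # element-wise multiplication and addition
det_a = np.linalg.det(a)                  # built-in linear algebra
print(b)
print(f"det(a) = {det_a:.2f}")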

Pandas:

Description: Pandas is a powerful data manipulation and analysis library. It provides data structures like DataFrame and Series that make it easier to work with structured data; a brief example follows the feature list below.

Features:

• Provides two primary data structures: Series (1D) and DataFrame (2D).
• Supports data manipulation tasks like filtering, sorting, grouping, and
merging.
• Handles missing data gracefully with built-in functions to fill or drop missing
values.
• Provides functions for reading and writing data from various formats like
CSV, Excel, SQL, and JSON.
• Integrates seamlessly with other libraries like Matplotlib and NumPy for data
visualization and numerical computations.
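
A brief, hedged example of the DataFrame operations listed above (construction, filling missing values, and filtering); the column names and numbers are invented for illustration:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "item": ["logo_a", "logo_b", "logo_c"],
    "score": [0.91, np.nan, 0.42],
})
df["score"] = df["score"].fillna(df["score"].mean())   # handle missing data
high = df[df["score"] > 0.5]                           # filter rows above a threshold
print(high)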

Matplotlib:

Description: Matplotlib is a popular library for creating static, interactive, and animated visualizations in Python. It is widely used for data visualization tasks; see the sketch after the feature list below.

Features:

• Provides a wide range of plotting functions like line plots, bar charts, scatter
plots, and histograms.
• Supports customization of plots, including titles, labels, legends, and colors.
• Can generate plots in various formats like PNG, JPG, SVG, and PDF.
• Works well with other libraries like NumPy and Pandas.
• Supports interactive visualizations in Jupyter Notebooks.
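
A minimal plotting sketch corresponding to the features above; the data points and the output file name are placeholders:

import matplotlib.pyplot as plt

epochs = [1, 2, 3, 4, 5]
accuracy = [0.62, 0.74, 0.81, 0.86, 0.89]

plt.plot(epochs, accuracy, marker="o", label="validation accuracy")
plt.title("Training Progress")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.savefig("training_progress.png")   # other formats such as SVG or PDF also work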

TensorFlow:

Description: TensorFlow is an open-source library developed by Google for machine learning and deep learning applications. A minimal sketch follows the feature list below.

Features:

• Supports building and training machine learning models.
• Provides tools for neural network construction, training, and deployment.
• Offers high-level APIs like Keras for quick model prototyping.
• Supports distributed computing, enabling large-scale machine learning.
• Can be deployed on various platforms, including mobile and embedded
devices.
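
A hedged sketch of the high-level Keras API bundled with TensorFlow; the layer sizes and input shape are illustrative and do not reflect the project's actual architecture:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()   # prints the layer-by-layer structure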

Scikit-learn:

Description: Scikit-learn is a popular machine learning library that provides simple and efficient tools for data mining and analysis. An illustrative example follows the feature list below.

Features:

• Provides algorithms for classification, regression, clustering, and
dimensionality reduction.
• Supports model evaluation, cross-validation, and hyperparameter tuning.
• Offers utilities for preprocessing, feature selection, and feature extraction.
• Compatible with NumPy and Pandas.
• Widely used in academic research and industry projects.
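
An illustrative, hedged example of the classification and evaluation utilities described above, using the bundled iris toy dataset rather than the project's own data:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a toy dataset and hold out part of it for evaluation
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a classifier and report accuracy on the held-out data
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(f"Accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")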

Keras:

Description: Keras is a high-level neural networks API that runs on top of TensorFlow. It simplifies the process of building and training deep learning models; a short functional-API sketch follows the feature list below.

Features:

• Provides an easy-to-use API for building neural networks.
• Supports both sequential and functional API models.
• Allows for quick prototyping of deep learning models.
• Compatible with TensorFlow, Theano, and CNTK backends.
• Provides utilities for data preprocessing, image augmentation, and model
evaluation.
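
A hedged sketch of the functional API mentioned in the features above; the input shape matches the 150x150 RGB images used elsewhere in this report, while the layer sizes are placeholders:

from tensorflow.keras import Model, layers

inputs = layers.Input(shape=(150, 150, 3))            # 150x150 RGB input
x = layers.Flatten()(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)    # binary real/fake output

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()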

5.4 DISADVANTAGES OF PYTHON

While Python is one of the most popular programming languages due to its simplicity,
versatility, and extensive libraries, it is not without its drawbacks. Below is a detailed
exploration of the disadvantages of Python, categorized under various side headings to
provide a comprehensive understanding.
1. Performance Issues: Python is an interpreted language, meaning that code is executed
line by line rather than being compiled into machine code beforehand. This results in slower
execution speed compared to languages like C, C++, or Java. The slower runtime can be a
critical issue for applications that require high performance, such as gaming engines, real-
time applications, or complex algorithms.
For example, in scenarios involving large-scale data processing or real-time analytics,
Python's performance can become a bottleneck. Developers may need to use more efficient
languages like C or C++ for such tasks.

2. High Memory Consumption: Python’s dynamic typing system can lead to higher
memory usage compared to statically typed languages. Variables in Python do not need
explicit type declarations, which makes the language more flexible but also less memory-
efficient.
For instance, if an application requires handling a vast amount of data, Python’s memory
consumption could impact the system’s performance. This is especially relevant in
applications like web servers, where efficient memory usage is crucial.

3. Not Suitable for Low-Level Programming: Python is a high-level language designed for
readability and simplicity. It abstracts many low-level operations that are essential in
system-level programming, such as memory management and direct interaction with
hardware.
For applications that require direct hardware interaction, such as embedded systems, device
drivers, or real-time systems, languages like C and C++ are preferred. Python’s abstraction
makes it less suitable for such use cases.

4. Mobile Development Limitations: Python is not commonly used for mobile app
development. While there are frameworks like Kivy and BeeWare that support building
mobile applications with Python, they are not as mature or popular as frameworks like
Flutter, React Native, or Swift for iOS.
The lack of native support and slower performance makes Python less appealing for mobile
development. Companies that prioritize mobile-first strategies often prefer other languages
like Java, Kotlin, or Swift.

5. Database Access Limitations: Python’s database access layers are less robust compared
to Java Database Connectivity (JDBC) or Open Database Connectivity (ODBC). While
Python provides libraries like SQLAlchemy and Django ORM for database interactions,
they may not offer the same level of performance and efficiency as tools in other languages.
For applications with intensive database operations, developers might face performance
issues and may need to resort to more efficient solutions or languages.

CHAPTER 6

IMPLEMENTATION

6.1 METHODOLOGY

The implementation of the Fake detection system involves a systematic approach that
ensures efficiency, accuracy, and scalability. This methodology is divided into multiple
stages, each contributing to the overall functionality of the system. Below is a detailed
explanation of each stage:

1. Data Collection:

Data collection is a crucial step in developing an effective counterfeit detection system. High-quality and diverse datasets are required to train the pre-trained models and fine-tune them for accurate predictions. This step includes:

• Sourcing Data: Images of real and counterfeit logos and currencies are
collected from public datasets, industry sources, and custom-built datasets.
• Data Categorization: Collected images are categorized into appropriate
classes (e.g., real vs. counterfeit).
• Data Cleaning: Irrelevant or low-quality images are removed to ensure
consistency and relevance.
• Data Augmentation: Techniques like flipping, rotation, and scaling are
applied to increase dataset diversity and improve model generalization.

2. Loading Pre-Trained Model:

To expedite development and leverage existing research, the project employs pre-trained
deep learning models. These models are loaded into the system using TensorFlow/Keras.
Key tasks include:

• Loading the model architecture and weights.
• Configuring the models for inference by freezing certain layers if needed.
• Preparing the models for integration with the preprocessing and interface components.

3. Image Preprocessing

Uploaded images must be pre-processed to ensure compatibility with the model's input
requirements. Preprocessing involves:

• Resizing: Images are resized to a fixed resolution (e.g., 150x150 pixels) to standardize input dimensions.
• Normalization: Pixel values are normalized to a range of 0 to 1, improving
model performance and stability.
• Conversion: Images are converted to NumPy arrays to facilitate numerical
processing.

4. Integration with Streamlit Interface

The user interface is designed using Streamlit, offering an interactive and user-friendly
experience. Key functionalities include:

• A file uploader for selecting images to analyze.
• Buttons to trigger processing and display predictions.
• A clear layout for presenting results.

5. Model Inference

Once pre-processed, the image is passed through the loaded model to generate predictions.
This step involves:

• Feeding the pre-processed image into the model.
• Interpreting the model's output probabilities to classify the image as "Real" or
"Fake."
• Applying predefined thresholds (e.g., a probability above 0.6 indicates a
"Real" prediction).

6. Displaying Results

Predictions and associated confidence scores are displayed on the Streamlit interface. This
ensures transparency and provides users with clear, actionable information. Results are
formatted for readability.

6.2 SAMPLE CODE

#Streamlit interface for app.py

import streamlit as st
import subprocess

def load_script(script_name):
    """Run the selected script."""
    subprocess.run(["streamlit", "run", script_name])

# Main interface
st.title("Integrated Streamlit Interface")

st.sidebar.title("Navigation")
selection = st.sidebar.radio("Select an option", ["Detect Logo", "Detect Currency"])

if selection == "Detect Logo":
    st.write("You selected **Detect Logo**.")
    st.write("Click the button below to load the logo detection interface.")
    if st.button("Load Logo Interface"):
        load_script("logo.py")

if selection == "Detect Currency":
    st.write("You selected **Detect Currency**.")
    st.write("Click the button below to load the currency detection interface.")
    if st.button("Load Currency Interface"):
        load_script("currency.py")

#train the model
import os
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam

# Cell 2: Data Preprocessing
# Define paths to your dataset
train_dir = r'C:\\Users\\kambh\\OneDrive\\Desktop\\FakeCurrency\\train'
validation_dir = r'C:\\Users\\kambh\\OneDrive\\Desktop\\FakeCurrency\\validation'

# Image data generator with augmentation for training
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest'
)

# Image data generator for validation (without augmentation)
validation_datagen = ImageDataGenerator(rescale=1./255)

# Data generators
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary'
)

validation_generator = validation_datagen.flow_from_directory(
    validation_dir,
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary'
)

# Cell 3: Building the Model
# Load the pre-trained VGG16 base (without its top classification layers)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))

# Add custom layers on top of the base model
x = base_model.output
x = Flatten()(x)
x = Dense(512, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)

# Define the model
model = Model(inputs=base_model.input, outputs=predictions)

# Freeze the layers of the base model
for layer in base_model.layers:
    layer.trainable = False

# Compile the model
model.compile(optimizer=Adam(learning_rate=0.0001), loss='binary_crossentropy',
              metrics=['accuracy'])

# Cell 4: Training the Model
history = model.fit(
    train_generator,
    epochs=5,
    validation_data=validation_generator
)

# Cell 5: Evaluating the Model
loss, accuracy = model.evaluate(validation_generator)
print(f'Validation Accuracy: {accuracy*100:.2f}%')

# Cell 6: Saving the Model
model.save('fake_logo_detector_1.keras')
# Import necessary libraries

#logo
import streamlit as st
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load the trained model
model = load_model('fake_logo_detector_1.keras')

# Function to preprocess the image
def preprocess_image(image):
    # Convert the image to an array
    img_array = img_to_array(image)
    # Expand dimensions to match the input shape (1, 150, 150, 3)
    img_array = np.expand_dims(img_array, axis=0)
    # Normalize the image array
    img_array /= 255.0
    return img_array

# Function to predict if a logo is real or fake
def predict_logo(image):
    # Preprocess the image
    img_array = preprocess_image(image)
    # Make prediction
    prediction = model.predict(img_array)
    # Convert prediction to class label
    if prediction[0][0] > 0.6:
        return "Real"
    else:
        return "Fake"

# Streamlit Interface
st.title("Fake Logo Detector")
st.write("Upload a logo image to check if it's real or fake.")

# File uploader
uploaded_file = st.file_uploader("Choose a logo image...", type=["jpg", "png", "jpeg"])

if uploaded_file is not None:
    # Load and display the image
    image = load_img(uploaded_file, target_size=(150, 150))
    st.image(image, caption="Uploaded Image", use_container_width=True)  # updated parameter

    # Analyze and predict
    st.write("Analyzing...")
    result = predict_logo(image)
    st.write(f"The logo is predicted to be: **{result}**")

#Currency
import streamlit as st
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load the trained model
model = load_model('fake_logo_detector_2.keras')

# Function to preprocess the image
def preprocess_image(image):
    # Convert the image to an array
    img_array = img_to_array(image)
    # Expand dimensions to match the input shape (1, 150, 150, 3)
    img_array = np.expand_dims(img_array, axis=0)
    # Normalize the image array
    img_array /= 255.0
    return img_array

# Function to predict if a currency note is real or fake
def predict_currency(image):
    # Preprocess the image
    img_array = preprocess_image(image)
    # Make prediction
    prediction = model.predict(img_array)
    # Convert prediction to class label
    if prediction[0][0] > 0.6:
        return "Real"
    else:
        return "Fake"

# Streamlit Interface
st.title("Fake Currency Detector")
st.write("Upload a currency image to check if it's real or fake.")

# File uploader
uploaded_file = st.file_uploader("Choose a currency image...", type=["jpg", "png", "jpeg"])

if uploaded_file is not None:
    # Load and display the image
    image = load_img(uploaded_file, target_size=(150, 150))
    st.image(image, caption="Uploaded Image", use_container_width=True)  # updated parameter

    # Analyze and predict
    st.write("Analyzing...")
    result = predict_currency(image)
    st.write(f"The currency is predicted to be: **{result}**")

CHAPTER 7

TESTING

7.1 GENERAL

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies and/or a finished product. It is the
process of exercising software with the intent of ensuring that the Software system meets
its requirements and user expectations and does not fail in an unacceptable manner. There
are various types of tests. Each test type addresses a specific testing requirement.

Testing the fake object detection system, which integrates logo and currency verification models behind a single Streamlit interface, is crucial to ensure its functionality, accuracy, and reliability. The testing process involves several stages, including unit testing, integration testing, and validation of the model's predictions.

7.2 TYPES OF TESTING

Figure 7.1 Types of Testing

7.3 TEST CASES

Test Case Id | Description | Input | Expected Output | Actual Output | Status
TC01 | Validate model with authentic logo | An image of an authentic logo | Shows the input logo as ‘real’ | Shows the input logo as ‘real’ | Success
TC02 | Validate model with counterfeit logo | An image of a counterfeit logo | Shows the input logo as ‘fake’ | Shows the input logo as ‘fake’ | Success
TC03 | Validate model with authentic currency | An image of authentic currency | Shows the input currency as ‘real’ | Shows the input currency as ‘real’ | Success
TC04 | Validate model with counterfeit currency | An image of counterfeit currency | Shows the input currency as ‘fake’ | Shows the input currency as ‘fake’ | Success
TC05 | Verify that the image upload functionality only allows files in PNG/JPG formats | An image in PNG/JPG format | Allows only jpg/png formats | Allows only jpg/png formats | Success

Figure 7.2: Test cases.

CHAPTER 8

RESULTS

8.1 SCREEN SHOTS

Figure 8.1: Integrated Streamlit Interface

Figure 8.2: Detect Logo Interface.

Figure 8.3: Logo Analysis Result- Real.


Figure 8.4: Detect Logo Interface

Figure 8.5: Logo Analysis Result- Fake.


Figure 8.6: Detect Currency Interface.

Figure 8.7: Currency Analysis Result- Real.


Figure 8.8: Detect Currency Interface

Figure 8.9: Currency Analysis Result- Fake.

CHAPTER – 9

FUTURE SCOPE

9.1 FUTURE SCOPE

Expanding the scope and enhancing the system’s functionality can make it even more
impactful and versatile. Here are some areas of future development:

o Mobile Application Development:


▪ Creating a mobile app version would allow users to perform counterfeit
detection directly from their smartphones, enhancing accessibility and
usability.

o Support for Additional Counterfeit Items:


▪ Expanding the detection capabilities to include other counterfeit goods like
tickets, passports, or certificates.

o Expanded Currency Detection:


▪ Supporting more currencies and denominations would make the system relevant in international contexts.

o Enhanced Image Processing Techniques:


▪ Implementing advanced preprocessing methods, such as noise reduction and
image enhancement, to improve detection accuracy.

o Educational Module:
▪ Incorporating an educational section to inform users about counterfeiting and
its impacts can raise awareness and promote vigilance.

o Enhanced Model Training:
▪ Collecting more diverse datasets to train the models would improve their
accuracy and robustness across varied counterfeit scenarios.

o Integration with IoT Devices:


▪ Linking the system with IoT devices, such as scanners or smart cameras, can
automate the detection process in real-time environments like banks or retail
stores.

o Multi-Language Support:
▪ Incorporating support for multiple languages ensures a broader audience can
use the system, making it accessible in regions with different linguistic
preferences.

o Security Features:
▪ Adding authentication and encryption ensures that uploaded data is secure and
protected from unauthorized access.

o Collaboration with Industry Experts:


▪ Partnering with experts in counterfeiting and fraud prevention to refine the models and processes.

o Gamification Features:
▪ Introducing gamified elements like quizzes or achievements to encourage
users to learn about and actively combat counterfeiting.

CHAPTER-10

CONCLUSION

10.1 CONCLUSION

Counterfeiting poses a growing threat to industries and economies worldwide, causing financial losses, diminishing brand integrity, and destabilizing economic systems. The
increasing sophistication of counterfeiters has made it imperative to develop advanced
technological solutions to detect fake objects efficiently. The "Fake Object Detection"
project addresses these challenges by providing a robust, scalable platform for detecting
counterfeit logos and currencies through the integration of machine learning models and a
user-friendly web interface.

The proposed system stands out for its ability to leverage Convolutional Neural Networks
(CNNs) to analyze images for anomalies and classify them as either genuine or fake. This
approach significantly improves the accuracy of counterfeit detection compared to
traditional methods. The system’s core architecture revolves around a seamless workflow,
where users can upload images through a web interface, which are then processed by pre-
trained models. The results are displayed in an easy-to-understand format, ensuring
accessibility for both technical and non-technical users.

The system also addresses several drawbacks associated with existing counterfeit detection
methods. Traditional methods, such as manual inspection and the use of specialized
hardware like ultraviolet (UV) scanners, are often time-consuming, expensive, and prone
to human error. These methods also lack scalability and integration, making them
inefficient for handling diverse counterfeit scenarios. In contrast, the proposed system
integrates both logo and currency detection capabilities into a single platform, thereby
streamlining operations and improving detection accuracy.

The project's technical feasibility is evident from its use of widely available open-source
tools such as Python, TensorFlow, and Keras. These technologies enable efficient model
training, deployment, and scalability. The system employs CNNs to identify intricate
patterns and anomalies in images, ensuring that it can differentiate between authentic and
counterfeit items. Furthermore, the use of pre-trained models reduces the time and
resources required for development, making the solution more practical for real-world
applications.

In conclusion, the "Fake Multiple Object Detection" project is a significant step forward in
the fight against counterfeiting. By combining advanced machine learning techniques with
a user-friendly interface, the project offers a practical solution for detecting fake logos and
currencies. Its emphasis on accessibility, accuracy, and scalability ensures that the system
can be adopted by a wide range of users, from small businesses to large financial
institutions. With continuous development and improvements, this system has the potential
to become a vital tool in the global effort to combat counterfeiting, thereby contributing to
safer and more trustworthy marketplaces.

CHAPTER-11

REFERENCES

11.1 REFERENCES

1. Redmon, J., & Farhadi, A. (2018). YOLOv3: An Incremental Improvement. arXiv.
   Link: https://arxiv.org/abs/1804.02767

2. Simonyan, K., & Zisserman, A. (2015). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv.
   Link: https://arxiv.org/abs/1409.1556

3. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
   Link: https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html

4. Zhang, Y., Wang, S., & Wu, X. (2020). A Survey of Counterfeit Detection Using Machine Learning Techniques. IEEE Access, 8, 120399-120413.
   Link: https://ieeexplore.ieee.org/document/9143504

5. Nguyen, T., & Pham, N. (2019). An Overview of Object Detection Methods in Computer Vision. Springer.
   Link: https://link.springer.com

6. Chollet, F. (2018). Deep Learning with Python. Manning Publications.
   Link: https://www.manning.com/books/deep-learning-with-python

7. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press.
   Link: https://www.deeplearningbook.org

8. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep Learning. Nature, 521(7553), 436-444.
   Link: https://www.nature.com/articles/nature14539

9. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems (NIPS), 1097-1105.

10. Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems (NIPS).
    Link: https://arxiv.org/abs/1506.01497
