
A Mini Project report submitted on

CURRENCY RECOGNITION SYSTEM


USING IMAGE PROCESSING
in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
By
Sampath Kumar, Praveen, Satya Veni
(218P1A0520, 218P1A0532, 218P1A0529)

Under the Guidance of


Madeeha Samreen
ASSISTANT PROFESSOR

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


AAR MAHAVEER INSTITUTE OF ENGINEERING & TECHNOLOGY
(Approved by AICTE, Affiliated to JNT University, Hyderabad)
Vyasapuri, Bandlaguda, Post: Keshavgiri, Hyderabad - 500045

2024-2025
AAR MAHAVEER INSTITUTE OF ENGINEERING & TECHNOLOGY
(Approved by AICTE, Affiliated to JNT University, Hyderabad)
Vyasapuri, Bandlaguda, Post: Keshavgiri, Hyderabad - 500045

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

CERTIFICATE

This is to certify that the mini project report entitled “CURRENCY RECOGNITION

SYSTEM USING IMAGE PROCESSING”, being submitted by Sampath Kumar, Praveen,

and Satya Veni (Reg. Nos. 218P1A0520, 218P1A0532, 218P1A0529) in partial fulfillment

for the award of the Degree of Bachelor of Technology in Computer Science and Engineering

to the Jawaharlal Nehru Technological University, is a record of bonafide work carried out by

them under my guidance and supervision. The results embodied in this mini project report have

not been submitted to any other University or Institute for the award of any Degree or Diploma.

Signature of Internal Guide                      Signature of Head of the Department

Name: Madeeha Samreen                            Name: Kurmaiha
Designation: Assistant Professor                 Designation: Associate Professor

AAR MAHAVEER INSTITUTE OF ENGINEERING & TECHNOLOGY
(Approved by AICTE, Affiliated to JNT University, Hyderabad)
Vyasapuri, Bandlaguda, Post: Keshavgiri, Hyderabad - 500045

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

DECLARATION

We, SAMPATH KUMAR, PRAVEEN, and SATYA VENI, bearing Roll Nos. 218P1A0520,
218P1A0532, and 218P1A0529, bonafide students of AAR MAHAVEER Institute of
Engineering and Technology, hereby declare that the mini project titled “CURRENCY
RECOGNITION SYSTEM USING IMAGE PROCESSING”, submitted in partial fulfillment
of the B.Tech degree course of Jawaharlal Nehru Technological University, is our original
work carried out in the year 2024 under the guidance of Ms. Madeeha Samreen, Assistant
Professor, Department of Computer Science & Engineering, and that it has not previously
formed the basis for the award of any other degree or diploma.

Date: SAMPATH KUMAR - 218P1A0520

PRAVEEN - 218P1A0532

SATYA VENI - 218P1A0529

ACKNOWLEDGEMENT

We would like to express our sincere gratitude to our guide, Ms. Madeeha Samreen,
Assistant Professor, Department of Computer Science & Engineering, whose knowledge
and guidance have motivated us to achieve goals we never thought possible. She has
consistently been a source of motivation, encouragement, and inspiration, and the time
spent working under her supervision has truly been a pleasure.

We take it as a great privilege to express our heartfelt gratitude to Kurmaiha,
Head of the Department, for his valuable support, and to all senior faculty members
of the CSE department for their help during our course. Thanks also to the programmers
and non-teaching staff of the CSE department, A.A.R.M.I.S.T.

We thank our principal, Dr. Punumalli Babu, for extending his utmost support and
co-operation in providing all the provisions, and the Management for providing excellent
facilities to carry out our project work.

Finally, special thanks to our parents and sisters for their support and encouragement
throughout our lives and this course, and to all our friends and well-wishers for their
constant support.

SAMPATH KUMAR - 218P1A0520


PRAVEEN - 218P1A0532
SATYA VENI - 218P1A0529


ABSTRACT

There are more than 200 different currencies in use across the countries of the
world. Currency recognition technology aims to detect and extract the visible
as well as hidden marks on paper currency for efficient classification. A currency
recognition system is implemented to reduce human effort by automatically
recognizing the monetary value of currency without human supervision. Currency
recognition is a difficult task, so it is very important to select the right features
and a proper algorithm for this purpose. The goal is to design a simple and efficient
algorithm that is useful for the maximum number of currencies, since writing a
separate program for each currency is a tedious job. The aim of the project is to
recognize currencies, not to authenticate them.


TABLE OF CONTENTS
1. Introduction 7
1.1 Existing System.............................................................10
1.2 Proposed System….......................................................12
1.3 System Architecture.......................................................13
2. Literature Survey 15
3. System Requirements 18
3.1 Software Requirements….................................................18

3.2 Hardware Requirements….............................................19


3.3 Software Tools Used…..................................................21
3.3.1 Python….....................................................................21
3.3.2 Google Colab.............................................22
3.3.3 VS Code......................................................................22
4 System Design 23
4.1 System Architecture.........................................................23
4.2 Data Flow Diagram..........................................................25
4.3 UML Diagrams……………………………….… 27
5 Testing and Results 29
5.1 Levels Of Testing……….………………………… 29
5.2 Implementation................................................................32
5.3 Output…...........................................................................37
6 Conclusion and Future Scope 38
6.1 Conclusion…………………………………………. 38
6.2 Future Scope……………………………………….. 38
References 39

LIST OF FIGURES

Figure 4.1 System Design 21


Figure 4.2 Use Case Diagram 23
Figure 4.3 Class Diagram 23
Figure 4.4 Sequence Diagram 23
Figure 4.5 Collaboration Diagram 23

1. INTRODUCTION

 The Currency Recognition System using Image Processing is designed to


automatically identify and classify currency denominations from digital images.
With the increasing demand for automation and the need for accessible
solutions, such a system has applications in various fields including banking,
retail, and assisting visually impaired individuals. Traditional currency
recognition methods rely on human verification, which can be time-consuming
and prone to errors. This project aims to leverage modern image processing and
machine learning techniques to recognize different currencies with high
accuracy, regardless of the image's lighting, angle, or condition of the currency.

 The system processes currency images through various stages: acquisition,


preprocessing, feature extraction, and classification. By applying methods like
edge detection, keypoint extraction, and advanced algorithms such as
Convolutional Neural Networks (CNNs), it can effectively differentiate between
different denominations and types of currency. This project not only simplifies
tasks such as currency exchange or ATM verification but also paves the way for
real-time, accessible tools for users with visual impairments.

 In today's fast-paced world, the ability to recognize and process currency


efficiently is vital for various applications, including retail, banking, and
automated vending systems. With the advent of technology, traditional
methods of currency validation and recognition have evolved into
sophisticated systems that leverage image processing and machine learning
techniques. This project aims to develop a robust currency recognition system
utilizing image processing to accurately identify and classify various
denominations of banknotes.

Importance of Currency Recognition
 Currency recognition is crucial not only for businesses that deal with cash
transactions but also for enhancing the accessibility of financial services.
Individuals with visual impairments, for example, benefit significantly from
currency recognition systems that provide them with the ability to identify
notes independently. Furthermore, in regions where counterfeit currency
poses a significant risk, reliable recognition systems can play a critical role in
fraud prevention, thereby bolstering public confidence in the financial system.
Technological Foundations

 The backbone of this project is image processing, a field that focuses on the
manipulation and analysis of images through computational techniques. By
employing algorithms that enhance image quality, extract meaningful
features, and classify data, we can create a system capable of discerning
various currency notes under differing conditions.
 Key technologies involved include:
 Computer Vision: Techniques that allow machines to interpret and process
visual data from the world, enabling them to recognize objects—such as
currency notes—based on their visual characteristics.
 Machine Learning: Algorithms that enable the system to learn from data and
improve over time. By training models on labeled currency images, the system
can become proficient in recognizing different denominations.
 Deep Learning: A subset of machine learning that uses neural networks to
analyze complex patterns in data. Convolutional Neural Networks (CNNs), in
particular, have shown remarkable success in image recognition tasks and are
instrumental in developing an efficient currency recognition system.

Project Overview
 The primary goal of this project is to design an automated currency
recognition system that can accurately identify various denominations of
banknotes using image processing techniques. The system will encompass
several stages:
 Image Acquisition: Capturing high-quality images of currency notes using
cameras or smartphones.
 Pre-processing: Enhancing the images to improve recognition accuracy,
including noise reduction and contrast enhancement.
 Feature Extraction: Identifying key features in the currency images that
differentiate one denomination from another.
 Classification: Employing machine learning algorithms to categorize the
currency notes based on the extracted features.
 User Interface: Developing an intuitive interface that provides real-time
recognition feedback, enabling users to interact seamlessly with the system.

Challenges and Solutions


The development of a currency recognition system presents several challenges:

 Variability in Currency Design: Different countries and regions have unique


currency designs, and even within a single currency, there may be multiple
versions. This variability necessitates a diverse dataset for training the
recognition model.
 Lighting Conditions: The performance of the system can be significantly
affected by changes in lighting. Implementing robust pre-processing techniques
can help mitigate these effects.
 Counterfeit Detection: Beyond simple recognition, the system could
incorporate features that detect counterfeit notes, adding an additional layer of
utility.

1.1 EXISTING SYSTEM
 Several existing systems and technologies address currency recognition,
leveraging image processing, machine learning, and other techniques to
automate the identification and verification of banknotes. Here’s an overview
of some notable systems and approaches currently in use:
 1. Commercial Currency Validators
 Many banking and retail industries rely on commercial currency validation
machines. These devices utilize optical sensors and sophisticated algorithms
to detect and authenticate banknotes. Key features include:
 Infrared and UV Scanning: These machines often employ infrared and
ultraviolet light to detect security features in banknotes, such as watermarks and
security threads.
 Size and Shape Measurement: They assess the dimensions and shape of notes
to verify authenticity against predefined standards.
 While effective, these systems are typically limited to specific denominations.
2. Mobile Applications
 A growing number of mobile applications leverage smartphone cameras for
currency recognition, catering to various user needs, such as aiding visually
impaired individuals. Examples include:

 Currency Recognition Apps: Apps like “Cash Reader” and “Seeing AI” utilize
image processing techniques to recognize and announce currency
denominations.
 User-Friendly Interfaces: These apps are designed with accessibility in mind,
featuring voice recognition and audio feedback to assist users in identifying
notes.
 While convenient, the performance of these applications can vary
significantly based on the quality of the camera and the lighting conditions.

3. Open Source Projects


 There are several open-source initiatives aimed at currency recognition that
provide valuable insights and frameworks for developing custom solutions.
Notable examples include:

 OpenCV: This widely-used library offers numerous functions for image
processing, making it a solid foundation for developing currency recognition
systems. It includes algorithms for feature detection, image enhancement, and
contour analysis.
 TensorFlow and PyTorch: These deep learning frameworks are often
employed in projects that utilize neural networks for image classification,
allowing developers to train models for recognizing different currency
denominations.

4. Research Prototypes
 Numerous academic research projects focus on currency recognition using
advanced techniques, such as:
 Convolutional Neural Networks (CNNs): Researchers have demonstrated the
effectiveness of CNNs for currency classification tasks, achieving high accuracy
rates by training models on large datasets of currency images.
 Hybrid Approaches: Some studies combine traditional image processing
techniques with machine learning models to improve robustness against
variations in lighting, orientation, and quality of images.
 These prototypes contribute to the academic understanding of image
recognition and offer frameworks that can be adapted for practical
applications.
5. Counterfeit Detection Systems
 Beyond simple recognition, some systems focus on identifying counterfeit
notes. These typically incorporate:
 Multi-Spectral Imaging: By capturing images at different wavelengths, these
systems can analyze hidden security features that are not visible under normal
lighting conditions.
 Machine Learning for Anomaly Detection: Advanced algorithms can be
trained to recognize subtle differences between authentic and counterfeit notes
based on a variety of features.
 Such systems are essential for financial institutions and retailers to mitigate
the risks associated with counterfeit currency.
Limitations of Existing Systems
 Despite advancements, current systems face several limitations:
 Adaptability: Many commercial validators are designed for specific currencies
and may not be easily updated for new designs or denominations.

 Environmental Sensitivity: Mobile applications and other systems can struggle
with variations in lighting, angles, and image quality, affecting recognition
accuracy.
 Cost: High-quality commercial currency validation machines can be
prohibitively expensive for smaller businesses or individuals.

1.2 PROPOSED SYSTEM

 The proposed currency recognition system aims to develop an efficient,


accurate, and user-friendly solution for identifying and classifying various
denominations of banknotes using advanced image processing and machine
learning techniques. The system will address the limitations of existing
solutions while providing additional features to enhance user experience and
accuracy.
Objectives

 High Accuracy: Achieve high recognition accuracy for different currency


denominations under various conditions.
 Real-Time Processing: Enable real-time currency recognition using mobile
devices or standalone systems.
 User-Friendly Interface: Design an intuitive interface that provides clear
feedback and interaction for users, including accessibility features for visually
impaired individuals.
 Adaptability: Allow the system to easily update and incorporate new currency
designs and denominations.
 Counterfeit Detection: Implement features for detecting counterfeit currency,
enhancing security.

1.3 System Architecture
 The proposed system can be divided into several key components:
 Image Acquisition
 Hardware: Utilize high-resolution cameras (smartphones or dedicated cameras)
to capture images of banknotes from various angles and lighting conditions.
 Input Method: Allow users to take photos or use a live camera feed for real-
time recognition.
 Pre-processing Module
 Image Enhancement: Apply techniques such as histogram equalization, noise
reduction, and contrast adjustment to improve image quality.
 Binarization: Convert the images to binary format to facilitate contour
detection and feature extraction.
 Feature Extraction
 Contour Detection: Use edge detection algorithms (e.g., Canny) to identify the
contours of the banknotes.
 Keypoint Detection: Implement feature detection algorithms like SIFT, SURF,
or ORB to extract distinctive features from the currency images.
 Template Matching: Create templates for each denomination and employ
template matching to find similarities.
 Classification Module
 Machine Learning Model: Train a Convolutional Neural Network (CNN) on a
diverse dataset of currency images to classify different denominations
accurately.
 Transfer Learning: Utilize pre-trained models to enhance performance,
especially when the dataset is limited.
 Multi-class Classification: Ensure the model can distinguish between multiple
currencies and denominations.
 Post-processing
 Verification: Cross-check the recognized denomination against a database to
ensure accuracy and flag potential mismatches.
 User Feedback: Provide real-time audio or visual feedback indicating the
recognized currency, with options for user confirmation.
 User Interface
 Design: Create an intuitive interface that is easy to navigate, featuring clear
buttons for image capture, currency recognition, and settings.
 Accessibility Features: Incorporate voice feedback and haptic responses for
users with visual impairments.
 Counterfeit Detection
 Anomaly Detection Algorithms: Implement machine learning algorithms to
analyze features that may indicate counterfeit notes.
 Multi-Spectral Analysis: Explore the possibility of using multi-spectral
imaging to detect hidden security features in banknotes.
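The pre-processing and feature-extraction stages outlined above can be sketched in code. The snippet below is a minimal NumPy-only illustration (grayscale conversion, contrast stretching, and global-threshold binarization) standing in for the OpenCV calls a real implementation would likely use (cv2.cvtColor, cv2.equalizeHist, cv2.threshold); the function name and the toy image are illustrative, not part of the report.

```python
import numpy as np

def preprocess(rgb):
    """Minimal pre-processing sketch: grayscale -> contrast stretch -> binarize."""
    # Grayscale via the standard luminance weights (cv2.cvtColor equivalent).
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # Contrast stretch to the full 0..255 range (a crude stand-in for equalization).
    lo, hi = gray.min(), gray.max()
    stretched = (gray - lo) / max(hi - lo, 1e-9) * 255.0
    # Global threshold at the mean intensity (stand-in for cv2.threshold / Otsu).
    binary = (stretched > stretched.mean()).astype(np.uint8)
    return stretched, binary

# A synthetic 4x4 "note": a bright region on a dark background.
img = np.zeros((4, 4, 3))
img[1:3, 1:3] = 200.0
stretched, binary = preprocess(img)
print(int(binary.sum()))  # -> 4 foreground pixels
```

A production pipeline would replace each step with its OpenCV counterpart and add noise reduction (e.g., Gaussian blur) before thresholding.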

Implementation Plan

 Data Collection: Gather a diverse dataset of currency images, including


different denominations, conditions, and angles.
 Model Training: Develop and train the machine learning model using the
collected dataset, fine-tuning it for optimal accuracy.
 System Integration: Integrate all components into a cohesive system, ensuring
seamless communication between the modules.
 Testing and Evaluation: Conduct extensive testing under various conditions to
evaluate the system's performance and accuracy.
 User Testing: Gather feedback from potential users, particularly individuals
with visual impairments, to refine the user interface and functionality.
 Deployment: Launch the system on mobile platforms (iOS and Android) and
potentially as a standalone application.
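As a baseline for the model-training step, the template-matching idea mentioned in Section 1.3 can be prototyped before committing to a CNN. The sketch below stores one mean-image template per denomination and classifies a query by smallest mean-squared error; the function names and toy data are illustrative assumptions, not the report's actual implementation.

```python
import numpy as np

def build_templates(images, labels):
    """Average all training images of each denomination into one template."""
    templates = {}
    for label in set(labels):
        samples = [img for img, l in zip(images, labels) if l == label]
        templates[label] = np.mean(samples, axis=0)
    return templates

def classify(query, templates):
    """Return the denomination whose template has the smallest MSE to the query."""
    return min(templates, key=lambda l: np.mean((query - templates[l]) ** 2))

# Toy 2x2 "notes": denomination 10 is dark, denomination 50 is bright.
rng = np.random.default_rng(0)
tens = [np.full((2, 2), 30.0) + rng.normal(0, 2, (2, 2)) for _ in range(5)]
fifties = [np.full((2, 2), 200.0) + rng.normal(0, 2, (2, 2)) for _ in range(5)]
templates = build_templates(tens + fifties, [10] * 5 + [50] * 5)
print(classify(np.full((2, 2), 205.0), templates))  # -> 50
```

Such a baseline gives a quick accuracy floor against which the CNN's gains can be measured during evaluation.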

Expected Outcomes

 Increased Accuracy: Improved recognition rates compared to existing systems,


even under challenging conditions.
 Enhanced User Experience: A user-friendly interface that is accessible to a
wide range of users, including those with disabilities.
 Adaptability to New Currencies: A system capable of quickly integrating new
currency designs and denominations, ensuring long-term relevance.
 Counterfeit Detection: An additional layer of security that helps users identify
potentially counterfeit notes.

2. LITERATURE SURVEY

The field of currency recognition using image processing and machine learning has
evolved significantly, influenced by advancements in computer vision, artificial
intelligence, and mobile technology. This literature survey examines key studies,
methodologies, and technologies that have contributed to the development of
effective currency recognition systems.
1. Image Processing Techniques
 Many researchers have focused on the application of traditional image
processing techniques for currency recognition.
 Feature Extraction: Studies such as those by Bansal and Choudhary (2017)
emphasize the importance of feature extraction methods, including edge
detection and contour analysis, using algorithms like Canny edge detection and
Hough transforms. These techniques are fundamental in identifying the shape
and characteristics of banknotes.
 Template Matching: Works like those by Wu et al. (2018) utilize template
matching for recognizing specific banknote features. By creating templates of
currency notes, these systems can compare captured images to templates to
determine the denomination.

2. Machine Learning Approaches


 The integration of machine learning has revolutionized currency recognition,
allowing for improved accuracy and adaptability.
 Convolutional Neural Networks (CNNs): Research by Ahmed et al. (2019)
demonstrated the effectiveness of CNNs in image classification tasks, achieving
high accuracy in recognizing various currency denominations. CNNs excel in
learning spatial hierarchies of features, making them particularly suitable for
image recognition.
 Transfer Learning: Several studies, including those by Zhang et al. (2020),
have explored transfer learning to leverage pre-trained models on large datasets,
thereby improving the performance of currency recognition systems even with
limited training data. This approach reduces the computational burden and
accelerates the training process.

3. Mobile Currency Recognition Applications
 With the proliferation of smartphones, numerous mobile applications have
emerged for currency recognition.
 Accessibility: Applications like "Cash Reader" and "Seeing AI" utilize
smartphone cameras to identify banknotes and provide audio feedback for
visually impaired users. Research by Pohl et al. (2021) highlights the
importance of designing user-friendly interfaces that cater to diverse user needs.
 Challenges in Mobile Recognition: Studies have identified challenges such as
variations in lighting and image quality that affect recognition accuracy.
Solutions involve implementing robust pre-processing techniques and
optimizing algorithms for real-time performance (Hussain et al., 2022).

4. Counterfeit Detection
 Counterfeit currency poses a significant challenge, prompting research into
systems that can distinguish between authentic and fake notes.
 Multi-Spectral Imaging: Research by Sinha and Chowdhury (2019) explored
the use of multi-spectral imaging to identify security features in banknotes
that are not visible under standard lighting. This approach enhances the ability
to detect counterfeit notes effectively.
 Anomaly Detection Algorithms: Some studies have focused on employing
machine learning algorithms for anomaly detection, identifying subtle
differences between authentic and counterfeit currency based on various
features (Rai et al., 2023).
5. Real-World Implementations
 Several projects and systems have been developed and tested in real-world
scenarios, showcasing the practical applications of currency recognition
technology.
 Commercial Solutions: Companies like Glory Global Solutions and Crane
Payment Innovations have developed advanced currency validation machines
that utilize a combination of optical recognition and machine learning
techniques to authenticate banknotes in retail and banking environments.
 Academic Projects: Various academic institutions have implemented prototype
systems that integrate different methodologies for currency recognition,
contributing to the body of knowledge in this field. These projects often focus
on addressing specific challenges, such as adaptability to new currency designs and
improving recognition rates under varying conditions.

6. Summary of Findings
 The literature indicates a trend toward integrating advanced machine learning
techniques, particularly deep learning, into currency recognition systems. The
use of CNNs and transfer learning has significantly improved accuracy and
adaptability. However, challenges remain, including the need for robust
systems that can operate in diverse environments and the ongoing threat of
counterfeit currency.

3. SYSTEM REQUIREMENTS

3.1 SOFTWARE REQUIREMENTS

1. Development Environment

 Programming Language:
1. Python
 IDE or Text Editor:
1. PyCharm
2. Jupyter Notebook
3. Visual Studio Code

2. Image Processing Libraries

 OpenCV
 Pillow

3. Machine Learning Libraries

 TensorFlow
 Keras
 scikit-learn

4. Data Handling and Visualization

 NumPy
 Pandas
 Matplotlib
 Seaborn

5. Database Management

 SQLite or PostgreSQL
 SQLAlchemy

6. Version Control and Collaboration

 Git
 GitHub or GitLab

7. Deployment Tools

 Flask or Django
 Docker

8. Testing Frameworks

 pytest
 unittest

9. Accessibility Features

 Speech Recognition Libraries

10. Documentation Tools

 Sphinx

3.2 HARDWARE REQUIREMENTS


1. Camera

 High-Resolution Camera:
o Type: A smartphone camera (with at least 12 MP) or a dedicated high-
resolution webcam/digital camera.
o Purpose: To capture clear images of banknotes for accurate recognition.
The camera should support good low-light performance to handle various
lighting conditions.

2. Processing Unit

 Computer/Server Specifications:
o CPU: Multi-core processor (e.g., Intel i5 or better) for efficient data
processing and model inference.
o RAM: At least 8 GB, preferably 16 GB or more, to handle large datasets
and enable smooth multitasking during image processing and model
training.
o GPU: For training deep learning models, a dedicated GPU (e.g., NVIDIA
GeForce GTX 1060 or better) is recommended to significantly speed up
training times.

3. Storage

 Hard Drive/SSD:
o Type: Solid State Drive (SSD) is preferred for faster read/write speeds,
especially during model training and data loading.
o Capacity: At least 256 GB, though 512 GB or more is recommended to
accommodate datasets, models, and application files.

4. User Interface

 Touchscreen Monitor (Optional):


o For an interactive user interface, a touchscreen monitor can enhance user
experience, especially for standalone applications.
 Audio Output Device:
o Speakers or headphones for audio feedback, particularly beneficial for
users with visual impairments.

5. Power Supply

 UPS (Uninterruptible Power Supply):


o To ensure the system remains operational during power outages,
especially for deployed systems in commercial settings.

6. Network Requirements

 Internet Connection:
o A stable internet connection may be required for cloud-based model
training, data storage, or updates (if applicable).

7. Mobile Device (if applicable)

 Smartphone/Tablet:
o If developing a mobile application, ensure compatibility with iOS (iPhone
7 or later) and Android devices (Android 8.0 or later).
o Features: Devices should have a good camera, sufficient RAM (at least 4
GB), and adequate processing power for real-time image recognition.

3.3 SOFTWARE TOOLS USED

3.3.1 Python

Python is an interpreted, object-oriented, high-level programming language with dynamic


semantics. Python is simple and easy to learn. Python supports modules and packages,
which encourages program modularity and code reuse. The Python interpreter and the
extensive standard library are available in source or binary form without charge for all
major platforms, and can be freely distributed. Often, programmers fall in love with
Python because of the increased productivity it provides. Since there is no compilation
step, the edit-test-debug cycle is incredibly fast.
Debugging Python programs is easy: a bug or bad input will never cause a
segmentation fault. Instead, when the interpreter discovers an error, it raises an
exception. When the program doesn't catch the exception, the interpreter prints a stack
trace. A source-level debugger allows inspection of local and global variables,
evaluation of arbitrary expressions, setting breakpoints, stepping through the code a
line at a time, and so on. The debugger is written in Python itself, testifying to Python's
introspective power. The proposed system works on Python 3.5 and above.
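The exception behaviour described above is easy to demonstrate. The fragment below feeds bad input to int() and shows the interpreter raising a catchable ValueError instead of crashing; the helper name is illustrative.

```python
def parse_denomination(text):
    """Parse a denomination string, returning None for invalid input."""
    try:
        return int(text)
    except ValueError:
        # Bad input raises an exception, not a segmentation fault; we recover here.
        return None

print(parse_denomination("500"))  # -> 500
print(parse_denomination("50O"))  # -> None (letter O, not zero)
```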

3.3.2 Google Colab

Google Colab is a free, cloud-based platform for data science and machine learning
development. It provides a Jupyter Notebook interface, 12 hours of runtime per
session, 50 GB of disk space, and pre-installed libraries like TensorFlow and
PyTorch. With GPU acceleration and real-time collaboration, Colab enables fast
prototyping, easy sharing, and cost-effective development. Ideal for data science,
machine learning, and deep learning projects, Colab integrates seamlessly with
Google Drive, allowing users to access and share notebooks effortlessly. Its
limitations include a 12-hour session limit and restricted disk space, but overall,
Google Colab streamlines data science workflows, making it an indispensable tool
for professionals and enthusiasts alike.

3.3.3 VS Code

Visual Studio Code is a lightweight but powerful source code editor which runs on
your desktop and is available for Windows, macOS and Linux. It comes with built-in
support for JavaScript, TypeScript and Node.js and has a rich ecosystem of extensions
for other languages (such as C++, C#, Java, Python, PHP, Go) and runtimes (such
as .NET and Unity). Visual Studio Code is a freeware source-code editor made by
Microsoft for Windows, Linux and macOS.

4. SYSTEM DESIGN

The system design for a currency recognition system involves outlining the
architecture, components, data flow, and user interactions. This section provides a
comprehensive overview of the system's design, focusing on its modular structure
and integration of various technologies.
4.1. System Architecture
 The proposed currency recognition system consists of several key components
organized into a layered architecture:
 User Interface Layer
1. Mobile Application/Web Interface: A user-friendly interface that allows
users to capture images of banknotes, view recognition results, and
receive feedback.
 Application Logic Layer
1. Image Acquisition Module: Captures images from the camera.
2. Pre-processing Module: Enhances image quality through noise
reduction, resizing, and binarization.
3. Feature Extraction Module: Identifies key features using edge detection
and keypoint extraction techniques.
4. Classification Module: Utilizes machine learning models to classify the
currency based on extracted features.
 Data Storage Layer
1. Database: Stores user data, recognized currency information, and model
parameters.
 Integration Layer
1. APIs: Facilitates communication between the user interface and backend
processing modules.

FIGURE 4.1: System Design

Component Descriptions
1. User Interface Layer
 Functionality: Provides an interactive platform for users to upload images,
view results, and interact with the system.
 Design Considerations:
1. Accessibility features (e.g., voice feedback).
2. Simple navigation and clear instructions for capturing images.
2. Image Acquisition Module
 Components:
1. Camera interface (smartphone or webcam).
 Functionality: Captures images of banknotes in various orientations and
lighting conditions.
3. Pre-processing Module
 Processes:
1. Image Enhancement: Adjusts brightness, contrast, and sharpness.
2. Noise Reduction: Applies filters to minimize noise.
3. Binarization: Converts the image to a binary format for easier analysis.
 Tools: OpenCV functions for image manipulation.
4. Feature Extraction Module
 Techniques:
1. Edge Detection: Uses Canny edge detection to find the edges of the
banknote.
2. Contour Detection: Identifies contours that define the note’s boundaries.
3. Keypoint Extraction: Utilizes algorithms like SIFT or ORB to extract
distinctive features.
 Output: A set of features that represent the captured banknote.

5. Classification Module
 Machine Learning Model:
1. CNN Architecture: A Convolutional Neural Network trained to
recognize and classify different banknote denominations.
 Training: The model is trained on a large dataset of currency images to learn
features associated with each denomination.
 Output: The predicted denomination of the currency based on the extracted
features.
6. Counterfeit Detection Module
 Methods:
1. Analyzes security features (e.g., watermarks, UV patterns) to detect
counterfeit notes.
2. Uses anomaly detection techniques to identify discrepancies in features.
 Output: A determination of whether the note is genuine or counterfeit.
7. Data Storage Layer
 Database:
1. SQLite or PostgreSQL: For storing user data, recognition results, and
model metadata.
 Structure:
1. Tables for user information, currency details, and model performance
metrics.
8. Integration Layer
 APIs:
1. RESTful APIs to facilitate communication between the frontend and
backend components.
2. Handles requests for image processing, recognition results, and user data
retrieval.

4.2 Data Flow Diagram


 A simple data flow diagram (DFD) can illustrate the interaction between
components:
 User captures an image using the mobile app or web interface.
 Image is sent to the Image Acquisition Module.
 Pre-processing occurs, enhancing the image quality.
 Features are extracted from the pre-processed image.
 The extracted features are sent to the Classification Module for recognition.
 The recognition result is returned to the user interface, along with any
counterfeit detection findings.

FIGURE 4.2: Use Case Diagram

User Interaction Workflow

 Image Capture: User opens the app and captures a photo of the banknote.
 Processing: The app processes the image, enhancing it and extracting features.
 Recognition: The system classifies the note and checks for counterfeits.
 Feedback Display: The result (denomination and counterfeit status) is
displayed on the interface.
 User Confirmation: Users can confirm the recognition or provide feedback if
the result is incorrect.

Technology Stack

 Frontend:
1. Mobile frameworks (React Native, Flutter) or web frameworks (React,
Angular).
 Backend:
1. Python (Flask or Django for the web framework).
 Machine Learning:
1. TensorFlow or PyTorch for model development.
 Database:
1. SQLite or PostgreSQL for data storage.
 Image Processing:
1. OpenCV and Pillow for image manipulation.

4.3 UML DIAGRAMS

FIGURE 4.3: Class Diagram

FIGURE 4.4: Sequence Diagram

FIGURE 4.5: Collaboration Diagram

5. TESTING & RESULTS

Testing is finding out how well something works. In terms of human beings, testing
tells what level of knowledge or skill has been acquired. In computer hardware and
software development, testing is used at key checkpoints in the overall process to
determine whether objectives are being met.
Testing is aimed at ensuring that the system works accurately and efficiently before
live operation commences.
Testing is best performed when users and developers work together to identify all
errors and bugs.
Sample data are used for testing. It is the quality, not the quantity, of the data
that matters in testing.
5.1 LEVELS OF TESTING
 Code testing:
 Code-based testing examines each line of a program's code to identify bugs or
errors during the software development process, and checks the logic of the
program.
 Specification testing:
 Specification testing checks the program's behaviour against its stated
requirements and specifications, rather than examining its internal code.
 Unit testing:
 Unit testing is testing the smallest testable unit of an application. It is done
during the coding phase by the developers. To perform unit testing, a
developer writes a piece of code (unit tests) to verify the code to be tested
(unit) is correct.
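As a concrete sketch, a unit test for the pixel-normalisation step of the pre-processing code might look as follows (the helper name normalize is illustrative, not taken from the project's code):

```python
import numpy as np

def normalize(image):
    """Scale pixel intensities from [0, 255] down to [0.0, 1.0]."""
    return image.astype("float32") / 255.0

def test_normalize_range():
    img = np.array([[0, 128, 255]], dtype=np.uint8)
    out = normalize(img)
    assert out.dtype == np.float32               # model expects float input
    assert out.min() == 0.0 and out.max() == 1.0 # values mapped into [0, 1]

test_normalize_range()
print("unit test passed")
```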

Every piece of software can be tested using the following unit-testing techniques:
1. Black Box Testing
2. White Box Testing
1. BLACK BOX TESTING
Black box testing is a software testing technique in which the functionality of the
software under test (SUT) is tested without looking at its internal code structure,
implementation details, or knowledge of its internal paths.
This type of testing is based entirely on the software requirements and specifications.
In black box testing we focus only on the inputs and outputs of the software system,
without concern for the internal workings of the program.

2. WHITE BOX TESTING

White box testing techniques analyze the internal structures of the software: the
data structures used, the internal design, the code structure, and the working of the
software, rather than just its functionality as in black box testing. It is also
called glass box testing, clear box testing, or structural testing.
Working process of white box testing:
 Input: Requirements, functional specifications, design documents, source
code.
 Processing: Performing risk analysis to guide the entire process.
 Test planning: Designing test cases so as to cover the entire code; executing
and repeating until error-free software is reached, and communicating the
results.

 Output: Preparing final report of the entire testing process.

Integration Testing:
Integration testing is a level of software testing where individual units are combined
and tested as a group. The purpose of this level of testing is to expose faults in the
interaction between integrated units. Integration testing is defined as the testing of
combined parts of an application to determine if they function correctly. It occurs
after unit testing and before validation testing. Integration testing can be done in
two ways:

Bottom-up integration
This testing begins with unit testing, followed by tests of progressively higher-level
combinations of units called modules or builds.

Top-down integration
In this testing, the highest-level modules are tested first, and progressively
lower-level modules are tested thereafter.
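A toy bottom-up integration test, with stub functions standing in for the real pre-processing and classification modules (the stage logic here is invented purely for illustration), could look like:

```python
def preprocess(pixels):
    """Stub pre-processing stage: normalize raw pixel values."""
    return [p / 255.0 for p in pixels]

def classify(features):
    """Stub classification stage: threshold on total intensity."""
    return "500" if sum(features) > 1.0 else "100"

def recognize(pixels):
    """Integrated pipeline: the unit-tested stages combined as a group."""
    return classify(preprocess(pixels))

# The integration test exercises the interaction between the stages,
# not the internals of each stage on its own.
assert recognize([255, 255, 255]) == "500"
assert recognize([10, 10, 10]) == "100"
print("integration test passed")
```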

5.2 IMPLEMENTATION:
pip install opencv-python tensorflow numpy matplotlib

import cv2
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array

# Load the pre-trained model
model = load_model('currency_recognition_model.h5')  # Replace with your model file

# Function to preprocess the image
def preprocess_image(image):
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
    image = cv2.resize(image, (128, 128))            # Resize to the model's input size
    image = img_to_array(image) / 255.0              # Normalize the image
    return np.expand_dims(image, axis=0)             # Add batch dimension

# Function to predict the currency
def predict_currency(image):
    preprocessed_image = preprocess_image(image)
    prediction = model.predict(preprocessed_image)
    return np.argmax(prediction)  # Return the index of the highest probability

# Load and process an example image
image_path = 'path_to_your_currency_image.jpg'  # Replace with your image path
image = cv2.imread(image_path)

# Predict currency
currency_index = predict_currency(image)
print(f'Predicted currency index: {currency_index}')

# Display the image
cv2.imshow('Currency Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
import os
import numpy as np
import cv2
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split

# Set parameters
img_size = (128, 128)
batch_size = 32
epochs = 10
data_dir = 'dataset/'  # Path to your dataset

# Prepare data
def load_data(data_dir):
    images = []
    labels = []
    label_map = {}
    for label, class_name in enumerate(os.listdir(data_dir)):
        label_map[label] = class_name
        class_dir = os.path.join(data_dir, class_name)
        for img_name in os.listdir(class_dir):
            img_path = os.path.join(class_dir, img_name)
            image = cv2.imread(img_path)
            image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
            image = cv2.resize(image, img_size)              # Resize to match model input
            images.append(image)
            labels.append(label)
    return np.array(images), np.array(labels), label_map

# Load dataset
images, labels, label_map = load_data(data_dir)
images = images.astype('float32') / 255.0  # Normalize

# Split data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(images, labels, test_size=0.2, random_state=42)

# Reshape images for model input (add channel dimension)
X_train = np.expand_dims(X_train, axis=-1)
X_val = np.expand_dims(X_val, axis=-1)

# Create data generators for augmentation
train_datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
                                   height_shift_range=0.1)
train_datagen.fit(X_train)

# Build the CNN model
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 1)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(len(label_map), activation='softmax')  # One output per class
])

# Compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(train_datagen.flow(X_train, y_train, batch_size=batch_size),
          validation_data=(X_val, y_val),
          epochs=epochs)

# Save the model
model.save('currency_recognition_model.h5')
import cv2
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import img_to_array

# Load the trained model
model = load_model('currency_recognition_model.h5')

# Function to preprocess the image
def preprocess_image(image):
    image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # Convert to grayscale
    image = cv2.resize(image, (128, 128))            # Resize to the model's input size
    image = img_to_array(image) / 255.0              # Normalize the image
    return np.expand_dims(image, axis=0)             # Add batch dimension

# Function to predict the currency
def predict_currency(image):
    preprocessed_image = preprocess_image(image)
    prediction = model.predict(preprocessed_image)
    return np.argmax(prediction)  # Return the index of the highest probability

# Load and process an example image
image_path = 'path_to_your_currency_image.jpg'  # Replace with your image path
image = cv2.imread(image_path)

# Predict currency (label_map comes from the training script above)
currency_index = predict_currency(image)
print(f'Predicted currency index: {currency_index} ({label_map[currency_index]})')

# Display the image
cv2.imshow('Currency Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
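Note that label_map is only defined inside the training script; a standalone prediction script needs it persisted alongside the model. One way to do this (a sketch using the standard json module; the file name and the example mapping are assumptions) is:

```python
import json

# Example mapping as produced by load_data() during training
label_map = {0: "10", 1: "20", 2: "50", 3: "100"}

# Save after training, next to the .h5 model file
with open('label_map.json', 'w') as f:
    json.dump(label_map, f)

# Reload at prediction time; JSON object keys come back as strings,
# so convert them back to the integer class indices
with open('label_map.json') as f:
    label_map = {int(k): v for k, v in json.load(f).items()}

print(label_map[0])
```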

5.3 OUTPUT:

6. CONCLUSION AND FUTURE SCOPE
6.1 Conclusion

The currency recognition system using image processing successfully demonstrates
the integration of computer vision and machine learning to identify different
denominations of banknotes. By leveraging techniques such as image enhancement,
feature extraction, and advanced modeling (e.g., CNNs), the system can effectively
classify currencies with high accuracy. This technology not only simplifies cash
handling for users but also enhances security and reduces fraud. The system's
performance can be further refined through continual learning and updating with
new data, ensuring adaptability to changes in currency designs.
6.2 Future Work

 Broader Currency Support: Expand the system to recognize a wider range of
currencies from different countries, including emerging markets.
 Real-time Recognition: Enhance the application for real-time currency
recognition, making it suitable for point-of-sale systems or mobile applications.
 Multilingual Support: Incorporate multilingual support for user interfaces,
enabling global usability.
 Advanced Features: Add functionality to detect counterfeit bills using security
features such as watermarks and infrared patterns.
 Integration with Financial Services: Partner with banks and financial
institutions to provide a secure and efficient way to process cash transactions.
 User Customization: Allow users to customize settings based on their
preferences, such as currency types and recognition modes.
 Research and Development: Explore the application of other image processing
techniques and machine learning models to enhance accuracy and robustness.
 Deployment on Edge Devices: Optimize the system for deployment on edge
devices, reducing the need for cloud computing and enabling offline functionality.

REFERENCES

[1] Bradski, G., & Kaehler, A. (2016). Learning OpenCV 4: Computer Vision with Python.
O'Reilly Media.
[2] Rosebrock, A. (2019). Deep Learning for Computer Vision with Python. PyImageSearch.
[3] Kaur, R., & Rani, S. (2021). Automatic Currency Recognition Using Image
Processing. International Journal of Computer Applications, 174(8), 1-6.
https://doi.org/10.5120/ijca2021921001
[4] Ng, A. (2020). Convolutional Neural Networks. Coursera. Retrieved from
https://www.coursera.org/learn/convolutional-neural-networks
[5] OpenCV. (n.d.). OpenCV Documentation. Retrieved from https://docs.opencv.org/
[6] Abdi, S., & Torkzadeh, J. (2020). A survey of image recognition techniques using machine
learning. International Journal of Computer Science and Network Security, 20(5), 29-37.
[7] TensorFlow. (n.d.). TensorFlow Documentation. Retrieved from https://www.tensorflow.org/
[8] Kaggle. (n.d.). Datasets. Retrieved from https://www.kaggle.com/datasets
