CURRENCY RECOGNITION SYSTEM USING IMAGE PROCESSING
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
By
Sampath Kumar, Praveen, Satya Veni
(218P1A0520, 218P1A0532, 218P1A0529)
2024-2025
AAR MAHAVEER INSTITUTE OF ENGINEERING & TECHNOLOGY
(Approved by AICTE, Affiliated to JNT University, Hyderabad)
Vyasapuri, Bandlaguda, Post: Keshavgiri, Hyderabad - 500045
CERTIFICATE
This is to certify that the mini project report entitled “CURRENCY RECOGNITION
SYSTEM USING IMAGE PROCESSING” being submitted by Sampath Kumar
to the Jawaharlal Nehru Technological University is a record of bona fide work carried out by
her under my guidance and supervision. The results embodied in this mini project report have
not been submitted to any other University or Institute for the award of any Degree or Diploma.
AAR MAHAVEER INSTITUTE OF ENGINEERING & TECHNOLOGY
(Approved by AICTE, Affiliated to JNT University, Hyderabad)
Vyasapuri, Bandlaguda, Post: Keshavgiri, Hyderabad - 500045
DECLARATION
PRAVEEN - 218P1A0532
ACKNOWLEDGEMENT
I thank the Head of the Department for his valuable support and all senior faculty members
of the CSE department for their help during my course. Thanks also to the programmers and
non-teaching staff of the CSE department of A.A.R.M.I.S.T.
I thank my principal, Dr. Punumalli Babu, for extending his utmost support and
co-operation in providing all the provisions, and the Management for providing excellent
facilities to carry out my project work.
Finally, special thanks to my parents and sister for their support and encouragement
throughout my life and this course. Thanks to all my friends and well-wishers for their
constant support.
ABSTRACT
There are around 200 different currencies in use in different countries around the
world. The technology of currency recognition aims to search for and extract the visible
as well as hidden marks on paper currency for efficient classification. A currency
recognition system is implemented to reduce human effort by automatically
recognizing the monetary value of a note without human supervision. The
task of currency recognition is not trivial, so it becomes very important to select
the right features and a proper algorithm for this purpose. The goal is to design an
easy and efficient algorithm that is useful for the maximum number of
currencies, since writing a different program for each one is a tedious job. The aim
of the project is to recognize currencies, not to authenticate them.
TABLE OF CONTENTS
1. Introduction ............................................................... 7
   1.1 Existing System .................................................... 10
   1.2 Proposed System .................................................. 12
   1.3 System Architecture .............................................. 13
2. Literature Survey ........................................................ 15
3. System Requirements .................................................. 18
   3.1 Software Requirements ........................................... 18
LIST OF FIGURES
1. INTRODUCTION
Importance of Currency Recognition
Currency recognition is crucial not only for businesses that deal with cash
transactions but also for enhancing the accessibility of financial services.
Individuals with visual impairments, for example, benefit significantly from
currency recognition systems that provide them with the ability to identify
notes independently. Furthermore, in regions where counterfeit currency
poses a significant risk, reliable recognition systems can play a critical role in
fraud prevention, thereby bolstering public confidence in the financial system.
Technological Foundations
The backbone of this project is image processing, a field that focuses on the
manipulation and analysis of images through computational techniques. By
employing algorithms that enhance image quality, extract meaningful
features, and classify data, we can create a system capable of discerning
various currency notes under differing conditions.
Key technologies involved include:
Computer Vision: Techniques that allow machines to interpret and process
visual data from the world, enabling them to recognize objects—such as
currency notes—based on their visual characteristics.
Machine Learning: Algorithms that enable the system to learn from data and
improve over time. By training models on labeled currency images, the system
can become proficient in recognizing different denominations.
Deep Learning: A subset of machine learning that uses neural networks to
analyze complex patterns in data. Convolutional Neural Networks (CNNs), in
particular, have shown remarkable success in image recognition tasks and are
instrumental in developing an efficient currency recognition system.
Project Overview
The primary goal of this project is to design an automated currency
recognition system that can accurately identify various denominations of
banknotes using image processing techniques. The system will encompass
several stages:
Image Acquisition: Capturing high-quality images of currency notes using
cameras or smartphones.
Pre-processing: Enhancing the images to improve recognition accuracy,
including noise reduction and contrast enhancement.
Feature Extraction: Identifying key features in the currency images that
differentiate one denomination from another.
Classification: Employing machine learning algorithms to categorize the
currency notes based on the extracted features.
User Interface: Developing an intuitive interface that provides real-time
recognition feedback, enabling users to interact seamlessly with the system.
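These stages fit together as a single pipeline. The skeleton below is a minimal sketch in Python of how the stages could be chained; the stage functions are placeholders for the modules described in the following sections, and cv2.imread stands in for whatever acquisition method is finally used.

import cv2

def preprocess(image):
    # Placeholder for the pre-processing stage (noise reduction, contrast enhancement)
    raise NotImplementedError

def extract_features(image):
    # Placeholder for the feature-extraction stage (edges, keypoints, learned features)
    raise NotImplementedError

def classify(features):
    # Placeholder for the classification stage (trained machine learning model)
    raise NotImplementedError

def recognize_currency(image_path):
    # Image acquisition: read a captured photo of the banknote
    image = cv2.imread(image_path)
    if image is None:
        raise ValueError('Could not read image: ' + image_path)
    # Pre-processing -> feature extraction -> classification
    return classify(extract_features(preprocess(image)))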
1.1 EXISTING SYSTEM
Several existing systems and technologies address currency recognition,
leveraging image processing, machine learning, and other techniques to
automate the identification and verification of banknotes. Here’s an overview
of some notable systems and approaches currently in use:
1. Commercial Currency Validators
Many banking and retail industries rely on commercial currency validation
machines. These devices utilize optical sensors and sophisticated algorithms
to detect and authenticate banknotes. Key features include:
Infrared and UV Scanning: These machines often employ infrared and
ultraviolet light to detect security features in banknotes, such as watermarks and
security threads.
Size and Shape Measurement: They assess the dimensions and shape of notes
to verify authenticity against predefined standards.
While effective, these systems are typically limited to specific denominations.
2. Mobile Applications
A growing number of mobile applications leverage smartphone cameras for
currency recognition, catering to various user needs, such as aiding visually
impaired individuals. Examples include:
Currency Recognition Apps: Apps like “Cash Reader” and “Seeing AI” utilize
image processing techniques to recognize and announce currency
denominations.
User-Friendly Interfaces: These apps are designed with accessibility in mind,
featuring voice recognition and audio feedback to assist users in identifying
notes.
While convenient, the performance of these applications can vary
significantly based on the quality of the camera and the lighting conditions.
3. Open-Source Libraries and Frameworks
OpenCV: This widely-used library offers numerous functions for image
processing, making it a solid foundation for developing currency recognition
systems. It includes algorithms for feature detection, image enhancement, and
contour analysis.
TensorFlow and PyTorch: These deep learning frameworks are often
employed in projects that utilize neural networks for image classification,
allowing developers to train models for recognizing different currency
denominations.
4. Research Prototypes
Numerous academic research projects focus on currency recognition using
advanced techniques, such as:
Convolutional Neural Networks (CNNs): Researchers have demonstrated the
effectiveness of CNNs for currency classification tasks, achieving high accuracy
rates by training models on large datasets of currency images.
Hybrid Approaches: Some studies combine traditional image processing
techniques with machine learning models to improve robustness against
variations in lighting, orientation, and quality of images.
These prototypes contribute to the academic understanding of image
recognition and offer frameworks that can be adapted for practical
applications.
5. Counterfeit Detection Systems
Beyond simple recognition, some systems focus on identifying counterfeit
notes. These typically incorporate:
Multi-Spectral Imaging: By capturing images at different wavelengths, these
systems can analyze hidden security features that are not visible under normal
lighting conditions.
Machine Learning for Anomaly Detection: Advanced algorithms can be
trained to recognize subtle differences between authentic and counterfeit notes
based on a variety of features.
Such systems are essential for financial institutions and retailers to mitigate
the risks associated with counterfeit currency.
Limitations of Existing Systems
Despite advancements, current systems face several limitations:
Adaptability: Many commercial validators are designed for specific currencies
and may not be easily updated for new designs or denominations.
Environmental Sensitivity: Mobile applications and other systems can struggle
with variations in lighting, angles, and image quality, affecting recognition
accuracy.
Cost: High-quality commercial currency validation machines can be
prohibitively expensive for smaller businesses or individuals.
1.3 System Architecture
The proposed system can be divided into several key components:
Image Acquisition
Hardware: Utilize high-resolution cameras (smartphones or dedicated cameras)
to capture images of banknotes from various angles and lighting conditions.
Input Method: Allow users to take photos or use a live camera feed for real-
time recognition.
Pre-processing Module
Image Enhancement: Apply techniques such as histogram equalization, noise
reduction, and contrast adjustment to improve image quality.
Binarization: Convert the images to binary format to facilitate contour
detection and feature extraction.
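A minimal sketch of such a pre-processing routine with OpenCV is shown below; the kernel size and the use of Otsu's threshold are illustrative assumptions rather than fixed design choices.

import cv2

def preprocess(image):
    # Convert to grayscale so enhancement and thresholding work on intensity values
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Histogram equalization improves contrast under uneven lighting
    equalized = cv2.equalizeHist(gray)
    # Gaussian blur suppresses sensor noise before thresholding
    denoised = cv2.GaussianBlur(equalized, (5, 5), 0)
    # Otsu's method picks a global threshold automatically for binarization
    _, binary = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary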
Feature Extraction
Contour Detection: Use edge detection algorithms (e.g., Canny) to identify the
contours of the banknotes.
Keypoint Detection: Implement feature detection algorithms like SIFT, SURF,
or ORB to extract distinctive features from the currency images.
Template Matching: Create templates for each denomination and employ
template matching to find similarities.
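The sketch below illustrates two of these ideas with OpenCV: ORB keypoint extraction and normalized template matching. The feature count and the interpretation of the match score are assumptions made only for illustration.

import cv2

orb = cv2.ORB_create(nfeatures=500)   # ORB is a free alternative to SIFT/SURF

def extract_keypoints(gray_image):
    # Detect keypoints and compute binary descriptors for the note
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return keypoints, descriptors

def match_template(gray_image, template):
    # Slide the denomination template over the image and record the best match score
    result = cv2.matchTemplate(gray_image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val, max_loc   # Scores near 1.0 indicate a strong match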
Classification Module
Machine Learning Model: Train a Convolutional Neural Network (CNN) on a
diverse dataset of currency images to classify different denominations
accurately.
Transfer Learning: Utilize pre-trained models to enhance performance,
especially when the dataset is limited.
Multi-class Classification: Ensure the model can distinguish between multiple
currencies and denominations.
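As a sketch of the transfer-learning idea, a pre-trained MobileNetV2 backbone can be reused with a new classification head. MobileNetV2, the 224x224 input size, and the class count below are assumptions chosen only to illustrate the approach.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7   # assumed number of denominations in the training set

# Pre-trained backbone with ImageNet weights; the original classifier head is dropped
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                          include_top=False, weights='imagenet')
base.trainable = False   # Freeze the backbone so only the new head is trained at first

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation='softmax')   # Multi-class denomination output
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])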
Post-processing
Verification: Cross-check the recognized denomination against a database to
ensure accuracy and flag potential mismatches.
User Feedback: Provide real-time audio or visual feedback indicating the
recognized currency, with options for user confirmation.
User Interface
Design: Create an intuitive interface that is easy to navigate, featuring clear
buttons for image capture, currency recognition, and settings.
Accessibility Features: Incorporate voice feedback and haptic responses for
users with visual impairments.
Counterfeit Detection
Anomaly Detection Algorithms: Implement machine learning algorithms to
analyze features that may indicate counterfeit notes.
Multi-Spectral Analysis: Explore the possibility of using multi-spectral
imaging to detect hidden security features in banknotes.
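One way to prototype the anomaly-detection idea is to train an Isolation Forest on feature vectors computed from genuine notes only, so that notes whose features fall outside that distribution are flagged. The feature file, contamination rate, and outlier convention below are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

# Feature vectors extracted from verified genuine notes (hypothetical pre-computed file)
genuine_features = np.load('genuine_note_features.npy')

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(genuine_features)

def looks_counterfeit(feature_vector):
    # IsolationForest returns -1 for outliers (suspected counterfeits), +1 for inliers
    return detector.predict(feature_vector.reshape(1, -1))[0] == -1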
Implementation Plan
Expected Outcomes
2. LITERATURE SURVEY
The field of currency recognition using image processing and machine learning has
evolved significantly, influenced by advancements in computer vision, artificial
intelligence, and mobile technology. This literature survey examines key studies,
methodologies, and technologies that have contributed to the development of
effective currency recognition systems.
1. Image Processing Techniques
Many researchers have focused on the application of traditional image
processing techniques for currency recognition.
Feature Extraction: Studies such as those by Bansal and Choudhary (2017)
emphasize the importance of feature extraction methods, including edge
detection and contour analysis, using algorithms like Canny edge detection and
Hough transforms. These techniques are fundamental in identifying the shape
and characteristics of banknotes.
Template Matching: Works like those by Wu et al. (2018) utilize template
matching for recognizing specific banknote features. By creating templates of
currency notes, these systems can compare captured images to templates to
determine the denomination.
3. Mobile Currency Recognition Applications
With the proliferation of smartphones, numerous mobile applications have
emerged for currency recognition.
Accessibility: Applications like "Cash Reader" and "Seeing AI" utilize
smartphone cameras to identify banknotes and provide audio feedback for
visually impaired users. Research by Pohl et al. (2021) highlights the
importance of designing user-friendly interfaces that cater to diverse user needs.
Challenges in Mobile Recognition: Studies have identified challenges such as
variations in lighting and image quality that affect recognition accuracy.
Solutions involve implementing robust pre-processing techniques and
optimizing algorithms for real-time performance (Hussain et al., 2022).
4. Counterfeit Detection
Counterfeit currency poses a significant challenge, prompting research into
systems that can distinguish between authentic and fake notes.
Multi-Spectral Imaging: Research by Sinha and Chowdhury (2019) explored
the use of multi-spectral imaging to identify security features in banknotes
that are not visible under standard lighting. This approach enhances the ability
to detect counterfeit notes effectively.
Anomaly Detection Algorithms: Some studies have focused on employing
machine learning algorithms for anomaly detection, identifying subtle
differences between authentic and counterfeit currency based on various
features (Rai et al., 2023).
5. Real-World Implementations
Several projects and systems have been developed and tested in real-world
scenarios, showcasing the practical applications of currency recognition
technology.
Commercial Solutions: Companies like Glory Global Solutions and Crane
Payment Innovations have developed advanced currency validation machines
that utilize a combination of optical recognition and machine learning
techniques to authenticate banknotes in retail and banking environments.
Academic Projects: Various academic institutions have implemented prototype
systems that integrate different methodologies for currency recognition,
contributing to the body of knowledge in this field. These projects often focus
on
addressing specific challenges, such as adaptability to new currency designs and improving
recognition rates under varying conditions.
6. Summary of Findings
The literature indicates a trend toward integrating advanced machine learning
techniques, particularly deep learning, into currency recognition systems. The
use of CNNs and transfer learning has significantly improved accuracy and
adaptability. However, challenges remain, including the need for robust
systems that can operate in diverse environments and the ongoing threat of
counterfeit currency.
3. SYSTEM REQUIREMENTS
3.1 SOFTWARE REQUIREMENTS
1. Development Environment
Programming Language:
1. Python
IDE or Text Editor:
1. PyCharm
2. Jupyter Notebook
3. Visual Studio Code
2. Image Processing Libraries
OpenCV
Pillow
3. Machine Learning Frameworks
TensorFlow
Keras
scikit-learn
4. Data Handling and Visualization
NumPy
Pandas
Matplotlib
Seaborn
5. Database Management
SQLite or PostgreSQL
SQLAlchemy
6. Version Control
Git
GitHub or GitLab
7. Deployment Tools
Flask or Django
Docker
8. Testing Frameworks
pytest
unittest
9. Accessibility Features
Sphinx
3.2 HARDWARE REQUIREMENTS
1. Camera
High-Resolution Camera:
o Type: A smartphone camera (with at least 12 MP) or a dedicated high-
resolution webcam/digital camera.
o Purpose: To capture clear images of banknotes for accurate recognition.
The camera should support good low-light performance to handle various
lighting conditions.
2. Processing Unit
Computer/Server Specifications:
o CPU: Multi-core processor (e.g., Intel i5 or better) for efficient data
processing and model inference.
o RAM: At least 8 GB, preferably 16 GB or more, to handle large datasets
and enable smooth multitasking during image processing and model
training.
o GPU: For training deep learning models, a dedicated GPU (e.g., NVIDIA
GeForce GTX 1060 or better) is recommended to significantly speed up
training times.
3. Storage
Hard Drive/SSD:
o Type: Solid State Drive (SSD) is preferred for faster read/write speeds,
especially during model training and data loading.
o Capacity: At least 256 GB, though 512 GB or more is recommended to
accommodate datasets, models, and application files.
4. User Interface
5. Power Supply
6. Network Requirements
Internet Connection:
o A stable internet connection may be required for cloud-based model
training, data storage, or updates (if applicable).
Smartphone/Tablet:
o If developing a mobile application, ensure compatibility with iOS (iPhone
7 or later) and Android devices (Android 8.0 or later).
o Features: Devices should have a good camera, sufficient RAM (at least 4
GB), and adequate processing power for real-time image recognition.
3.3 SOFTWARE TOOLS USED
3.3.1 Python
3.3.2 Google Colab
Google Colab is a free, cloud-based platform for data science and machine learning
development. It provides a Jupyter Notebook interface, 12 hours of runtime per
session, 50 GB of disk space, and pre-installed libraries like TensorFlow and
PyTorch. With GPU acceleration and real-time collaboration, Colab enables fast
prototyping, easy sharing, and cost-effective development. Ideal for data science,
machine learning, and deep learning projects, Colab integrates seamlessly with
Google Drive, allowing users to access and share notebooks effortlessly. Its
limitations include a 12-hour session limit and restricted disk space, but overall,
Google Colab streamlines data science workflows, making it an indispensable tool
for professionals and enthusiasts alike.
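For example, a dataset kept in Google Drive can be made available to a Colab session by mounting the drive; the dataset path below is only a placeholder.

from google.colab import drive

# Mount Google Drive inside the Colab runtime (prompts for authorization)
drive.mount('/content/drive')

# Hypothetical dataset location inside Drive
data_dir = '/content/drive/MyDrive/currency_dataset/'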
3.3.3 VS Code
Visual Studio Code is a lightweight but powerful source code editor which runs on
your desktop and is available for Windows, macOS and Linux. It comes with built-in
support for JavaScript, TypeScript and Node.js and has a rich ecosystem of extensions
for other languages (such as C++, C#, Java, Python, PHP, Go) and runtimes (such
as .NET and Unity). Visual Studio Code is a freeware source-code editor made by
Microsoft for Windows, Linux and macOS.
4. SYSTEM DESIGN
The system design for a currency recognition system involves outlining the
architecture, components, data flow, and user interactions. This section provides a
comprehensive overview of the system's design, focusing on its modular structure
and integration of various technologies.
4.1. System Architecture
The proposed currency recognition system consists of several key components
organized into a layered architecture:
User Interface Layer
1. Mobile Application/Web Interface: A user-friendly interface that allows
users to capture images of banknotes, view recognition results, and
receive feedback.
Application Logic Layer
1. Image Acquisition Module: Captures images from the camera.
2. Pre-processing Module: Enhances image quality through noise
reduction, resizing, and binarization.
3. Feature Extraction Module: Identifies key features using edge detection
and keypoint extraction techniques.
4. Classification Module: Utilizes machine learning models to classify the
currency based on extracted features.
Data Storage Layer
1. Database: Stores user data, recognized currency information, and model
parameters.
Integration Layer
1. APIs: Facilitates communication between the user interface and backend
processing modules.
FIGURE 4.1: System Design
Component Descriptions
1. User Interface Layer
Functionality: Provides an interactive platform for users to upload images,
view results, and interact with the system.
Design Considerations:
1. Accessibility features (e.g., voice feedback).
2. Simple navigation and clear instructions for capturing images.
2. Image Acquisition Module
Components:
1. Camera interface (smartphone or webcam).
Functionality: Captures images of banknotes in various orientations and
lighting conditions.
3. Pre-processing Module
Processes:
1. Image Enhancement: Adjusts brightness, contrast, and sharpness.
2. Noise Reduction: Applies filters to minimize noise.
3. Binarization: Converts the image to a binary format for easier analysis.
Tools: OpenCV functions for image manipulation.
4. Feature Extraction Module
Techniques:
1. Edge Detection: Uses Canny edge detection to find the edges of the
banknote.
2. Contour Detection: Identifies contours that define the note’s boundaries.
3. Keypoint Extraction: Utilizes algorithms like SIFT or ORB to extract
distinctive features.
Output: A set of features that represent the captured banknote.
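A minimal sketch of the edge- and contour-based steps, assuming OpenCV, a grayscale input, and a note that appears as the largest external contour, might look like this:

import cv2

def find_note_contour(gray_image):
    # Canny edge detection highlights the note's borders and printed patterns
    edges = cv2.Canny(gray_image, 50, 150)
    # Retrieve external contours only; the banknote is assumed to be the largest one
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    note = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(note)   # (x, y, width, height) of the detected note region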
5. Classification Module
Machine Learning Model:
1. CNN Architecture: A Convolutional Neural Network trained to
recognize and classify different banknote denominations.
Training: The model is trained on a large dataset of currency images to learn
features associated with each denomination.
Output: The predicted denomination of the currency based on the extracted
features.
6. Counterfeit Detection Module
Methods:
1. Analyzes security features (e.g., watermarks, UV patterns) to detect
counterfeit notes.
2. Uses anomaly detection techniques to identify discrepancies in features.
Output: A determination of whether the note is genuine or counterfeit.
7. Data Storage Layer
Database:
1. SQLite or PostgreSQL: For storing user data, recognition results, and
model metadata.
Structure:
1. Tables for user information, currency details, and model performance
metrics.
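A minimal sketch of the recognition-results table using Python's built-in sqlite3 module is shown below; the table name and columns are a hypothetical schema, not the project's final design.

import sqlite3

conn = sqlite3.connect('currency_recognition.db')
conn.execute("""
    CREATE TABLE IF NOT EXISTS recognition_results (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        image_name TEXT,
        denomination TEXT,
        confidence REAL,
        is_counterfeit INTEGER,
        recognized_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

def save_result(image_name, denomination, confidence, is_counterfeit):
    # Store each recognition so results can be reviewed or audited later
    conn.execute(
        "INSERT INTO recognition_results (image_name, denomination, confidence, is_counterfeit) "
        "VALUES (?, ?, ?, ?)",
        (image_name, denomination, confidence, int(is_counterfeit)),
    )
    conn.commit()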
8. Integration Layer
APIs:
1. RESTful APIs to facilitate communication between the frontend and
backend components.
2. Handles requests for image processing, recognition results, and user data
retrieval.
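A REST endpoint along these lines could be sketched with Flask as follows; the route name is an assumption, and preprocess, extract_features, and classify stand for the hypothetical helper functions of the modules described above.

import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/recognize', methods=['POST'])
def recognize():
    # Expect the banknote image as a multipart file upload named 'image'
    file = request.files['image']
    data = np.frombuffer(file.read(), dtype=np.uint8)
    image = cv2.imdecode(data, cv2.IMREAD_COLOR)
    cleaned = preprocess(image)                       # pre-processing module
    label = classify(extract_features(cleaned))       # classification module
    return jsonify({'denomination': label})

if __name__ == '__main__':
    app.run(debug=True)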
FIGURE 4.2: Use Case Diagram
Image Capture: User opens the app and captures a photo of the banknote.
Processing: The app processes the image, enhancing it and extracting features.
Recognition: The system classifies the note and checks for counterfeits.
Feedback Display: The result (denomination and counterfeit status) is
displayed on the interface.
User Confirmation: Users can confirm the recognition or provide feedback if
the result is incorrect.
Technology Stack
Frontend:
1. Mobile frameworks (React Native, Flutter) or web frameworks (React,
Angular).
Backend:
1. Python (Flask or Django for the web framework).
Machine Learning:
1. TensorFlow or PyTorch for model development.
Database:
1. SQLite or PostgreSQL for data storage.
Image Processing:
1. OpenCV and Pillow for image manipulation.
4.3 UML DIAGRAMS
FIGURE 4.5: Collaboration Diagram
5. TESTING & RESULTS
Testing is finding out how well something works. In terms of human beings, testing
tells what level of knowledge or skill has been acquired. In computer hardware and
software development, testing is used at key checkpoints in the overall process to
determine whether objectives are being met.
Testing is aimed at ensuring that the system works accurately and efficiently before live
operation commences.
Testing is best performed when users and developers are asked to assist in identifying all
errors and bugs.
Sample data are used for testing; it is not the quantity but the quality of the data used
that matters in testing.
5.1 LEVELS OF TESTING
Code testing:
Code-based testing involves exercising each line of code of a program to
identify bugs or errors during the software development process and to examine
the logic of the program.
Specification testing:
The system is tested against its specifications to verify that it performs according to
the stated requirements, without reference to the internal code.
Unit testing:
Unit testing is testing the smallest testable unit of an application. It is done
during the coding phase by the developers. To perform unit testing, a
developer writes a piece of code (unit tests) to verify the code to be tested
(unit) is correct.
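For instance, a pytest unit test for the pre-processing module might look like the sketch below; the module path and the function preprocess are assumed names for the unit under test, and the expected binary output follows the design described in Chapter 4.

import numpy as np
from preprocessing import preprocess   # hypothetical module containing the unit under test

def test_preprocess_returns_binary_image():
    # A synthetic grey rectangle stands in for a captured banknote photo
    fake_note = np.full((128, 256, 3), 128, dtype=np.uint8)
    result = preprocess(fake_note)
    # The pre-processing module is expected to output a single-channel binary image
    assert result.ndim == 2
    assert set(np.unique(result).tolist()).issubset({0, 255})

Running pytest from the project root would collect and execute this test automatically.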
Every piece of software can be tested using the following unit testing techniques:
1. Black Box Testing
2. White Box Testing
1. BLACK BOX TESTING
Black box testing is a software testing technique in which the functionality of the software
under test (SUT) is tested without looking at its internal code structure,
implementation details, or knowledge of its internal paths.
This type of testing is based entirely on the software requirements and specifications. In
black box testing we focus only on the inputs and outputs of the software system,
without relying on knowledge of the program's internal workings.
Output: A final report of the entire testing process is prepared.
Integration Testing:
Integration testing is a level of software testing where individual units are combined
and tested as a group. The purpose of this level of testing is to expose faults in the
interaction between integrated units. Integration testing is defined as the testing of
combined parts of an application to determine if they function correctly. It occurs
after unit testing and before validation testing. Integration testing can be done in two
ways: bottom-up integration and top-down integration.
5.2 IMPLEMENTATION:
The following listing installs the required packages, loads a labelled dataset of currency
images, trains a simple CNN, and then predicts and displays the denomination of a new
image. The dataset path and the example image path are placeholders to be replaced with
actual paths.

pip install opencv-python tensorflow numpy matplotlib

import os
import cv2
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Set parameters
img_size = (128, 128)
batch_size = 32
epochs = 10
data_dir = 'dataset/'  # Path to your dataset (one sub-folder per denomination)

# Prepare data: read every image, resize it, and record its class label
def load_data(data_dir):
    images = []
    labels = []
    label_map = {}
    for label, class_name in enumerate(sorted(os.listdir(data_dir))):
        label_map[label] = class_name
        class_dir = os.path.join(data_dir, class_name)
        for file_name in os.listdir(class_dir):
            image = cv2.imread(os.path.join(class_dir, file_name))
            if image is None:
                continue  # Skip unreadable files
            image = cv2.resize(image, img_size)
            images.append(image)
            labels.append(label)
    return np.array(images), np.array(labels), label_map

# Load dataset
images, labels, label_map = load_data(data_dir)
images = images.astype('float32') / 255.0  # Normalize pixel values to [0, 1]

# Define the model (a simple CNN; the exact architecture shown here is an assumed example)
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(img_size[0], img_size[1], 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation='relu'),
    Dense(len(label_map), activation='softmax')
])

# Compile the model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train the model
model.fit(images, labels, batch_size=batch_size, epochs=epochs, validation_split=0.2)

# Predict the denomination index of a single image
def predict_currency(image):
    resized = cv2.resize(image, img_size).astype('float32') / 255.0
    batch = np.expand_dims(resized, axis=0)   # Add the batch dimension expected by Keras
    return int(np.argmax(model.predict(batch)))

# Load and process an example image
image_path = 'path_to_your_currency_image.jpg'  # Replace with your image path
image = cv2.imread(image_path)

# Predict currency
currency_index = predict_currency(image)
print(f'Predicted currency index: {currency_index} ({label_map[currency_index]})')

# Display the image
cv2.imshow('Currency Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
5.3 OUTPUT:
6. CONCLUSION AND FUTURE SCOPE
6.1 Conclusion
Integration with Financial Services: Partner with banks and financial institutions
to provide a secure and efficient way to process cash transactions.
REFERENCES
[1] Bradski, G., & Kaehler, A. (2016). Learning OpenCV 4: Computer Vision with Python. O'Reilly Media.
[2] Rosebrock, A. (2019). Deep Learning for Computer Vision with Python. PyImageSearch.
[3] Kaur, R., & Rani, S. (2021). Automatic Currency Recognition Using Image Processing. International Journal of Computer Applications, 174(8), 1-6. https://doi.org/10.5120/ijca2021921001
[4] Ng, A. (2020). Convolutional Neural Networks. Coursera. Retrieved from https://www.coursera.org/learn/convolutional-neural-networks
[5] OpenCV. (n.d.). OpenCV Documentation. Retrieved from https://docs.opencv.org/
[6] Abdi, S., & Torkzadeh, J. (2020). A survey of image recognition techniques using machine learning. International Journal of Computer Science and Network Security, 20(5), 29-37.
[7] TensorFlow. (n.d.). TensorFlow Documentation. Retrieved from https://www.tensorflow.org/
[8] Kaggle. (n.d.). Datasets. Retrieved from https://www.kaggle.com/datasets