B.Tech Project Document - 2025
BACHELOR OF TECHNOLOGY
IN
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
Submitted by
P.PRANEETHA - 206N1A05B4
P.MUKESH NATH - 206N1A05B1
J.SINDHU - 206N1A0593
CH.TEJA SAI - 206N1A0577
2024-2025
SRINIVASA INSTITUTE OF ENGINEERING & TECHNOLOGY
(UGC – Autonomous Institution)
(Approved by AICTE, Permanently affiliated to JNTUK, Kakinada) (ISO 9001:2015 Certified Institute)
(Accredited by NAAC with 'A' Grade) (Recognized by UGC under Sections 2(f) & 12(B))
CERTIFICATE
This is to certify that the project work entitled “Currency Detection for Visually Impaired
People Using Image Processing” is a bonafide work of P.PRANEETHA, P.MUKESH NATH,
J.SINDHU and CH.TEJA SAI of IV B.Tech in the Computer Science and Engineering Department,
Srinivasa Institute of Engineering and Technology, Amalapuram, affiliated to Jawaharlal Nehru
Technological University Kakinada, during the academic year 2024-2025, in partial fulfilment of the
requirements for the award of Bachelor of Technology in Computer Science and Engineering.
EXTERNAL EXAMINER
ACKNOWLEDGEMENT
We express our deep-hearted thanks to Mrs. V SAIPRIYA, our beloved Head of the
Department, for being helpful in providing us with her valuable advice and timely guidance.
We would like to thank the Principal, Dr. M SREENIVASA KUMAR, and the management of
Srinivasa Institute of Engineering and Technology for providing us with the requisite
facilities to carry out the project on campus.
Our deep-hearted thanks to all the faculty members of our department for their value-based
imparting of theoretical and practical subjects, which we put to use in our project. We are also
indebted to the non-teaching staff for their co-operation.
We would like to thank our friends and family members for their help and support in
making our project a success.
P.Praneetha
P.Mukesh Nath
J.Sindhu
CH.Teja Sai
Sl.No INDEX Page No.
CERTIFICATE I
ACKNOWLEDGEMENTS II
CONTENTS III
LIST OF FIGURES V
ABSTRACT VI
1 INTRODUCTION 1
2 BACKGROUND
3 METHODOLOGY
3.1 Existing System 6
3.4 Data Flow Diagram 9
3.5 Modules 10
4 EXPERIMENTAL ASPECTS
6 SYSTEM TESTING
7 SAMPLE CODE 43
9 CONCLUSION 53
10 BIBLIOGRAPHY 55
LIST OF FIGURES
ABSTRACT
Currency is the medium of exchange, and money-related transactions are an important part of our
day-to-day lives. Along with technology, the banking sector is also being modernized and explored. In
spite of the widespread usage of ATMs, credit and debit cards, and other digital modes of payment such
as Google Pay, Paytm, and PhonePe, cash is still widely used for most daily transactions due to its
convenience. Currency recognition, or banknote recognition, is the process of identifying the
denominational value of a currency. It is a simple and straightforward task for sighted people, but for
visually challenged people currency recognition is a challenging task. Visually handicapped people
have a difficult time distinguishing between different cash denominations. Even though unique symbols
are embossed on different Indian currency notes, the task is still too difficult and time-consuming for
the blind. This brings a deep need for automatic currency recognition systems. So, our paper studies
such systems in order to help visually challenged or impaired people differentiate between various
types of Indian currency through the implementation of image processing techniques. The study aims
to investigate different techniques for recognising Indian rupee banknotes. The proposed work extracts
distinctive properties of Indian currency notes, among them the central number, the RBI logo, the
colour band, and the special symbols or marks for the visually impaired, and applies algorithms
designed to detect each specific feature. Through our work, visually impaired people will be capable of
recognizing different types of Indian currency during their monetary transactions, so that they can lead
their lives independently, both socially and financially.
CHAPTER 1
INTRODUCTION
Currency Detection For Visually impaired people using image processing
1. INTRODUCTION
Over 2.2 billion people worldwide suffer from visual impairment, including 1 billion people with
severe or acute distance vision impairment or blindness, the majority of whom are over 50 years
old. Glaucoma, cataracts, untreated presbyopia, and refractive error are the most common causes of
debilitation. According to the World Health Organization, the number of persons affected by visual
impairment will more than double by 2020. Assistive aids, such as guide dogs or white canes,
are commonly used by visually impaired people. The white cane is most usually chosen for reasons
such as low cost, portability, and widespread acceptance within the blind population. However,
when faced with a range of obstacles and conditions in their daily lives, these assistive devices
have their own limitations. People frequently regard such individuals as a burden and leave them to
fend for themselves. As a result, a visually impaired individual requires an assistive gadget on a
regular basis, which can help with their day-to-day responsibilities and rehabilitation. People in their
eighties and nineties have a higher risk of vision loss. For those with visual impairments, the
assistive system plays an important role in social situations; without this assistive equipment, they
are reliant on others. In addition, the cost of rehabilitation is out of reach for low-income people.
Currently, India has around 12 million blind people, which makes India home to one-third of the
world's blind population.
So, a real-time Indian currency detection device will be very beneficial for visually impaired
persons in India. Several frameworks and strategies for healthcare services have been created in the
last decade. The goal of these improvements is to lower the cost of medical diagnosis while also
assisting the health sector with technology that allows people to self-manage their lives more
readily than ever before, without the need for direct supervision from an expert. People with
impairments, however, were not the primary beneficiaries of these achievements. There is a
pressing need for technology that can help and assist such people in their daily lives, improve their
living in a simple way, and lead to independence. Visual impairment may be the most serious of
these disabilities. Currencies are significant as a medium of buying and selling goods. Each country
has its own currency, which comes in a variety of colors, sizes, shapes, and patterns. Visually
challenged people find it difficult to detect and count different denominations of currency. Due to
continual use, tactile marks on a banknote's surface wear or fade away, making it difficult for
visually impaired people to detect and identify banknotes accurately by touch. Digital image
processing is a large field that provides solutions to problems like these, in which patterns and
identification markings are searched for, extracted, and then compared to actual banknote images.
The key contribution of the suggested banknote detection and recognition system is to build a
simple, easy-to-use standalone system that will assist individuals in identifying banknotes in a
real-time scenario. After augmentation and human annotation, a demanding self-built dataset is
constructed, and transfer learning is done on the YOLO-v5 model.
LITERATURE SURVEY
2. BACKGROUND
Deep convolutional neural networks have been used to classify Turkish lira banknotes, developed
and trained using the DenseNet-121 architecture. Another work applies image processing techniques
on the front and back sides of Myanmar currency (kyats) in three denominations: Zernike moments
were employed for feature extraction, and the k-nearest neighbour method was applied for
classification. A further study uses a neural network to overcome these types of problems for
visually challenged people; its findings suggest that further major research on cognition frameworks
and neural activity could lead to more significant results in these types of challenges. Another work
describes a portable technology allowing blind persons to identify and recognize Euro currencies,
with modified Viola-Jones algorithms used to detect banknotes; the recognition of currencies is
centered on a modified Viola-Jones algorithm and the SURF (Speeded-Up Robust Features) algorithm.
According to the literature, the YOLO-v5 network has higher accuracy and speed. The YOLO-v5
network is a more sophisticated version of the YOLO and YOLO-v3 networks.
YOLO-v5's network architecture is divided into three sections: CSPDarknet is the backbone,
PANet is the neck, and the YOLO layer is the head.
The data is first supplied to CSPDarknet, which extracts features, and then to PANet, which fuses
them. Finally, the YOLO layer gives the results of detection (class, score, location, size).
YOLO-v5 is smaller, faster, and more accurate than earlier YOLO networks. As a result, a YOLO-v5
based CNN model can be used to create a rapid and accurate banknote detection and recognition
system for visually impaired and blind persons, to assist them in their daily life.
IMAGE AUGMENTATION AND ANNOTATION
A camera with a resolution of 1280*720 pixels is used in this method to capture images in various
settings, such as occlusion and illumination (lighting from the front, side, and back). In total, 3720
banknote pictures were collected. Images were also obtained from the camera and the web in many
file types (.jpg, .jpeg, .png, etc.). For each banknote denomination, this collection of data is
separated into three categories: training, validation, and test sets. For the training set, 65-70 percent
of the photographs were chosen at random; the remaining images form the validation and test sets.
Image augmentation is additionally done to create a large image dataset, which avoids overfitting in
the training model while retaining the necessary details of the images in the dataset. Through
augmentation, these 3720 photos were expanded to a total of 10,000 photos.
Various image augmentation techniques are used to generate the dataset for all banknote categories.
A number of image enhancement techniques are available, such as resizing, shear, rotation,
brightness adjustment, reflection, color elimination, and translation, along with noise and
background variation.
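Several of the augmentation techniques named above can be sketched with plain NumPy. This is an illustrative sketch, not the project's actual augmentation code; the helper names (`reflect`, `rotate90`, `adjust_brightness`, `add_noise`, `augment`) are ours.

```python
import numpy as np

def reflect(img):
    """Horizontal reflection (mirror) of an image array."""
    return img[:, ::-1].copy()

def rotate90(img, k=1):
    """Rotate the image by k * 90 degrees."""
    return np.rot90(img, k).copy()

def adjust_brightness(img, factor):
    """Scale pixel intensities, clipping to the valid 0-255 range."""
    return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def add_noise(img, sigma=10.0, seed=0):
    """Add Gaussian noise to simulate sensor and illumination variation."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def augment(img):
    """Generate several augmented variants of one banknote image."""
    return [reflect(img), rotate90(img), adjust_brightness(img, 1.3), add_noise(img)]

if __name__ == "__main__":
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a camera frame
    variants = augment(frame)
    print(len(variants))  # 4 variants per source image
```

Applying a handful of such transforms per source image is how 3720 photos can be expanded toward a 10,000-image dataset.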
YOLO is used in a variety of applications. The algorithm detects and recognises different items in
an image in real time. Object recognition in YOLO is performed as a regression problem, and the
class probabilities of the detected objects are returned. Convolutional neural networks (CNNs) are
used in the YOLO method to recognise objects in real time. To detect objects, the approach takes
just a single forward propagation through a neural network, as the name suggests. This means that a
single algorithm run makes predictions over the entire image. At the same time, the CNN is used to
predict various bounding boxes and class labels. There are several variations of the YOLO
algorithm; Tiny YOLO and YOLOv3 are two popular examples. The YOLO algorithm is significant
for the following reasons:
Speed: Since it can predict objects in real time, this system improves detection speed.
High precision: YOLO is a prediction technique that produces appropriate results with low
background errors.
Learning abilities: The algorithm has enriched learning abilities, allowing it to learn and apply
object representations to object detection.
YOLO version 5 is the most advanced object detection algorithm in the family, launched by
Ultralytics in June 2020. It is a convolutional neural network (CNN) that identifies objects in real
time with great precision. This method uses a single neural network to evaluate the full image, then
divides it into regions and predicts bounding boxes and probabilities for each region. These
bounding boxes are weighted based on the estimated likelihood. In the sense that it produces
predictions after only one forward propagation through the neural network, the approach "looks
once" at the image. After non-max suppression, it provides the detected items; this ensures that the
object recognition algorithm identifies each object only once. Since its initial launch, YOLOv5 has
been fast in performance and simple to use. YOLOv5 is incredibly user-friendly and comes "out of
the box" ready to use on bespoke objects. The majority of the performance gain in YOLOv5 comes
from PyTorch training processes, while the model architecture remains similar to that of YOLOv4.
The goal is to create an object detector model with high accuracy (on the Y-axis) at low inference
time (on the X-axis).
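The non-max suppression step mentioned above can be illustrated with a minimal NumPy sketch. This is not the YOLO-v5 implementation itself; boxes are assumed to be in `[x1, y1, x2, y2]` format with one confidence score each, and the function names are ours.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop heavily overlapping ones, repeat."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # survivors are boxes whose overlap with the kept box is small enough
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return [int(i) for i in keep]

if __name__ == "__main__":
    boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
    scores = np.array([0.9, 0.8, 0.7])
    print(nms(boxes, scores))  # [0, 2]: the overlapping duplicate is suppressed
```

This is exactly what ensures each banknote in the frame is reported only once.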
Histogram and probability function
The dynamic range of the gray levels in an image provides global information on the extent, or
spread, of intensity levels across the image. However, the dynamic range does not provide any
information on the occurrence of intermediate gray levels in the image. The occurrence of
gray-level components is described by co-occurrence matrices of relative frequencies. The
probability function of a gray-level image is estimated from its histogram.
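The estimate described above is simply the normalized histogram: the probability of gray level k is the count of pixels at level k divided by the total number of pixels. A minimal sketch (function name is ours):

```python
import numpy as np

def gray_level_probability(img, levels=256):
    """Estimate p(k) = n_k / N from the image histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    return hist / img.size

if __name__ == "__main__":
    # Tiny 2x3 grayscale image: two black pixels, three white, one mid-gray.
    img = np.array([[0, 0, 255], [255, 255, 128]], dtype=np.uint8)
    p = gray_level_probability(img)
    print(p[255], p.sum())  # 0.5 1.0
```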
METHODOLOGY
3. METHODOLOGY
3.1 Existing System
Disadvantages:
Visually impaired individuals are unable to use visual cues such as color, shape, or size
to differentiate between different denominations of currency. This can make it difficult
for them to quickly and accurately determine the value of money.
Visually impaired individuals may need to rely on others to identify and count their
money for them, which can be inconvenient and may compromise their independence.
Because visually impaired individuals may not be able to easily identify the value of
their money, they may be more vulnerable to scams and fraud.
Some visually impaired individuals may not have access to assistive technology such as
screen readers or magnifying devices that can help them identify the value of their
money. This can further limit their ability to manage their finances independently.
Advantages:
By using an image processing system for currency detection, visually impaired people can
become more independent when it comes to handling and managing their own finances.
They won't need to rely on others to identify the denominations of their bills.
An image processing system can accurately detect and recognize different currencies,
making it easier for visually impaired individuals to distinguish between them. This can
reduce the risk of fraud and errors when handling money.
The use of an image processing system for currency detection can offer convenience to
visually impaired people when they need to make transactions, count money, or perform
other financial tasks.
With the help of a currency detection system, visually impaired individuals can quickly
identify and count their money, saving them time and effort.
The use of a currency detection system can be a cost-effective solution for visually
impaired individuals, as it eliminates the need for them to rely on assistance from others
to handle their finances. This can also increase their confidence and sense of
independence.
3.5 Modules:-
Image Acquisition
Image Preprocessing
Feature Extraction
Classification
Currency Identification
User Interface
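The modules listed above can be wired together as a simple pipeline. This is a skeletal sketch only: every class and method name here is an illustrative stand-in, not the project's actual code.

```python
class CurrencyPipeline:
    """Hypothetical wiring of the six modules listed above."""

    def acquire(self, source):
        """Image Acquisition: grab a frame from a camera or file path."""
        return source  # placeholder: would return an image array

    def preprocess(self, image):
        """Image Preprocessing: resize, denoise, normalize."""
        return image

    def extract_features(self, image):
        """Feature Extraction: central number, RBI logo, colour band, marks."""
        return {"features": image}

    def classify(self, features):
        """Classification: map extracted features to a denomination label."""
        return "500"  # placeholder label

    def identify(self, source):
        """Currency Identification: run the full chain, return a UI message."""
        image = self.preprocess(self.acquire(source))
        label = self.classify(self.extract_features(image))
        return f"Detected: Rs. {label}"

if __name__ == "__main__":
    print(CurrencyPipeline().identify("capture.jpg"))  # Detected: Rs. 500
```

The User Interface module would then present the returned message, for example through audio output.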
Goals:
• Provide users with a ready-to-use, expressive visual modelling language so that they can
develop and exchange meaningful models.
• Provide extendibility and specialization mechanisms to extend the core concepts.
• Be independent of particular programming languages and development processes.
• Provide a formal basis for understanding the modelling language.
• Encourage the growth of the OO tools market.
In software engineering, a class diagram in the Unified Modelling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
explains which class contains information.
A use case diagram in the Unified Modelling Language (UML) is a type of behavioural diagram
defined by and created from a Use-case analysis. Its purpose is to present a graphical overview
of the functionality provided by a system in terms of actors, their goals (represented as use
cases), and any dependencies between those use cases. The main purpose of a use case diagram
is to show what system functions are performed for which actor. Roles of the actors in the
system can be depicted.
The activity diagram is another important diagram in UML, used to describe the dynamic aspects
of a system. An activity diagram is basically a flow chart representing the flow from one activity
to another; an activity can be described as an operation of the system. It captures the dynamic
behaviour of the system. The other four behavioural diagrams are used to show the message flow
from one object to another, but the activity diagram shows the flow from one activity to another.
OPERATING ENVIRONMENT
4. EXPERIMENTAL ASPECTS
PYTHON PROGRAM:
FEATURES OF PYTHON:
Open source:
Python is publicly available open-source software; anyone can use the source code, which doesn't
cost anything.
Easy to learn:
Python is a popular (scripting/extension) language with clear and easy syntax, no type declarations,
automatic memory management, and high-level data types and operations. It is designed to be fast
to read (with a more English-like syntax) and fast to write (with shorter code compared to C, C++,
and Java).
High-level language:
A high-level language (closer to human language) offers a higher level of abstraction than machine
language or assembly languages. Python is an example of a high-level language, like C, C++, Perl,
and Java, with low-level optimization.
Portable:
High-level languages are portable, which means they are able to run across all major hardware and
software platforms with few or no changes in source code. Python is portable and can be used on
Linux, Windows, Macintosh, Solaris, FreeBSD, OS/2, Amiga, AROS, AS/400 and many more.
Object-oriented:
Python supports object-oriented programming with classes, objects, and inheritance.
Interactive:
Python has an interactive console where you get a Python prompt (command line) and can interact
with the interpreter directly to write and test your programs. This is useful for mathematical
programming.
Interpreted:
Python programs are interpreted: the interpreter takes source code as input, compiles each
statement (to portable byte-code), and executes it immediately. There is no need for compiling or
linking.
Extendable:
Python is often referred to as a "glue" language, meaning that it is capable of working in a
mixed-language environment. The Python interpreter is easily extended and can add new built-in
functions or modules written in C/C++/Java code.
Libraries:
Python has libraries for databases, web services, networking, numerical computing, graphical user
interfaces, 3D graphics, and more.
PYTHON INTERPRETER
In interactive mode, you type Python statements and the interpreter displays the result.
The >>> symbol signals the start of a Python interpreter's command line.
The Python interpreter evaluates inputs (for example, >>> 4*(6-2) returns 16).
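The same expressions evaluated at the interactive prompt can also be run from a script file, with print showing the results:

```python
# In a script, print displays what the interactive prompt would echo.
print(4 * (6 - 2))  # 16, matching the interactive example above
print(2 ** 10)      # 1024: ** is Python's exponentiation operator
```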
Python is very stable. New, stable releases have been coming out roughly every 6 to 18 months
since 1991, and this seems likely to continue; currently there are usually around 18 months between
major releases.
The latest stable releases can always be found on the Python download page. There are two
recommended production-ready versions at this point in time, because at the moment there are two
branches of stable releases: 2.x and 3.x. Python 3.x may be less useful than 2.x, since currently
there is more third-party software available for Python 2 than for Python 3. Python 2 code will
generally not run unchanged in Python 3.
HISTORY:
The name Python was selected from "Monty Python's Flying Circus", a British sketch comedy
series created by the comedy group Monty Python and broadcast by the BBC from 1969 to 1974.
Python was created in the early 1990s by Guido van Rossum at the National Research Institute for
Mathematics and Computer Science in the Netherlands. Python was created as a successor of a
language called ABC and was released publicly in 1991. Guido remains Python's principal author,
although it includes many contributions from an active user community.
Between 1991 and 2001 several versions were released; the current stable release is 3.2. In 2001 the
Python Software Foundation (PSF) was formed, a non-profit organization created specifically to
own Python-related intellectual property. Zope Corporation is a sponsoring member of the PSF.
Almost all Python releases are open source; see the Python website for details of release versions
and licence agreements.
PYTHON ENVIRONMENT:
Python runs on many platforms, including OS/400 and Pocket PC. Typical application areas include:
Web development.
Internet scripting.
Embedded scripting.
Game programming.
Why Python
o Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, etc.).
o Python has a simple syntax similar to the English language.
o Python has syntax that allows developers to write programs with fewer lines than some
other programming languages.
o Python runs on an interpreter system, meaning that code can be executed as soon as it is
written. This means that prototyping can be very quick.
o Python can be treated in a procedural way, an object-oriented way, or a functional way.
Good to know
The most recent major version of Python is Python 3, which we shall be using in this
tutorial. However, Python 2, although not being updated with anything other than security
updates, is still quite popular.
In this tutorial Python will be written in a text editor. It is also possible to write Python in an
Integrated Development Environment, such as Thonny, PyCharm, NetBeans or Eclipse,
which are particularly useful when managing larger collections of Python files.
o Python was designed for readability, and has some similarities to the English language with
influence from mathematics.
o Python uses new lines to complete a command, as opposed to other programming
languages, which often use semicolons or parentheses.
o Python relies on indentation, using whitespace, to define scope, such as the scope of
loops, functions and classes. Other programming languages often use curly brackets for
this purpose.
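The last two points above can be seen in a short example: each statement ends at the newline, and the bodies of the function, the if/else branches, and the loop are defined purely by indentation.

```python
# Indentation, not braces, defines which lines belong to each block.
def classify_number(n):
    if n % 2 == 0:       # this indented block belongs to the if
        label = "even"
    else:                # and this one to the else
        label = "odd"
    return label         # back at function level

for value in [1, 2, 3]:  # the loop body is the indented line below
    print(value, classify_number(value))
```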
Installation on Windows
Visit the link https://www.python.org/downloads/ to download the latest release of
Python. In this process, we will install Python 3.7.6 on our Windows operating system.
Double-click the executable file which is downloaded; the installer window will open.
The next window shows all the optional features. All the features are checked and installed
by default; we need to click Next to continue.
The following window shows a list of advanced options. Check all the options which you
want to install and click Next. Here, we must notice that the first check-box (install for
all users) must be checked.
Now, try to run Python on the command prompt. Type the command python in the
case of Python 2 or python3 in the case of Python 3. It will show an error as given in the
below image. That is because we haven't set the path.
To set the path of Python, we need to right-click on "My Computer" and go to
Properties → Advanced → Environment Variables.
Type PATH as the variable name and set the value to the installation directory of
Python, as shown in the below image.
Now that the path is set, we are ready to run Python on our local system. Restart CMD and type
python again. It will open the Python interpreter shell, where we can execute Python
statements.
Packages Introduction
Python applications will often use packages and modules that don't come as part of the
standard library. Applications will sometimes need a specific version of a library, because
the application may require that a particular bug has been fixed, or the application may be
written using an obsolete version of the library's interface.
This means it may not be possible for one Python installation to meet the requirements of
every application. If application A needs version 1.0 of a particular module but application
B needs version 2.0, then the requirements are in conflict, and installing either version 1.0
or 2.0 will leave one application unable to run.
The solution to this problem is to create a virtual environment: a self-contained directory
tree that contains a Python installation for a particular version of Python, plus a number of
additional packages.
Different applications can then use different virtual environments. To resolve the
earlier example of conflicting requirements, application A can have its own virtual
environment with version 1.0 installed while application B has another virtual environment
with version 2.0. If application B requires a library to be upgraded to version 3.0, this will not
affect application A's environment.
Running python -m venv tutorial-env will create the tutorial-env directory if it doesn't exist,
and also create directories inside it containing a copy of the Python interpreter, the standard
library, and various supporting files.
A common directory location for a virtual environment is .venv. This name keeps the
directory typically hidden in your shell and thus out of the way, while giving it a name
that explains why the directory exists. It also prevents clashing with .env environment
variable definition files that some tooling supports.
On Windows, run:
tutorial-env\Scripts\activate.bat
On Unix or MacOS, run:
source tutorial-env/bin/activate
(This script is written for the bash shell. If you use the csh or fish shells, there are alternate
activate.csh and activate.fish scripts you should use instead.)
Activating the virtual environment will change your shell's prompt to show what virtual
environment you're using, and modify the environment so that running python will get you
that particular version and installation of Python. For example:
$ source ~/envs/tutorial-env/bin/activate
(tutorial-env) $ python
>>> import sys
>>> sys.path
['', '/usr/local/lib/python37.zip', '~/envs/tutorial-env/lib/python3.5/site-packages']
C:\WINDOWS\system32>pip install matplotlib
Collecting matplotlib
Downloading https://files.pythonhosted.org/packages/df/3f/6093a23565d0f50ce433f56223fcc34af6c912cd4331dc582ba29d9b5a17/matplotlib-3.5.3-cp37-cp37m-win_amd64.whl (7.2MB)
WARNING: You are using pip version 19.2.3, however version 23.0.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command.
Collecting scipy
Downloading https://files.pythonhosted.org/packages/40/69/4af412d078cef2298f7d90546fa0e03e65a032558bd85319239c72ae0c3c/scipy-1.7.3-cp37-cp37m-win_amd64.whl (34.1MB)
|████████████████████████████████| 34.1MB 2.2MB/s
Collecting PyWavelets>=1.1.1 (from scikit-image)
Downloading https://files.pythonhosted.org/packages/7c/0c/89dea7fed4a6cd3e6f5ade93c7d645a2991a0483916f161a9d72a0d817f8/PyWavelets-1.3.0-cp37-cp37m-win_amd64.whl (4.2MB)
|████████████████████████████████| 4.2MB 3.3MB/s
Requirement already satisfied: pillow!=7.1.0,!=7.1.1,!=8.3.0,>=6.1.0 in c:\users\pavanvarmamudunuri\appdata\local\programs\python\python37\lib\site-packages (from scikit-image) (9.4.0)
Collecting networkx>=2.2 (from scikit-image)
Downloading https://files.pythonhosted.org/packages/e9/93/aa6613aa70d6eb4868e667068b5a11feca9645498fd31b954b6c4bb82fa5/networkx-2.6.3-py3-none-any.whl (1.9MB)
|████████████████████████████████| 1.9MB 2.2MB/s
Collecting imageio>=2.4.1 (from scikit-image)
Using cached https://files.pythonhosted.org/packages/dc/0b/202efcb00ba89c749bb7b22634c917f29a58bdf052dabe8041f23743974f/imageio-2.26.0-py3-none-any.whl
Requirement already satisfied: packaging>=20.0 in c:\users\pavanvarmamudunuri\appdata\local\programs\python\python37\lib\site-packages (from scikit-image) (23.0)
Collecting tifffile>=2019.7.26 (from scikit-image)
Downloading https://files.pythonhosted.org/packages/d8/38/85ae5ed77598ca90558c17a2f79ddab173b3cf8d8f545d34d9134f0d/tifffile-2021.11.2-py3-none-any.whl (178kB)
|████████████████████████████████| 184kB 1.6MB/s
Installing collected packages: scipy, PyWavelets, networkx, imageio, tifffile, scikit-image
Successfully installed PyWavelets-1.3.0 imageio-2.26.0 networkx-2.6.3 scikit-image-0.19.3 scipy-1.7.3 tifffile-2021.11.2
WARNING: You are using pip version 19.2.3, however version 23.0.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command.
Collecting scikit-learn
Downloading https://files.pythonhosted.org/packages/9d/20/0ffe8665a44bce7616bd33d4368a198fecad3b226bcafa38c63ef0f6286f/scikit_learn-1.0.2-cp37-cp37m-win_amd64.whl (7.1MB)
|████████████████████████████████| 7.1MB 2.2MB/s
Requirement already satisfied: scipy>=1.1.0 in c:\users\pavanvarmamudunuri\appdata\local\programs\python\python37\lib\site-packages (from scikit-learn) (1.7.3)
Collecting threadpoolctl>=2.0.0 (from scikit-learn)
Using cached https://files.pythonhosted.org/packages/61/cf/6e354304bcb9c6413c4e02a747b600061c21d38ba51e7e544ac7bc66aecc/threadpoolctl-3.1.0-py3-none-any.whl
Collecting joblib>=0.11 (from scikit-learn)
Using cached https://files.pythonhosted.org/packages/91/d4/3b4c8e5a30604df4c7518c562d4bf0502f2fa29221459226e140cf846512/joblib-1.2.0-py3-none-any.whl
Requirement already satisfied: numpy>=1.14.6 in c:\users\pavanvarmamudunuri\appdata\local\programs\python\python37\lib\site-packages (from scikit-learn) (1.21.6)
Collecting pandas
Using cached https://files.pythonhosted.org/packages/b2/56/f886ed6f1777ffa9d54c6e80231b69db8a1f52dcc33f5967b06a105dcfe0/pandas-1.3.5-cp37-cp37m-win_amd64.whl
Collecting pytz>=2017.3 (from pandas)
Using cached https://files.pythonhosted.org/packages/2e/09/fbd3c46dce130958ee8e0090f910f1fe39e502cc5ba0aadca1e8a2b932e5/pytz-2022.7.1-py2.py3-none-any.whl
Requirement already satisfied: python-dateutil>=2.7.3 in c:\users\pavanvarmamudunuri\appdata\local\programs\python\python37\lib\site-packages (from pandas) (2.8.2)
Requirement already satisfied: numpy>=1.17.3; platform_machine != "aarch64" and platform_machine != "arm64" and python_version < "3.10" in c:\users\pavanvarmamudunuri\appdata\local\programs\python\python37\lib\site-packages (from pandas) (1.21.6)
Requirement already satisfied: six>=1.5 in c:\users\pavanvarmamudunuri\appdata\local\programs\python\python37\lib\site-packages (from python-dateutil>=2.7.3->pandas) (1.16.0)
Installing collected packages: pytz, pandas
Successfully installed pandas-1.3.5 pytz-2022.7.1
WARNING: You are using pip version 19.2.3, however version 23.0.1 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command.
C:\WINDOWS\system32>
6. SYSTEM TESTING
6.1 SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. There are various types of testing;
each test type addresses a specific testing requirement.
o Unit Testing.
o Integration Testing.
o User Acceptance Testing.
o Output Testing.
o Validation Testing.
1. Unit Testing:
Unit testing focuses verification effort on the smallest unit of software design, that is, the
module. Unit testing exercises specific paths in a module’s control structure to ensure
complete coverage and maximum error detection. This test focuses on each module
individually, ensuring that it functions properly as a unit; hence the name unit testing.
During this testing, each module is tested individually and the module interfaces are verified
for consistency with the design specification. All important processing paths are tested for
the expected results, and all error-handling paths are also tested.
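As a sketch of a unit test in the project's language (the clamp_pixel helper is hypothetical, standing in for any small preprocessing routine), each path of a single module can be exercised in isolation:

```python
def clamp_pixel(value):
    """Hypothetical preprocessing helper: clamp an intensity to the 8-bit range 0..255."""
    return max(0, min(255, value))

# Unit tests exercise the module's control paths individually:
def test_in_range_value_is_unchanged():
    assert clamp_pixel(127) == 127

def test_out_of_range_values_are_clamped():
    assert clamp_pixel(-5) == 0       # error-handling path: below range
    assert clamp_pixel(300) == 255    # error-handling path: above range

# Run the tests directly (a framework such as pytest would discover them automatically).
test_in_range_value_is_unchanged()
test_out_of_range_values_are_clamped()
print("all unit tests passed")
```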
2. Integration Testing:
Integration testing addresses the issues associated with the dual problems of verification and
program construction. After the software has been integrated, a set of high-order tests is
conducted. The main objective of this testing process is to take unit-tested modules and
build the program structure that has been dictated by the design.
Bottom-up Integration:
This method begins construction and testing with the modules at the lowest level in the
program structure. Since the modules are integrated from the bottom up, the processing
required for modules subordinate to a given level is always available, and the need for stubs
is eliminated. The bottom-up integration strategy may be implemented with the following
steps:
o The low-level modules are combined into clusters that perform a specific software
sub-function.
o A driver (i.e., a control program for testing) is written to coordinate test-case input and
output.
o The cluster is tested.
o Drivers are removed and clusters are combined, moving upward in the program structure.
The bottom-up approach tests each module individually, and then each module is
integrated with a main module and tested for functionality.
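The steps above can be sketched as a tiny bottom-up example (both low-level modules and the driver are illustrative, not taken from the project's code):

```python
# Hypothetical low-level modules (the lowest level of the program structure).
def to_grayscale(pixel):
    """Module 1: reduce an (r, g, b) pixel to a single luminance value."""
    r, g, b = pixel
    return (r + g + b) // 3

def binarize(value, threshold=127):
    """Module 2: map a luminance value to black (0) or white (255)."""
    return 255 if value > threshold else 0

# Cluster: the two low-level modules combined into one sub-function.
def preprocess(pixel):
    return binarize(to_grayscale(pixel))

# Driver: a small control program that feeds test input to the cluster and
# checks its output; it is discarded once integration moves upward.
def driver():
    assert preprocess((255, 255, 255)) == 255   # bright pixel -> foreground
    assert preprocess((0, 0, 0)) == 0           # dark pixel -> background
    print("cluster test passed")

driver()
```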
5. Validation Checking
Validation checks are performed on the following fields.
i. Text Field:
The text field can contain only a number of characters less than or equal to its size. The
text fields are alphanumeric in some tables and alphabetic in others. An incorrect entry
always flashes an error message.
ii. Numeric Field:
The numeric field can contain only the numbers 0 to 9. An entry of any other character
flashes an error message. The individual modules are checked for accuracy and for what
they have to perform. Each module is subjected to a test run along with sample data. The
individually tested modules are then integrated into a single system. Testing involves
executing the program with real data; the existence of any program defect is inferred from
the output. The testing should be planned so that all the requirements are individually
tested. A successful test is one that brings out the defects for inappropriate data and
produces output revealing the errors in the system.
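The two field checks above can be sketched in Python (the function names and field sizes are illustrative, not part of the project's code):

```python
def validate_text_field(value, size, alphabetic_only=False):
    """Text field: at most `size` characters; alphabetic or alphanumeric."""
    if len(value) > size:
        return False
    return value.isalpha() if alphabetic_only else value.isalnum()

def validate_numeric_field(value):
    """Numeric field: only the digits 0-9 are accepted."""
    return value.isdigit()

# An invalid entry would trigger the error message described above.
print(validate_text_field("Rupee500", 10))   # alphanumeric, within size
print(validate_numeric_field("12a"))         # rejected: contains a letter
```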
It is difficult to obtain live data in sufficient amounts to conduct extensive testing. And
although realistic data will show how the system performs for the typical processing
requirement, assuming that the live data entered are in fact typical, such data generally
will not test all the combinations or formats that can enter the system. This bias toward
typical values therefore does not provide a true system test and in fact ignores the cases
most likely to cause system failure.
The package “Currency Detection For Visually Impaired People Using Image Processing”
has satisfied all the requirements specified in the software requirement specification and
was accepted.
USER TRAINING
Whenever a new system is developed, user training is required to educate users about the
working of the system so that it can be put to efficient use by those for whom it has been
primarily designed. For this purpose, the normal working of the project was demonstrated
to the prospective users. Its working is easily understandable, and since the expected users
are people who have a good knowledge of computers, the system is very easy to use.
MAINTENANCE
This covers a wide range of activities, including correcting code and design errors. To reduce
the need for maintenance in the long run, we have more accurately defined the user’s
requirements during the process of system development. Depending on the requirements, this
system has been developed to satisfy the needs to the largest possible extent. With developments
in technology, it may be possible to add many more features based on future requirements.
The coding and design are simple and easy to understand, which will make maintenance
easier.
7. SAMPLE CODE
import os
import warnings
from tkinter import Tk, Text, Label, Button, filedialog

import cv2
import matplotlib.pyplot as plt
from PIL import Image, ImageTk

warnings.filterwarnings("ignore")

inputimg = None   # path of the currently selected image

def open_img():
    x = openfilename()
    global inputimg
    inputimg = x
    print(inputimg)
    f = open("ip.txt", "w")
    f.write(os.path.basename(inputimg))
    f.close()
    img = Image.open(x)
    img = img.resize((250, 250), Image.ANTIALIAS)
    img.save('ip.png')
    img = ImageTk.PhotoImage(img)
    panel = Label(root, image=img)
    panel.image = img
    panel.grid(row=4)

def openfilename():
    filename = filedialog.askopenfilename(title='Open')
    global inputimg
    inputimg = filename
    print(inputimg)
    return filename

def pre_process():
    filename = inputimg
    print(filename)
    img = cv2.imread(filename, 0)
    ret1, th1 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    ret2, th2 = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    ret3, th3 = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    images = [img, 0, th1,
              img, 0, th2,
              blur, 0, th3]
    titles = ['Original Noisy Image', 'Histogram', 'Global Thresholding',
              'Original Noisy Image', 'Histogram', "Otsu's Thresholding",
              'Gaussian filtered Image', 'Histogram', "Otsu's Thresholding"]
    for i in range(3):
        plt.subplot(3, 3, i*3 + 1), plt.imshow(images[i*3], 'gray')
        plt.title(titles[i*3]), plt.xticks([]), plt.yticks([])
        plt.subplot(3, 3, i*3 + 2), plt.hist(images[i*3].ravel(), 256)
        plt.title(titles[i*3 + 1]), plt.xticks([]), plt.yticks([])
        plt.subplot(3, 3, i*3 + 3), plt.imshow(images[i*3 + 2], 'gray')
        plt.title(titles[i*3 + 2]), plt.xticks([]), plt.yticks([])
    plt.show()
    ret, thresh1 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    ret, thresh2 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
    ret, thresh3 = cv2.threshold(img, 127, 255, cv2.THRESH_TRUNC)
    ret, thresh4 = cv2.threshold(img, 127, 255, cv2.THRESH_TOZERO)
    ret, thresh5 = cv2.threshold(img, 127, 255, cv2.THRESH_TOZERO_INV)
    titles = ['Original Image', 'BINARY', 'BINARY_INV', 'TRUNC', 'TOZERO', 'TOZERO_INV']
    images = [img, thresh1, thresh2, thresh3, thresh4, thresh5]
    for i in range(6):
        plt.subplot(2, 3, i + 1), plt.imshow(images[i], 'gray')
        plt.title(titles[i])
        plt.xticks([]), plt.yticks([])
    plt.show()
    img = cv2.imread(filename)

def find_currency(filename):
    # The project's detect module performs the recognition when imported.
    import detect

def run():
    global inputimg
    filename = 'ip.png'
    find_currency(filename)

################# MAIN #############################
root = Tk()
root.geometry("2000x2000")
root.title("OCR")
root.resizable(width=True, height=True)
inputtxt = Text(root, height=10, width=25, bg="light yellow")
btn1 = Button(root, text='open image', command=lambda: open_img()).grid(row=1, columnspan=4)
btn2 = Button(root, text='Pre Process', command=lambda: pre_process()).grid(row=2, columnspan=14)
btn3 = Button(root, text='run', command=lambda: run()).grid(row=3, columnspan=24)
root.mainloop()
First, we open the project folder (the project is set up in a folder) and select and open the
currency recognition for the visually impaired folder.
In the above screen, after opening the GUI code file, we click Run Module (F5) to run the
Python code.
After running the code, a window opens to select a currency image file, and the selected
image is pre-processed.
In the above screen, after pre-processing, the selected image is converted from the original
image to a binary image.
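The binary conversion shown in this screen uses Otsu's method (cv2.THRESH_OTSU in the sample code), which picks the grey level that maximizes the between-class variance of the histogram. A minimal pure-Python sketch of the idea:

```python
def otsu_threshold(pixels):
    """Return the grey level that maximizes between-class variance (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_b = 0.0   # cumulative intensity sum of the background class
    w_b = 0       # background pixel count
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                # background mean
        m_f = (sum_all - sum_b) / w_f    # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal image: dark background (grey 10) and a bright note (grey 200).
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
binary = [255 if p > t else 0 for p in pixels]   # same effect as cv2.THRESH_OTSU
```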
In the above screen, we click Run; it opens a new window, and on closing it the system
announces the selected currency as voice output.
8. CONCLUSION:
A portable and easily accessible system for image recognition can be implemented using
camera technology and the theory of digital image processing. One of the main constraints
of the system developed in this project is that the background of the image containing the
object to identify (i.e., the currency) must contrast with that object. Another constraint is
that the illumination conditions over the image must be uniform. This system solves a
day-to-day problem in the life of visually impaired people. Future work will include
modifications of the technique and the merging of other image processing techniques, such
as neural network training using edge detection, which would free the process from its
dependency on standard light intensity and a standard distance between image and camera
during image acquisition, adding to the accuracy of the process.
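As a sketch of the edge-detection direction suggested above (illustrative only, not part of the implemented system), the Sobel operator can be written in a few lines of pure Python:

```python
def sobel_edges(img):
    """img: 2-D list of grey levels; returns gradient magnitudes (borders left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)   # cheap magnitude approximation
    return out

# A vertical edge between a dark region and a bright region:
img = [[0, 0, 255, 255]] * 4
edges = sobel_edges(img)   # interior cells on the boundary get large values
```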
9. FUTURE ENHANCEMENT:
The use of currency detection through image processing technology can greatly enhance
accessibility for visually impaired individuals, allowing them to confidently and independently
manage their finances. As image processing technology advances, the accuracy of currency
detection will continue to improve, ensuring that visually impaired individuals are able to
identify currency denominations with greater precision. With the increasing ubiquity of
smartphones, the integration of currency detection technology with mobile devices could
provide a convenient and portable solution for visually impaired individuals. Currency
detection technology could have a significant impact on visually impaired individuals across
the globe, providing a practical and effective solution to a widespread problem.
10. BIBLIOGRAPHY
REFERENCES:
[1] Blindness and vision impairment, Available Online: Blindness and vision
impairment (who.int) (Accessed on 25 April 2022)
[2] Estimation of blindness in India from 2000 through 2020: implications for the
blindness control policy - PubMed (nih.gov) (Accessed on 25 April 2022)
[3] M. Jiao, J. He and B. Zhang, "Folding Paper Currency Recognition and Research Based
on Convolution Neural Network," 2018 International Conference on Advances in Computing,
Communications and Informatics (ICACCI), Bangalore, 2018, pp. 18-23.
[4] F. M. Hasanuzzaman, X. Yang and Y. Tian, "Robust and Effective Component-Based
Banknote Recognition for the Blind," in IEEE Transactions on Systems, Man, and
Cybernetics, Part C (Applications and Reviews), vol. 42, no. 6, pp. 1021-1030, Nov. 2012.
[5] Rakesh Chandra Joshi, Saumya Yadav and Malay Kishore Dutta, "YOLO-v3 Based
Currency Detection and Recognition System for Visually Impaired Persons," 2020
International Conference on Contemporary Computing and Applications (IC3A), 2020.
[6] L. Tang, Y. Jin and M. Du, "A Hierarchical Approach for Banknote Image Processing
Using Homogeneity and FFD Model," in IEEE Signal Processing Letters, vol. 15,
pp. 425-428, 2008.
[7] P. Priami, M. Gori and Frosini, "A neural network-based model for paper
currency recognition and verification," in IEEE Transactions on Neural Networks,
vol. 7, no. 6, pp. 1482-1490, Nov. 1996.
[8] H. Vo and Hoang, "Hybrid discriminative models for banknote recognition and
anti-counterfeit," 2018 5th NAFOSTED Conference on Information and Computer
Science (NICS), Ho Chi Minh City, 2018, pp. 394-399.