Traffic Sign Board


1. TITLE JUSTIFICATION

Our system helps in recognizing traffic signs and sending a voice alert through the speaker to the driver so that he/she can take the necessary decisions. The proposed system is trained using a Convolutional Neural Network (CNN), which recognizes and classifies traffic sign images accurately.
With traffic sign recognition and a voice alert system, we can thus reduce accidents in India.

2. ABSTRACT

In India, road traffic constitutes a major problem. As road traffic is increasing day by day due to the growth in population and vehicle ownership, there is a need to follow the traffic rules with proper discipline, otherwise the number of accidents will increase. To ensure a smooth and secure flow of traffic, road signs are essential. A major cause of road accidents is negligence in viewing traffic signboards and interpreting them incorrectly. The proposed system helps in recognizing the traffic sign and sending a voice alert through the speaker to the driver so that he/she can take the necessary decisions. The proposed system is trained using a Convolutional Neural Network (CNN), which performs traffic sign image recognition and classification. A set of classes is defined and trained on a particular dataset to make it more accurate. The Traffic Sign Board Detection and Voice Alert System provides comprehensive assistance to the driver in following the traffic signs, which helps to control vehicle speed and reduce the rate of accidents. Following the detection of a sign by the system, a voice alert is sent through the speaker which notifies the driver. The proposed system also contains a section where the vehicle driver is alerted about traffic signs in near proximity, which helps them to be aware of which rules to follow on the route. The aim of this system is to ensure the safety of the vehicle's driver, passengers, and pedestrians. The main goals of this project are detection and recognition of traffic signs and giving a voice alert to the driver. Speed is controlled automatically according to the signboard.

3. INTRODUCTION

Millions of people are injured annually in vehicle accidents. Most traffic accidents are the result of carelessness, ignorance of the rules and neglect of traffic signboards, both at the individual level by the drivers and by society at large. The magnitude of road accidents in India is alarming. This is evident from the fact that every hour there are about 56 accidents taking place; similarly, every hour more than 14 deaths occur due to road accidents. When someone neglects to obey traffic signs, they are putting themselves at risk as well as other drivers, their passengers and pedestrians. All the signs and signals help keep order in traffic and are designed to reduce the number and severity of traffic accidents. Yet some drivers believe that some traffic signs are simply not necessary.

All road signs are placed in specific areas to ensure the safety of all drivers. These markers let drivers know how fast to drive. They help to create order on the roadways and are employed to provide essential information to drivers. Traffic signs carry much useful environmental information which can help drivers learn about changes in the road ahead and the driving requirements. Signs which are taken out of specific places or are not visible as a result of wear and tear can pose undesirable risks to drivers. They also tell drivers when and where to turn or not to turn. In order to be a good driver, you need to have an understanding of what the signs mean. Road signs are designed to make sure that every driver is kept safe.

A system able to detect, recognize and interpret road traffic signs would be a prodigious help to the driver. The objective of an automatic road sign recognition system is to detect and classify one or more road signs within live color images captured by a camera.

There have been many technological advancements, and cars with an auto-pilot mode have come up. Autonomous vehicles have come into existence, and there has been a boom in the self-driving car industry. However, these features are available only in some high-end cars which are not affordable for the masses. We wanted to devise a system which helps in easing the job of driving to some extent. On conducting a survey, we found that the magnitude of road accidents in India is alarming. Reports suggest that every hour there are about 53 mishaps taking place on the roads. Moreover, every hour more than 16 deaths occur due to these mishaps. When someone neglects to obey traffic signs while driving, they put their life as well as the lives of other drivers, their passengers and those on the road at risk. Hence, we came up with this system in which traffic signs are automatically detected using the live video stream and are read out aloud to the driver, who may then take the required decision. Another area of focus in our system is the idea of getting the location of the user using GPS. All the traffic signs will also be stored in a database along with their location so that the driver is notified in advance of the next approaching traffic sign.

In this project we provide alertness to the driver about the presence of a traffic signboard a particular distance ahead. The system provides the driver with real-time information from road signs, which is one of the most important and challenging tasks. It then generates an acoustic warning to the driver in advance of any danger. This warning allows the driver to take appropriate corrective decisions in order to mitigate or completely avoid the event. However, sometimes, due to changing weather conditions or viewing angles, traffic signs are difficult to see until it is too late. The first stage is to select the hardware equipment to address this problem. The second stage is based on color processing, or an object detection method based on rapid color changes. Image processing technology is mostly used for the identification of the signboards. The alert to the driver is given as audio output. If the driver does not follow the alert, the automatic braking system is activated and the speed of the vehicle is regulated based on the signboard.

General Background

In recent years, traffic sign recognition (TSR) has become a core technology of safety and traffic applications. Simply stated, traffic sign recognition identifies traffic signs and provides ease for drivers, and thus a safe journey. Due to different weather conditions the traffic signboard may be distorted, faded, tarred or bent, which leads to misinterpretation of the traffic sign. Most previous works have in some way restricted their working conditions, such as limiting the number of images used, or using complicated methods of classifying the dataset, which reduces accuracy and results in overfitting. Hence, they had to use other methods to rectify this defect. In traffic environments, signs regulate traffic, warn the driver, and command or prohibit certain actions.

Real-time and robust automatic traffic sign recognition can support and disburden the driver, and thus significantly increase driving safety and comfort. For instance, it can remind the driver of the current speed limit and prevent him from performing inappropriate actions such as entering a one-way street, passing another car in a no-passing zone, unwanted speeding, etc. The aim of this project is to lessen many of these restrictions. Identification of traffic signs is a demanding function for safe driving, for the driver as well as for the vehicles following. One can recognize a traffic sign by its shape, color and orientation. We can use the various features of the image dataset for classification.

OBJECTIVE

Traffic sign recognition is a technology which identifies traffic signs from a fair distance. In this contribution, we describe a real-time system for vision-based traffic sign detection and recognition. We focus on an important and practically relevant subset of (Indian) traffic signs, namely speed signs and no-passing signs, and their corresponding end signs. The problem of traffic sign recognition has some beneficial characteristics. First, the design of traffic signs is unique; thus, object variations are small. Further, sign colors often contrast very well against the environment. Moreover, signs are rigidly positioned relative to the environment (contrary to vehicles), and are often set up in clear sight of the driver.

Nevertheless, a number of challenges remain for successful recognition. First, weather and lighting conditions vary significantly in traffic environments, diminishing the advantage of the object uniqueness claimed above. Additionally, as the camera is moving, additional image distortions, such as motion blur and abrupt contrast changes, occur frequently. Further, the sign installation and surface material can physically change over time, influenced by accidents and weather, resulting in rotated signs and degraded colors.

SCHEME

CNN is one of the neural network models for deep learning, which is characterized by three specific properties, namely locally connected neurons, shared weights and spatial or temporal sub-sampling. Generally, a CNN can be considered to be made up of two main parts. The first contains alternating convolutional and max pooling layers. The input of each layer is just the output of its previous layer. As a result, this forms a hierarchical feature extractor that maps the original input images into feature vectors. The extracted feature vectors are then classified by the second part, that is, the fully-connected layers, which form a typical feed-forward neural network.

4. EXISTING SYSTEM AND ITS LIMITATIONS

In this era of fast-paced life, people generally tend to miss out on recognizing traffic signs and hence break the rules. A lot of research has been done in this domain in order to reduce the number of accidents. Researchers have used a variety of classification algorithms. The detection of traffic signs has been done with a variety of techniques in numerous studies. One of the approaches employs the Support Vector Machine technique. The dataset was divided 90/10 for training and testing purposes, and it employs linear classification. To achieve the desired result, a series of phases called Color Segmentation, Shape Classification, and Recognition were followed.

Raspberry Pi has been used for detecting and recognizing traffic signs with much less coding. However, it requires the Raspberry Pi board at one's disposal for implementation, which is quite costly. Another way of traffic sign recognition is picture-intensive. A video is acquired and broken down into frames. Image preprocessing is done, which includes separating the foreground and the background, thinning and contrast enhancement. The signs are then categorized as hexagonal, triangular, or circular in shape and transmitted for template matching after these operations. Objects with a definite shape are matched by the pre-trained algorithm.

This model describes an existing method designed using the ANN algorithm from deep learning. Here the detection of the traffic signs is performed using the ANN method. As traffic signs are a well-researched problem in computer vision, the existing method was evaluated on different datasets for detecting traffic signs.

Limitations of Existing System:


 Low Accuracy
 High complexity
 Highly inefficient

5. PROPOSED SYSTEM AND ITS ADVANTAGES

Very often we see that many road accidents take place. This can be due to the driver's ignorance of traffic signals and road signs. As road traffic is increasing day by day, there is a necessity to follow the traffic rules with proper discipline. Traffic signboard detection is an important part of driver assistance systems. The basic idea of the proposed system is to provide alertness to the driver about the presence of a traffic signboard a particular distance ahead. The system provides the driver with real-time information from road signs, which is one of the most important and challenging tasks. It generates an acoustic warning to the driver in advance of any danger. This warning allows the driver to take appropriate actions in order to avoid the accident. Image processing technology is mostly used for the identification of the signboards. The alert to the driver is given as an audio output. In our proposed method we perform Traffic Sign Board Recognition and Voice Alert using a Convolutional Neural Network (CNN).

Advantages of Proposed System:


 High accuracy
 Low complexity
 High efficiency
 Accurate detection

6. HARDWARE AND SOFTWARE REQUIREMENTS

6.1 INTRODUCTION:
HTML:
HTML stands for Hyper Text Markup Language, which is the most widely used language on the Web to develop web pages. HTML was created by Tim Berners-Lee in late 1991, but "HTML 2.0" was the first standard HTML specification, published in 1995. HTML 4.01 was a major version of HTML and was published in late 1999. Though HTML 4.01 is still widely used, we currently have HTML5, which is an extension of HTML 4.01 and was published in 2012.

HTML stands for Hypertext Markup Language, and it is the most widely used language to write web pages.
 Hypertext refers to the way in which web pages (HTML documents) are linked together. Thus, the link available on a webpage is called hypertext.
 As its name suggests, HTML is a Markup Language, which means you use HTML to simply "mark up" a text document with tags that tell a web browser how to structure it for display.

Originally, HTML was developed with the intent of defining the structure of
documents like headings, paragraphs, lists, and so forth to facilitate the sharing of
scientific information between researchers. Now, HTML is being widely used to
format web pages with the help of different tags available in HTML language.

HTML is a must for students and working professionals who want to become great software engineers, especially when they are working in the web development domain.
 Create Web site - You can create a website or customize an existing web
template if you know HTML well.
 Become a web designer - If you want to start a career as a professional web
designer, HTML and CSS designing is a must skill.

 Understand web - If you want to optimize your website, to boost its speed
and performance, it is good to know HTML to yield best results.
 Learn other languages - Once you understand the basics of HTML, other related technologies like JavaScript, PHP, or Angular become easier to understand.

Applications of HTML:
 Web pages development - HTML is used to create pages which are rendered over the web. Almost every page on the web has HTML tags in it to render its details in the browser.
 Internet Navigation - HTML provides tags which are used to navigate from one page to another and is heavily used in internet navigation.
 Responsive UI - HTML pages nowadays work well on all platforms (mobile, tablet, desktop or laptop) owing to responsive design strategies.
 Offline support - HTML pages, once loaded, can be made available offline on the machine without any need for the internet.
 Game development - HTML5 has native support for a rich experience and is now useful in the game development arena as well.

CSS:

Cascading Style Sheets, fondly referred to as CSS, is a simple design language intended to simplify the process of making web pages presentable.

CSS handles the look and feel part of a web page. Using CSS, you can control the
color of the text, the style of fonts, the spacing between paragraphs, how columns
are sized and laid out, what background images or colors are used, layout designs,
and variations in display for different devices and screen sizes as well as a variety of
other effects.

CSS is easy to learn and understand but it provides powerful control over the
presentation of an HTML document. Most commonly, CSS is combined with the
markup languages HTML or XHTML.

Advantages of CSS
 CSS saves time − you can write CSS once and then reuse same sheet in
multiple HTML pages. You can define a style for each HTML element and
apply it to as many Web pages as you want.
 Pages load faster − If you are using CSS, you do not need to write HTML
tag attributes every time. Just write one CSS rule of a tag and apply it to all
the occurrences of that tag. So less code means faster download times.
 Easy maintenance − To make a global change, simply change the style, and
all elements in all the web pages will be updated automatically.
 Superior styles to HTML − CSS has a much wider array of attributes than
HTML, so you can give a far better look to your HTML page in comparison
to HTML attributes.
 Multiple Device Compatibility − Style sheets allow content to be optimized
for more than one type of device. By using the same HTML document,
different versions of a website can be presented for handheld devices such as
PDAs and cell phones or for printing.
 Global web standards − Now HTML attributes are being deprecated and it is recommended to use CSS. So it's a good idea to start using CSS in all the HTML pages to make them compatible with future browsers.


JAVASCRIPT:
JavaScript is a lightweight, interpreted programming language. It is designed for creating network-centric applications. It is complementary to and integrated with Java. JavaScript is very easy to implement because it is integrated with HTML. It is open and cross-platform.

JavaScript is a dynamic computer programming language. It is lightweight and most commonly used as a part of web pages, whose implementations allow client-side script to interact with the user and make dynamic pages. It is an interpreted programming language with object-oriented capabilities.

JavaScript was first known as LiveScript, but Netscape changed its name to JavaScript, possibly because of the excitement being generated by Java. JavaScript made its first appearance in Netscape 2.0 in 1995 under the name LiveScript. The general-purpose core of the language has been embedded in Netscape, Internet Explorer, and other web browsers.

 JavaScript is a lightweight, interpreted programming language.
 Designed for creating network-centric applications.
 Complementary to and integrated with Java.
 Complementary to and integrated with HTML.
 Open and cross-platform.
 JavaScript is the most popular programming language in the world and that makes it a programmer's great choice. Once you have learnt JavaScript, it helps you develop great front-end as well as back-end software using different JavaScript-based frameworks like jQuery, Node.js etc.

 JavaScript is everywhere, it comes installed on every modern web browser
and so to learn JavaScript you really do not need any special environment
setup. For example Chrome, Mozilla Firefox, Safari and every browser you
know as of today, supports JavaScript.
 JavaScript helps you create really beautiful and crazy fast websites. You can
develop your website with a console like look and feel and give your users
the best Graphical User Experience.
 JavaScript usage has now extended to mobile app development, desktop app
development, and game development. This opens many opportunities for you
as JavaScript Programmer.
 Due to high demand, there is tons of job growth and high pay for those who
know JavaScript. You can navigate over to different job sites to see what
having JavaScript skills look like in the job market.
 Great thing about JavaScript is that you will find tons of frameworks and
Libraries already developed which can be used directly in your software
development to reduce your time to market.

Client-Side JavaScript
Client-side JavaScript is the most common form of the language. The script should
be included in or referenced by an HTML document for the code to be interpreted
by the browser.
It means that a web page need not be a static HTML, but can include programs that
interact with the user, control the browser, and dynamically create HTML content.

The JavaScript client-side mechanism provides many advantages over traditional CGI server-side scripts. For example, you might use JavaScript to check if the user has entered a valid e-mail address in a form field.

The JavaScript code is executed when the user submits the form, and only if all the
entries are valid, they would be submitted to the Web Server.

JavaScript can be used to trap user-initiated events such as button clicks, link
navigation, and other actions that the user initiates explicitly or implicitly.

Advantages of JavaScript
The merits of using JavaScript are −
 Less server interaction − You can validate user input before sending the page off to the server. This saves server traffic, which means less load on your server.
 Immediate feedback to the visitors − they don't have to wait for a page
reload to see if they have forgotten to enter something.
 Increased interactivity − you can create interfaces that react when the user
hovers over them with a mouse or activates them via the keyboard.
 Richer interfaces − you can use JavaScript to include such items as drag-
and-drop components and sliders to give a Rich Interface to your site visitors.

Limitations of JavaScript
We cannot treat JavaScript as a full-fledged programming language. It lacks the
following important features −
 Client-side JavaScript does not allow the reading or writing of files. This has been kept for security reasons.
 JavaScript cannot be used for networking applications because there is no
such support available.
 JavaScript doesn't have any multi-threading or multiprocessor capabilities.

Once again, JavaScript is a lightweight, interpreted programming language that allows you to build interactivity into otherwise static HTML pages.

JavaScript Development Tools


One of the major strengths of JavaScript is that it does not require expensive development tools. You can start with a simple text editor such as Notepad. Since it is an interpreted language inside the context of a web browser, you don't even need to buy a compiler.

To make our life simpler, various vendors have come up with very nice JavaScript
editing tools. Some of them are listed here −

 Microsoft FrontPage − Microsoft has developed a popular HTML editor
called FrontPage. FrontPage also provides web developers with a number of
JavaScript tools to assist in the creation of interactive websites.
 Macromedia Dreamweaver MX − Macromedia Dreamweaver MX is a very
popular HTML and JavaScript editor in the professional web development
crowd. It provides several handy prebuilt JavaScript components, integrates
well with databases, and conforms to new standards such as XHTML and
XML.
 Macromedia Home Site 5 − Home Site 5 is a well-liked HTML and
JavaScript editor from Macromedia that can be used to manage personal
websites effectively.

Applications of JavaScript Programming:

As mentioned before, JavaScript is one of the most widely used programming languages (front-end as well as back-end). It has its presence in almost every area of software development. I'm going to list a few of them here:
 Client side validation - This is really important to verify any user input
before submitting it to the server and JavaScript plays an important role in
validating those inputs at front-end itself.
 Manipulating HTML Pages - JavaScript helps in manipulating an HTML page on the fly. This helps in adding and deleting any HTML tag very easily using JavaScript and modifying your HTML to change its look and feel based on different devices and requirements.
 User Notifications - You can use JavaScript to raise dynamic pop-ups on the
webpages to give different types of notifications to your website visitors.
 Back-end Data Loading - JavaScript provides Ajax library which helps in
loading back-end data while you are doing some other processing. This really
gives an amazing experience to your website visitors.
 Presentations - JavaScript also provides the facility of creating presentations which give a website look and feel. JavaScript provides the RevealJS and BespokeJS libraries to build web-based slide presentations.
 Server Applications - Node.js is built on Chrome's JavaScript runtime for building fast and scalable network applications. This is an event-based library which helps in developing very sophisticated server applications, including web servers.

Python:
Python is a high-level, interpreted, interactive and object-oriented scripting language. Python is designed to be highly readable. It uses English keywords frequently whereas other languages use punctuation, and it has fewer syntactical constructions than other languages.

 Python is Interpreted − Python is processed at runtime by the interpreter.
You do not need to compile your program before executing it. This is similar
to PERL and PHP.
 Python is Interactive − you can actually sit at a Python prompt and interact
with the interpreter directly to write your programs.
 Python is Object-Oriented − Python supports Object-Oriented style or
technique of programming that encapsulates code within objects.
 Python is a Beginner's Language − Python is a great language for the
beginner-level programmers and supports the development of a wide range of
applications from simple text processing to WWW browsers to games.

History of Python
Python was developed by Guido van Rossum in the late eighties and early nineties at
the National Research Institute for Mathematics and Computer Science in the
Netherlands.

Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68, Smalltalk, and Unix shell and other scripting languages.

Python is copyrighted. Like Perl, Python source code is now available under the
GNU General Public License (GPL).

Python is now maintained by a core development team at the institute, although
Guido van Rossum still holds a vital role in directing its progress.

Python Features
Python's features include −
 Easy-to-learn − Python has few keywords, simple structure, and a clearly
defined syntax. This allows the student to pick up the language quickly.
 Easy-to-read − Python code is more clearly defined and visible to the eyes.
 Easy-to-maintain − Python's source code is fairly easy-to-maintain.
 A broad standard library − Python's bulk of the library is very portable and
cross-platform compatible on UNIX, Windows, and Macintosh.
 Interactive Mode − Python has support for an interactive mode which allows
interactive testing and debugging of snippets of code.
 Portable − Python can run on a wide variety of hardware platforms and has
the same interface on all platforms.
 Extendable − you can add low-level modules to the Python interpreter.
These modules enable programmers to add to or customize their tools to be
more efficient.
 Databases − Python provides interfaces to all major commercial databases.
 GUI Programming − Python supports GUI applications that can be created
and ported to many system calls, libraries and windows systems, such as
Windows MFC, Macintosh, and the X Window system of Unix.
 Scalable − Python provides a better structure and support for large programs
than shell scripting.

Apart from the above-mentioned features, Python has a big list of good features, few
are listed below −
 It supports functional and structured programming methods as well as OOP.
 It can be used as a scripting language or can be compiled to byte-code for
building large applications.
 It provides very high-level dynamic data types and supports dynamic type
checking.
 It supports automatic garbage collection.
 It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java.

Python is a must for students and working professionals who want to become great software engineers, especially when they are working in the web development domain.



PyCharm:
PyCharm is the most popular IDE used for the Python scripting language. This chapter gives you an introduction to PyCharm and explains its features. PyCharm offers some of the best features to its users and developers in the following aspects −
 Code completion and inspection
 Advanced debugging
 Support for web programming and frameworks such as Django and Flask

Features of PyCharm:
Besides, a developer will find PyCharm comfortable to work with because of the
features mentioned below −
Code Completion

PyCharm enables smoother code completion, whether it is for a built-in or an external package.
SQLAlchemy as Debugger

You can set a breakpoint, pause in the debugger and can see the SQL representation
of the user expression for SQL Language code.

Git Visualization in Editor

When coding in Python, queries are normal for a developer. You can check the last
commit easily in PyCharm as it has the blue sections that can define the difference
between the last commit and the current one.
Code Coverage in Editor

You can run .py files outside the PyCharm editor as well, marking them with code coverage details elsewhere in the project tree, in the summary section, etc.
Package Management

All the installed packages are displayed with proper visual representation. This
includes list of installed packages and the ability to search and add new packages.
Local History
Local History always keeps track of changes in a way that complements Git. Local history in PyCharm gives complete details of what needs to be rolled back and what is to be added.
Refactoring
Refactoring is the process of renaming one or more files at a time and PyCharm
includes various shortcuts for a smooth refactoring process.

Anaconda
Anaconda is a free and open-source distribution of the Python and R programming languages for scientific computing (data science, machine learning applications, large-scale data processing, predictive analytics, etc.) that aims to simplify package management and deployment. Package versions are managed by the package management system conda. The Anaconda distribution is used by over 6 million users and includes more than 1,400 popular data-science packages suitable for Windows, Linux, and macOS.

The Anaconda distribution comes with more than 1,400 packages as well as the conda package and virtual environment manager and Anaconda Navigator, a desktop GUI, so it eliminates the need to learn to install each library independently. The open source packages can be individually installed from the Anaconda repository with the conda install command or using the pip install command that is installed with Anaconda. Pip packages provide many of the features of conda packages and in most cases they can work together. Custom packages can be made using the conda build command, and can be shared with others by uploading them to Anaconda Cloud, PyPI or other repositories.

The default installation of Anaconda2 includes Python 2.7 and Anaconda3 includes
Python 3.7. However, you can create new environments that include any version of
Python packaged with conda.

Anaconda Navigator

Anaconda Navigator is a desktop graphical user interface (GUI) included in the Anaconda distribution that allows users to launch applications and manage conda packages, environments and channels without using command-line commands. Navigator can search for packages on Anaconda Cloud or in a local Anaconda Repository, install them in an environment, run the packages and update them. It is available for Windows, macOS and Linux.

Conda:
Conda is an open source, cross-platform, language-agnostic package manager and environment management system that installs, runs, and updates packages and their dependencies. It was created for Python programs, but it can package and distribute software for any language (e.g., R), including multi-language projects. The conda package and environment manager is included in all versions of Anaconda, Miniconda, and Anaconda Repository.

Anaconda Cloud:
Anaconda Cloud is a package management service by Anaconda where you can find, access, store and share public and private notebooks, environments, and conda and PyPI packages. Cloud hosts useful Python packages, notebooks and environments for a wide variety of applications. You do not need to log in or have a Cloud account to search for public packages, or to download and install them.
1. Jupyter Notebook
The notebook extends the console-based approach to interactive computing in a
qualitatively new direction, providing a web-based application suitable for
capturing the whole computation process: developing, documenting, and
executing code, as well as communicating the results.
The Jupyter notebook combines two components:

A web application: a browser-based tool for interactive authoring of documents which combine explanatory text, mathematics, computations and their rich media output.

Notebook documents: a representation of all content visible in the web application, including inputs and outputs of the computations, explanatory text, mathematics, images, and rich media representations of objects.

6.2 ANALYSIS MODEL

Introduction to machine learning:


Machine learning (ML) is the scientific study of algorithms and statistical models
that computer systems use to perform a specific task without using explicit
instructions, relying on patterns and inference instead. It is seen as a subset of
artificial intelligence.

Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task.

Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics.

Difference between Traditional Programming & Machine Learning

Machine learning tasks are classified into several broad categories. In supervised
learning, the algorithm builds a mathematical model from a set of data that contains
both the inputs and the desired outputs. For example, if the task were determining
whether an image contained a certain object, the training data for a supervised
learning algorithm would include images with and without that object (the input), and each image would have a label (the output) designating whether it contained the
object.

Machine learning methods:


Machine learning can be grouped into two broad learning tasks:

Supervised and Unsupervised.

Supervised Learning:
The majority of practical machine learning uses supervised learning. The goal is to approximate the mapping function so well that when you have new input data (x), you can predict the output variables (Y) for that data. It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. We know the correct answers; the algorithm iteratively makes predictions on the training data and is corrected by the teacher. Learning stops when the algorithm achieves an acceptable level of performance.

Supervised learning problems can be further grouped into regression and classification problems.

Classification: A classification problem is when the output variable is a category, such as "red" or "blue" or "disease" and "no disease".
Regression: A regression problem is when the output variable is a real value, such as "dollars" or "weight".
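To make the distinction concrete, the following is a minimal, illustrative supervised-classification sketch using scikit-learn (one of the Python libraries listed later in this report). The toy dataset and the decision-tree classifier are arbitrary choices for demonstration, not part of the proposed system.

# Minimal supervised-classification sketch (illustrative choices throughout).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                      # inputs (x) and known outputs (Y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = DecisionTreeClassifier().fit(X_train, y_train)   # the "teacher-corrected" learning step
print(accuracy_score(y_test, clf.predict(X_test)))     # performance on unseen data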

(a) Supervised Learning Algorithms

Unsupervised Learning:

Unsupervised learning is where you only have input data (X) and no corresponding
output variables.

The goal for unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data.

It is called unsupervised learning because, unlike supervised learning above, there are no correct answers and there is no teacher. Algorithms are left to their own devices to discover and present the interesting structure in the data.

Unsupervised learning problems can be further grouped into clustering and association problems.

Clustering: A clustering problem is where you want to discover the inherent groupings in the data, such as grouping customers by purchasing behavior.
Association: An association rule learning problem is where you want to discover rules that describe large portions of your data, such as people that buy X also tend to buy Y.
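As a small illustration of the clustering case, the sketch below groups made-up customer records with k-means from scikit-learn; both the data and the choice of algorithm are assumptions made only for demonstration.

# Unsupervised-learning sketch: clustering customers by purchasing behavior.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [number of purchases, average basket value] (fabricated example data).
X = np.array([[2, 10], [3, 12], [25, 300], [30, 280], [1, 8], [28, 310]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # cluster assignment for each customer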

(b) Unsupervised Learning Algorithms

Machine learning applications
Machine learning is a buzzword for today's technology, and it is growing very
rapidly day by day. We are using machine learning in our daily life even without
knowing it such as Google Maps, Google assistant, Alexa, etc. Below are some
most trending real-world applications of Machine Learning:

1. Image recognition:

Image recognition is one of the most common applications of machine learning. It is used to identify objects, persons, places, digital images, etc. A popular use case of image recognition and face detection is automatic friend tagging suggestion.

Facebook provides us with a feature of automatic friend tagging suggestion. Whenever we upload a photo with our Facebook friends, we automatically get a tagging suggestion with a name, and the technology behind this is machine learning's face detection and recognition algorithm.

2. Speech recognition:

While using Google, we get an option of "Search by voice"; it comes under speech recognition, and it is a popular application of machine learning.

Speech recognition is the process of converting voice instructions into text, and it is also known as "speech to text" or "computer speech recognition". At present, machine learning algorithms are widely used by various applications of speech recognition. Google Assistant, Siri, Cortana, and Alexa use speech recognition technology to follow voice instructions.
3. Traffic prediction

If we want to visit a new place, we take help of Google Maps, which shows us
the correct path with the shortest route and predicts the traffic conditions. It
predicts the traffic conditions such as whether traffic is cleared, slow-moving, or
heavily congested with the help of two ways:
1. Real-time location of the vehicle from the Google Maps app and sensors
2. Average time taken on past days at the same time.

4. Product recommendations

Machine learning is widely used by various e-commerce and entertainment companies such as Amazon, Netflix, etc., for product recommendations to the user. Whenever we search for some product on Amazon, we start getting advertisements for the same product while surfing the internet on the same browser, and this is because of machine learning.

5. Self-driving cars
One of the most exciting applications of machine learning is self-driving cars.
Machine learning plays a significant role in self-driving cars. Tesla, the most popular car manufacturing company, is working on self-driving cars. It uses an unsupervised learning method to train the car models to detect people and objects while driving.

6. Email spam and malware filtering

Whenever we receive a new email, it is filtered automatically as important, normal, or spam. We always receive important mail in our inbox with the important symbol and spam emails in our spam box, and the technology behind this is machine learning. Below are some spam filters used by Gmail:
1. Content Filter
2. Header filter
3. General blacklists filter
4. Rules-based filter

PYTHON LIBRARIES:

1. Numpy

2. Pandas

3. Matplotlib

4. Scikit–learn

1. Numpy:
Numpy is a general-purpose array-processing package. It provides a high-performance multidimensional array object and tools for working with these arrays. It is the fundamental package for scientific computing with Python. It contains various features, including these important ones:

A powerful N-dimensional array object
Sophisticated (broadcasting) functions
Tools for integrating C/C++ and Fortran code
Useful linear algebra, Fourier transform, and random number capabilities

Besides its obvious scientific uses, Numpy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data types can be defined using Numpy, which allows Numpy to seamlessly and speedily integrate with a wide variety of databases.
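A few lines, shown below as a rough illustration, demonstrate the N-dimensional array object, broadcasting and a basic linear-algebra routine described above.

# Small NumPy illustration: arrays, broadcasting and linear algebra.
import numpy as np

a = np.arange(6).reshape(2, 3)      # 2x3 array: [[0 1 2], [3 4 5]]
b = a * 10                          # broadcasting: every element is scaled
print(b.mean(axis=0))               # column-wise mean
print(np.linalg.norm(a))            # a basic linear-algebra capability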

2. Pandas
Pandas is an open-source Python library providing high-performance data manipulation and analysis tools using its powerful data structures. Python was majorly used for data munging and preparation; it had very little contribution towards data analysis. Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of fields, including academic and commercial domains such as finance, economics, statistics, analytics, etc.
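The short sketch below illustrates the load-prepare-analyze workflow with Pandas; the tiny inline data frame stands in for a real CSV file and is purely illustrative.

# Small Pandas illustration of data preparation and analysis.
import pandas as pd

df = pd.DataFrame({"sign": ["Stop", "Yield", "Stop"], "width": [48, 52, 47]})
print(df.groupby("sign")["width"].mean())   # simple aggregation step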

3. Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, web application servers, and graphical user interface toolkits. Matplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see the sample plots and thumbnail gallery.

For simple plotting the pyplot module provides a MATLAB-like interface, particularly when combined with IPython. For the power user, you have full control of line styles, font properties, axes properties, etc., via an object-oriented interface or via a set of functions familiar to MATLAB users.
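As a quick illustration of the "few lines of code" claim above, the sketch below produces a simple labelled plot; the data is arbitrary.

# Small Matplotlib illustration: one plot in a few lines.
import matplotlib.pyplot as plt

x = range(10)
plt.plot(x, [v ** 2 for v in x], label="y = x^2")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.savefig("example_plot.png")   # or plt.show() in an interactive session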

4. Scikit–learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. It is licensed under a permissive simplified BSD license and is distributed under many Linux distributions, encouraging academic and commercial use. The library is built upon SciPy (Scientific Python), which must be installed before you can use scikit-learn. This stack includes:

NumPy: Base n-dimensional array package
SciPy: Fundamental library for scientific computing
Matplotlib: Comprehensive 2D/3D plotting
IPython: Enhanced interactive console
Sympy: Symbolic mathematics
Pandas: Data structures and analysis

Extensions or modules for SciPy are conventionally named SciKits; as such, the module that provides learning algorithms is named scikit-learn.

6.3 STUDY OF SYSTEM:

Feasibility study
Preliminary investigation examines project feasibility, the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational and economic feasibility of adding new modules and debugging the old running system. Any system is feasible if there are unlimited resources and infinite time. There are three aspects in the feasibility study portion of the preliminary investigation:

1. ECONOMIC FEASIBILITY
2. OPERATIONAL FEASIBILITY
3. TECHNICAL FEASIBILITY
1. Economic feasibility
A system that can be developed technically, and that will be used if installed, must still be a good investment for the organization. In the economic feasibility, the development cost of creating the system is evaluated against the ultimate benefit derived from the new system. Financial benefits must equal or exceed the costs.
The system is economically feasible. It does not require any additional hardware or software. Since the interface for this system is developed using the existing resources and technologies available at NIC, there is nominal expenditure and economic feasibility is certain.

2. Operational feasibility
Proposed projects are beneficial only if they can be turned into information systems that meet the organization's operating requirements. Operational feasibility aspects of the project are to be taken as an important part of the project implementation. Some of the important issues raised to test the operational feasibility of a project include the following:

a. Is there sufficient support for the management from the users?
b. Will the system be used and work properly if it is developed and implemented?
c. Will there be any resistance from the users that will undermine the possible application benefits?

This system is targeted to be in accordance with the above-mentioned issues.
Beforehand, the management issues and user requirements have been taken into
consideration. So there is no question of resistance from the users that can undermine
the possible application benefits.

3. Technical feasibility
The technical issue usually raised during the feasibility stage of the investigation
includes the following:

a. Does the necessary technology exist to do what is suggested?


b. Does the proposed equipment have the technical capacity to hold the data required to use the new system?
c. Will the proposed system provide adequate response to inquiries, regardless of the number or location of users?
d. Can the system be upgraded if developed?
e. Are there technical guarantees of accuracy, reliability, ease of access and data security?
Earlier no system existed to cater to the needs of 'Secure Infrastructure Implementation System'. The current system developed is technically feasible. It is a web-based user interface for audit workflow at NIC-CSD. Thus it provides easy access to the users. The purpose is to create, establish and maintain a workflow among various entities in order to facilitate all concerned users in their various capacities or roles. Permission to the users would be granted based on the roles specified. Therefore, it provides the technical guarantee of accuracy, reliability and security. The software and hardware requirements for the development of this project are not many and are already available in-house at NIC or are available free as open source.

6.4 SOFTWARE REQUIREMENT SPECIFICATION:

Hardware Configuration:
• Processor : Intel i3 or above
• Hard Disk : 160 GB
• RAM : 8 GB

Software Configuration:
• Operating System : Windows 7/8/10
• Front-end Script : HTML, CSS & JS
• IDE : PyCharm
• Libraries Used : Numpy, IO, OS, Flask
• Technology : Python 3.6+

7. MODULES AND METHODOLOGY

MODULE DESCRIPTION

1. Feature Extraction Using CNN Extractor


In order to show that the deep convolutional features learnt by a plain CNN are discriminative enough for traffic sign recognition, we refer to the original, simple structure proposed in earlier work to build up the CNN. The difference is that an extra convolutional layer with 200 feature maps of 1×1 neurons is added before the fully connected layers of the CNN architecture.
The max pooling layers here are non-overlapping and no rectification or inner-layer normalization operation is used. Considering that traffic sign images are relatively invariable in shape and the size of the samples in the dataset varies from 15×15 to 250×250, we assume that the influence of cropping and warping is negligible. Thus, only the images in the bounding boxes given by the annotations are cropped and resized to 48×48 uniformly. Note that data augmentation is not used, which means that random deformation (translation, rotation, scaling, etc.) is not applied to the training set. Since the CNN is used to extract deep features rather than conduct classification, the first eight layers of the CNN are taken as a feature extractor while the fully connected layers are removed once training is done.
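A rough Keras sketch of this idea is given below: train a plain CNN on 48×48 sign crops and, once training is done, keep only the convolutional part as a feature extractor. The filter counts and kernel sizes here are illustrative assumptions; the report fixes only the extra 1×1 layer with 200 maps, the non-overlapping pooling and the 48×48 input.

# Sketch: plain CNN trained on 48x48 crops, later reused as a feature extractor.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(num_classes=43):
    inputs = keras.Input(shape=(48, 48, 3))
    x = layers.Conv2D(100, 7)(inputs)            # illustrative filter counts/sizes
    x = layers.MaxPooling2D(2)(x)                # non-overlapping pooling
    x = layers.Conv2D(150, 4)(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(250, 4)(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(200, 1)(x)                 # extra 1x1 layer with 200 feature maps
    features = layers.Flatten(name="features")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(features)
    return keras.Model(inputs, outputs)

model = build_cnn()
# ... train "model" on the cropped and resized 48x48 images ...
# After training, drop the fully connected part and keep the feature extractor:
extractor = keras.Model(model.input, model.get_layer("features").output)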

2. Image Classification Using CNN

A neural network consists of individual units called neurons. Neurons are located in a series of groups called layers. Neurons in each layer are connected to neurons of the next layer. Data flows from the input layer to the output layer along these connections. Each individual node performs a simple mathematical calculation and then transmits its data to all the nodes it is connected to. CNN is a special architecture of artificial neural networks. CNN uses some features of the visual cortex. One of the most popular uses of this architecture is image classification.

3. Voice alert Message

The classified traffic sign image will be converted into a voice message.
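One possible way to realize this step is sketched below with the pyttsx3 text-to-speech library; the report does not name a specific TTS package, so this choice and the spoken phrasing are assumptions.

# Sketch: speak the predicted sign name through the speaker (pyttsx3 is an assumed choice).
import pyttsx3

def speak_alert(sign_name):
    engine = pyttsx3.init()
    engine.say(f"Traffic sign ahead: {sign_name}")
    engine.runAndWait()

speak_alert("Speed limit 50 kilometres per hour")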

Methodology:
A. Dataset
In the proposed system, the German Traffic Sign Recognition Benchmark (GTSRB) dataset is
used. Fig. 1 shows the 43 different traffic signs that are considered to train the
model. It has 51,900 single images distributed among the 43 classes including the
training and the test dataset. The count of the number of photos per class is shown in
Fig. 2. There is no ambiguity as the images are just focussed on the traffic signs and
each of them is unique. The training dataset has different folders for each of the
present classes. A CSV file is also present wherein the path of each image and its
class and other details such as width and height are mentioned.

Fig. 1. Traffic signs taken into consideration
Fig. 2. Number of images per class in the dataset

B. Data Preprocessing
To perform image processing, images need to be converted into numpy arrays (i.e. numeric values). After loading the images, they are resized to 30×30 pixels. After this, the labels are mapped to the images, and the dataset is then ready for training.
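A sketch of this preprocessing step is given below: each training image is read, resized to 30×30 and paired with its class label. The folder layout (one sub-folder per class under a Train directory) follows the GTSRB convention mentioned above; the path names and the optional scaling are assumptions.

# Sketch: load GTSRB-style training images, resize to 30x30 and collect labels.
import os
import numpy as np
from PIL import Image

def load_dataset(root="Train", num_classes=43, size=(30, 30)):
    images, labels = [], []
    for class_id in range(num_classes):
        class_dir = os.path.join(root, str(class_id))
        for fname in os.listdir(class_dir):
            img = Image.open(os.path.join(class_dir, fname)).resize(size)
            images.append(np.array(img))
            labels.append(class_id)
    return np.array(images), np.array(labels)

X, y = load_dataset()
X = X / 255.0   # optional scaling before training (an assumption, not stated in the report)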

C. Model
Convolutional Neural Network (CNN) is an algorithm falling in the domain of Deep
Learning. CNN can take a picture as input, assign priority to different items in the
picture, and distinguish them from one another. It requires much less preprocessing
as compared to other classification algorithms. A Convolutional Network has the ability to learn the filters or characteristics in the images, as opposed to primitive methods where the filters are engineered manually. The architecture of a Convolutional
Network can be compared to the connectivity pattern of Neurons in the Human
Brain.

The design itself was inspired by the organization of neurons as present in the Visual
Cortex of the human brain. The neurons respond to stimuli only in a certain region
of the field of view which is known as the Receptive Field. The visual area is a
collection of a number of such receptive fields which help us in viewing objects.
Once the model is trained over a series of epochs i.e. iterations, it develops the
ability to distinguish between the dominating features and certain low level features
in the images. Based on this training, the model classifies them using the Softmax
Classification technique. Fig. 3 represents the number of layers used in the model.
There are 4 convolution layers and 2 max pooling layers along with dropout, flatten
and dense layers. The Adam optimizer is used in the neural network. The input size of the image is 30×30×1. The model employs the ReLU activation function. We obtain
a fully connected layer after the Flatten layer. And finally the output is determined
by using the softmax activation function.
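A Keras sketch that matches the layer counts described above (4 convolution layers, 2 max-pooling layers, dropout, flatten and dense layers, ReLU activations, a softmax output over the 43 classes and the Adam optimizer) is shown below. The filter sizes and dropout rates are assumptions; the report fixes only the overall structure and the 30×30×1 input.

# Sketch of a CNN with the layer counts given in the report (sizes are assumed).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, (5, 5), activation="relu", input_shape=(30, 30, 1)),
    layers.Conv2D(32, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(43, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X, y, epochs=15, validation_split=0.2)   # trained over a series of epochs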

D. Proposed Solution
Fig. 4 demonstrates the accuracy of the trained network. This model turned out to
give the best accuracy as compared to the other models that we analyzed.

E. Implementation
After training the model, it is saved, and the saved model is then used for prediction. A full stack web application with Node.js and Express Handlebars has been developed around this model. It incorporates different pieces of logic to make it a product that can be used with certain improvements in place. Fig. 5 depicts the flow of the suggested system. The CNN model is applied in the first part, where the input is an image. After the image is processed, one of the 43 classes is obtained as the output. If an image does not contain a traffic sign, the user gets the prompt "No Sign Detected". This is done by analyzing the output array of the model.predict function in Python. The model.predict function returns an array of values representing how closely the image falls under each of the 43 classes, and the class with the highest value is finally predicted.
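A hedged sketch of this decision logic is shown below; the probability threshold and the helper name are illustrative assumptions, not taken from the project code:

import numpy as np

def interpret_prediction(probabilities, class_names, threshold=0.5):
    # probabilities is the 43-element output array of model.predict for one image
    best_class = int(np.argmax(probabilities))
    if probabilities[best_class] < threshold:
        return "No Sign Detected"
    return class_names[best_class]

# example usage with the output of model.predict for a single image:
# sign = interpret_prediction(model.predict(batch)[0], class_names)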

ARCHITECTURE OF PROPOSED MODEL
Current popular algorithms mainly use convolutional neural networks to perform both feature extraction and classification. Such methods can achieve impressive results, but often only on the basis of an extremely large and complex network or ensemble learning, together with massive amounts of data. To make full use of the advantages of CNNs, we propose a novel traffic sign recognition architecture. Before the images are sent to the CNN for feature extraction, the average image of the traffic signs is subtracted to ensure illumination invariance to some extent.
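A minimal sketch of the mean-image subtraction step, assuming the training images are already stacked in a single numpy array:

import numpy as np

def subtract_mean_image(images):
    # images: array of shape (num_images, height, width, channels)
    # subtracting the average image reduces the effect of global illumination changes
    mean_image = images.mean(axis=0)
    return images - mean_image, mean_image

# the same mean image computed on the training set would also be subtracted
# from every test image before it is fed to the CNN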

Pipeline: Data preprocessing -> Data modeling -> Training -> Testing -> Validation -> Evaluation

8 FUNCTIONAL & NON-FUNCTIONAL REQUIREMENTS

Software Requirement Specification (SRS) is the starting point of the software development activity. As systems grew more complex, it became evident that the goals of the entire system could not be easily comprehended; hence the need for the requirement phase arose. A software project is initiated by the client's needs. The SRS is the means of translating the ideas in the minds of the clients (the input) into a formal document (the output of the requirement phase). The SRS phase consists of two basic activities:

Problem Requirement Analysis

This process is the harder and more nebulous of the two; it deals with understanding the problem, the goals and the constraints.

Requirement Specification
Here, the focus is on specifying what has been found during analysis. Issues such as representation, specification languages and tools, and checking of the specifications are addressed during this activity. The requirement phase terminates with the production of the validated SRS document. Producing the SRS document is the basic goal of this phase.

Role of SRS
The purpose of the Software Requirement Specification is to reduce the communication gap between the clients and the developers. The Software Requirement Specification is the medium through which the client and user needs are accurately specified. It forms the basis of software development. A good SRS should satisfy all the parties involved in the system.

Functional requirements:
In software engineering, a functional requirement defines a function of a software system or one of its components. A function is described in terms of a set of inputs, the behavior, and the outputs. Functional requirements may be calculations, technical details, data manipulation and processing, and they specify what a system is supposed to accomplish.

INPUT : Traffic Signs
OUTPUT: Voice Message
PROCESS: Convolutional Neural Network.

Non-Functional Requirements:
A non-functional requirement is a requirement that specifies criteria that can be
used to judge the operation of a system rather than specific behaviors.
This should be contrasted with functional requirements that define specific
behavior or functions. The plan for implementing functional requirements is detailed
in the system design.
The major non-functional Requirements of the system are as follows.
Usability
The system is designed as a completely automated process, hence there is little or no user intervention.
Reliability
The system is reliable because of the qualities inherited from the chosen platform and its mature, well-tested libraries.
Performance
The system is developed in high-level languages and uses advanced front-end and back-end technologies, so it responds to the end user on the client system within a very short time.
Data integrity
Data integrity is the maintenance, and the assurance, of the accuracy and consistency of data over its entire life-cycle, and is a critical aspect of the design, implementation and usage of any system which stores, processes, or retrieves data. It is at times used as a proxy term for data quality, while data validation is a prerequisite for data integrity. Data integrity is the opposite of data corruption.
Adaptability
Adaptability is a feature of a system or of a process. The word has been put to use as a specialized term in different disciplines and in business operations. In ecology, adaptability has been described as the ability to cope with unexpected disturbances in the environment. Our project is able to adapt to any environment in which the required runtime and libraries are installed.
Accessibility
One of the main goals of the project is accessibility. The project is designed to fit any device and to be easy to access, even over a minimum-bandwidth internet connection.

9 UML DESIGN

INTRODUCTION TO UML
UML stands for Unified Modeling Language. UML is a standardized general-
purpose modeling language in the field of object-oriented software engineering. The
standard is managed, and was created by, the Object Management Group. The goal
is for UML to become a common language for creating models of object oriented
computer software. In its current form, UML is comprised of two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of a software system, as well as for business modeling and other non-software systems. A UML system is represented using five different views that describe the system from distinct perspectives. Each view is defined by a set of diagrams, which are as follows.
Use-case view
Logical view
Implementation view
Process view
Deployment view

Use-case view
A view showing the functionality of the system as perceived by external actors. An actor interacts with the system; the actor can be a user or another system. The use-case view is used by customers, designers, developers and testers. It is described in use-case diagrams, sometimes with support from activity diagrams. The desired usage of the system is described as a number of use cases in the use-case view, where a use case is a generic description of a requested function.

Logical view
A view showing how the functionality is designed inside the system, in terms of the system's static structure and dynamic behavior. It is mainly for the designers and developers. It describes both the static structure (classes, objects and relationships) and the dynamic collaborations that occur when the objects send messages to each other to provide a given function.
Properties such as persistence and concurrency are also defined, as well as the interfaces and the internal structure of classes. The static structure is described in class and object diagrams. The dynamic modeling is described in state machines and in interaction and activity diagrams.

Implementation view
The implementation view describes the main modules and their dependencies. It is
mainly for developers and consists of the main software artifacts. The artifacts
include different types of code modules shown with their structure and
dependencies.

Process view
A view showing main elements in the system related to process performance. This
view includes scalability, throughput, and basic time performance and can touch on
some very complex calculations for advanced systems. The view consists of
dynamic diagrams (state machines, interaction and activity diagrams) and
implementation diagrams (interaction and deployment diagrams).

Deployment view
Finally the deployment view shows the physical deployment of the system such as
the computers and devices (nodes) and how they connect to each other. The various
execution environments within the processors can be specified as well. The
deployment view is used by the developers, integrators and testers. It is represented
by the deployment diagram.

UML Diagrams
The diagrams contain the graphical elements arranged to illustrate a particular part
or aspect of the system. A system model typically has several diagrams of varying
types, depending on the goal for the model.

Software design is a process that gradually changes as various new, better and more complex methods, with a broader understanding of the whole problem in general, come into existence. There are various kinds of diagrams used in software design. Mainly these are as follows:
Use case diagrams
Class diagram
Object diagram
Sequence diagrams
Collaboration diagrams
Activity diagram
State chart diagram
Component diagram
Deployment diagram

Use case diagram


Use Case diagrams identify the functionality provided by the system (use cases), the
users who interact with the system (actors), and the association between the users
and the functionality. Use Cases are used in the Analysis phase of software
development to articulate the high-level requirements of the system.
The primary goals of Use Case diagrams include
Providing a high-level view of what the system does
Identifying the users ("actors") of the system
Determining areas needing human-computer interfaces
Use Cases extend beyond pictorial diagrams. In fact, text-based use case
descriptions are often used to supplement diagrams, and explore use case
functionality in more detail.

Graphical Notation
The basic components of Use Case diagrams are the Actor, the Use Case, and the
Association.

Actor: An Actor, as mentioned, is a user of the system, and is depicted using a stick figure. The role of the user is written beneath the icon. Actors are not limited to humans. If a system communicates with another application, and expects input or delivers output, then that application can also be considered an actor.

Use Case: A Use Case is functionality provided by the system, typically described as verb-object (eg. Register Car, Delete User). Use Cases are depicted with an ellipse. The name of the use case is written within the ellipse.

Association: Associations are used to link Actors with Use Cases, and
indicate that an Actor participates in the Use Case in some
form. Associations are depicted by a line connecting the
Actor and the Use Case.

The following image shows how these three basic elements work together to
form a use case diagram.

Text notation
Behind each Use Case is a series of actions to achieve the proper functionality, as
well as alternate paths for instances where validation fails, or errors occur. These
actions can be further defined in a Use Case description. Because this is not
addressed in UML 1.4, there are no standards for Use Case descriptions. However,

there are some common templates you can follow, and whole books on the subject of writing Use Case descriptions.
Common methods of writing Use Case descriptions include
Write a paragraph describing the sequence of activities in the Use Case
List two columns, with the activities of the actor and the responses by the
system
Use a template (such as those from the Rational Unified Process or
Alistair Cockburn's book, Writing Effective Use Cases) identifying actors,
preconditions, post conditions, main success scenarios, and extensions.
Remember, the goal of the process is to be able to communicate the requirements of
the system, so use whichever method is best for your team and your organization.
Here are examples of a paragraph and template use case description for our Use Case
Diagram.

Class Diagram
Class diagrams identify the class structure of a system, including the properties and
methods of each class. Also depicted are the various relationships that can exist
between classes, such as an inheritance relationship. The Class diagram is one of
the most widely used diagrams from the UML specification. Part of the popularity
of Class diagrams stems from the fact that many CASE tools, such as Rational
XDE, will auto-generate code in a variety of languages, including Java, C++, and
C#, from these models. These tools can synchronize models and code, reducing your
workload, and can also generate Class diagrams from object-oriented code, for those
"code-then-design" maintenance projects.
Notation:
The elements on a Class diagram are classes and the relationships between
them

Class: Classes are the building blocks in object-oriented
programming. A Class is depicted using a rectangle divided
into three sections. The top section is the name of the Class.
The middle section defines the properties of the Class. The
bottom section lists the methods of the class.

Association: An Association is a generic relationship between two classes, and is modeled by a line connecting the two classes. This line can be qualified with the type of relationship, and can also feature multiplicity rules (eg. one-to-one, one-to-many, many-to-many) for the relationship.

Composition: If a class cannot exist by itself, and instead must be a member of another class, then that class has a Composition relationship with the containing class.

Dependency: When a class uses another class, perhaps as a member variable or a parameter, and so "depends" on that class, a Dependency relationship is formed. A Dependency relationship is indicated by a dotted arrow.

Aggregation: Aggregations indicate a whole-part relationship, and are known as "has-a" relationships. An Aggregation relationship is indicated by a line with a hollow diamond.

Generalization: A Generalization relationship is the equivalent of an inheritance relationship in object-oriented terms (an "is-a" relationship). A Generalization relationship is indicated by an arrow with a hollow arrowhead pointing to the base, or "parent", class.

Consider the example of a veterinary system. Animals served, such as dogs and
birds, are tracked along with their owners. The following diagram models a potential
solution. Since dogs and birds are "a kind of" animal, we use a Generalization

relationship.
To validate your model, you can apply real-world data into instances of the classes.
In fact, there is a diagram for precisely this task.
Object Diagram
An object diagram is a variant of a class diagram and uses almost identical notation.
The difference between the two is that an object diagram shows a number of object
instances of classes, instead of the actual classes.
An object diagram is thus an example of a class diagram that shows a possible snapshot of the system's execution, i.e. what the system can look like at some point in time. The same notation as that for class diagrams is used, with two exceptions: objects are written with their names underlined, and all instances in a relationship are shown. Object diagrams are not as important as class diagrams, but they can be used
to exemplify a complex class diagram by showing what the actual instances and the
relationships look like. Objects are also used as part of interaction diagrams that
show the dynamic collaboration between a set of objects.

Sequence Diagram
Sequence diagrams document the interactions between classes to achieve a result,
such as a use case. Because UML is designed for object-oriented programming,
these communications between classes are known as messages. The Sequence
diagram lists objects horizontally, and time vertically, and models these messages
over time

Notation
In a Sequence diagram, classes and actors are listed as columns, with vertical
lifelines indicating the lifetime of the object over time.

Object: Objects are instances of classes, and are arranged horizontally. The pictorial representation for an Object is a class (a rectangle) with the name prefixed by the object name (optional) and a colon.

Actor Actors can also communicate with objects, so they too can be
listed as a column. An Actor is modeled using the ubiquitous
symbol, the stick figure.

Lifeline The Lifeline identifies the existence of the object over time.
The notation for a Lifeline is a vertical dotted line extending
from an object.

Activation: Activations, modeled as rectangular boxes on the lifeline, indicate when the object is performing an action.

Message: Messages, modeled as horizontal arrows between Activations, indicate the communications between objects.

Following is an example of a Sequence diagram, using the default named objects.


You can imagine many instances where a user performs an action in the user
interface, and the system in turn calls another object for processing.

Collaboration Diagram
Like the other Behavioral diagrams, Collaboration diagrams model the interactions
between objects. This type of diagram is a cross between an object diagram and a
sequence diagram. Unlike the Sequence diagram, which models the interaction in a
column and row type format, the Collaboration diagram uses the free-form
arrangement of objects as found in an Object diagram. This makes it easier to see all interactions involving a particular object.

In order to maintain the ordering of messages in such a free-form diagram, messages are labeled with a chronological number. Reading a Collaboration diagram involves starting at message 1.0, and following the messages from object to object.
Notation

Object: Objects are instances of classes, and are one of the entity types that can be involved in communications. An Object is drawn as a rectangular box, with the class name inside prefixed with the object name (optional) and a colon.

Actor Actors can also communicate with Objects, so they too can be
listed on Collaboration diagrams. An Actor is depicted by a stick
figure.

Message: Messages, modeled as arrows between objects, and labeled with an ordering number, indicate the communications between objects.

Activity Diagram
Activity diagrams are used to document workflows in a system, from the business
level down to the operational level. When looking at an Activity diagram, you'll
notice elements from State diagrams. In fact, the Activity diagram is a variation of
the state diagram where the "states" represent operations, and the transitions
represent the activities that happen when the operation is complete. The general
purpose of Activity diagrams is to focus on flows driven by internal processing vs.
external events.

Notation

Transition: When an Activity State is completed, processing moves to another Activity State. Transitions are used to mark this movement. Transitions are modeled using arrows.

Swim lane: Swim lanes divide activities according to objects by arranging objects in column format and placing activities by that object within that column. Objects are listed at the top of the column, and vertical bars separate the columns to form the swim lanes.

Initial State: The Initial State marks the entry point and the initial Activity State. The notation for the Initial State is the same as in State chart diagrams, a solid circle. There can only be one Initial State.

State chart Diagram
State chart diagrams, also referred to as State diagrams, are used to document the
various modes ("state") that a class can go through, and the events that cause a state
transition. For example, your television can be in the off state, and when the power
button is pressed, the television goes into the on state. Pressing the power button yet
again causes a state transition from the on state to the off state. In comparison to the other behavioral diagrams, which model the interaction between multiple classes, State diagrams typically model the transitions within a single class.
Notation

State: The State notation marks a mode of the entity, and is indicated using a rectangle with rounded corners, and the state name written inside.

Transition: A Transition marks the changing of the object State, caused by an event. The notation for a Transition is an arrow, with the Event Name written above, below, or alongside the arrow.

Initial State: The Initial State is the state of an object before any transitions. For objects, this could be the state when instantiated. The Initial State is marked using a solid circle. Only one initial state is allowed on a diagram.

Final State: End States mark the destruction of the object whose state we are modeling. These states are drawn using a solid circle with a surrounding circle.
Here is an example State Diagram that models the status of a user's account in a Bug Tracker system:

Component Diagram
Component diagrams fall under the category of an implementation diagram, a kind
of diagram that models the implementation and deployment of the system. A
Component Diagram, in particular, is used to describe the dependencies between
various software components such as the dependency between executable files and
source files. This information is similar to that within make files, which describe
source code dependencies and can be used to properly compile an application.
Notation

Component: A component represents a software entity in a system. Examples include source code files, programs, documents, and resource files. A component is represented using a rectangular box, with two rectangles protruding from the left side, as seen in the image to the right.

Dependency: A Dependency is used to model the relationship between two components. The notation for a dependency relationship is a dotted arrow, pointing from a component to the component it depends on.

For example, the following Component diagram identifies

Deployment diagram
Deployment diagram depicts a static view of the run-time configuration of
processing nodes and the components that run on those nodes. In other words,
deployment diagrams show the hardware for your system, the software that is
installed on that hardware, and the middleware used to connect disparate

machines to one another.

UML DIAGRAMS
USE CASE DIAGRAM
To model a system, the most important aspect is to capture its dynamic behavior. To clarify in a bit more detail, dynamic behavior means the behavior of the system when it is running/operating. Static behavior alone is not sufficient to model a system; dynamic behavior is more important than static behavior.
In UML there are five diagrams available to model the dynamic nature of a system, and the use case diagram is one of them. Since the use case diagram is dynamic in nature, there should be some internal or external factors for making the interaction. These internal and external agents are known as actors. Use case diagrams thus consist of actors, use cases and their relationships.
The diagram is used to model the system/subsystem of an application. A single use case diagram captures a particular functionality of a system, so to model the entire system a number of use case diagrams are used. A use case diagram, at its simplest, is a representation of a user's interaction with the system, depicting the specifications of a use case. A use case diagram can portray the different types of users of a system and the various use cases, and will often be accompanied by other types of diagrams as well.

CLASS DIAGRAM
In software engineering, a class diagram in the Unified Modeling Language (UML)
is a type of static structure diagram that describes the structure of a system by
showing the system's classes, their attributes, operations (or methods), and the
relationships among the classes. It explains which class contains which information.

SEQUENCE DIAGRAM
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction
diagram that shows how processes operate with one another and in what order. It is a
construct of a Message Sequence Chart. Sequence diagrams are sometimes called
event diagrams, event scenarios, and timing diagrams.

COLLABORATION DIAGRAM

10 SAMPLE CODE

gui.py
import tkinter as tk
from tkinter import filedialog
from tkinter import *
from PIL import ImageTk, Image

import numpy
#load the trained model to classify sign
from keras.models import load_model
model = load_model('traffic_classifier.h5')

#dictionary to label all traffic signs class.


classes = { 1:'Speed limit (20km/h)',
2:'Speed limit (30km/h)',
3:'Speed limit (50km/h)',
4:'Speed limit (60km/h)',
5:'Speed limit (70km/h)',
6:'Speed limit (80km/h)',
7:'End of speed limit (80km/h)',
8:'Speed limit (100km/h)',
9:'Speed limit (120km/h)',
10:'No passing',
11:'No passing veh over 3.5 tons',
12:'Right-of-way at intersection',
13:'Priority road',
14:'Yield',
15:'Stop',
16:'No vehicles',
17:'Veh > 3.5 tons prohibited',
18:'No entry',
19:'General caution',
20:'Dangerous curve left',
21:'Dangerous curve right',
22:'Double curve',
23:'Bumpy road',
24:'Slippery road',
25:'Road narrows on the right',
26:'Road work',
27:'Traffic signals',
28:'Pedestrians',
29:'Children crossing',
30:'Bicycles crossing',
31:'Beware of ice/snow',
32:'Wild animals crossing',
33:'End speed + passing limits',
34:'Turn right ahead',
35:'Turn left ahead',
36:'Ahead only',
37:'Go straight or right',
38:'Go straight or left',
39:'Keep right',
40:'Keep left',
41:'Roundabout mandatory',
42:'End of no passing',
43:'End no passing veh > 3.5 tons',
44:'image not clear' }

#initialise GUI
top=tk.Tk()
top.geometry('800x600')
top.title('Traffic sign classification')
top.configure(background='#CDCDCD')

label=Label(top,background='#CDCDCD', font=('arial',15,'bold'))
sign_image = Label(top)

def classify(file_path):
    global label_packed
    image = Image.open(file_path)
    image = image.resize((30,30))
    image = numpy.expand_dims(image, axis=0)
    image = numpy.array(image)
    print(image.shape)
    # model.predict returns one probability per class for the image
    pred = model.predict([image])[0]
    print(pred.round())
    # pick the class with the highest probability (the classes dict is 1-indexed)
    p = int(numpy.argmax(pred))
    sign = classes[p+1]
    print(sign)
    label.configure(foreground='#011638', text=sign)
    # convert the recognized sign into a spoken alert
    import pyttsx3
    engine = pyttsx3.init()
    engine.say(sign)
    engine.runAndWait()

def show_classify_button(file_path):
    classify_b = Button(top, text="Classify Image",
                        command=lambda: classify(file_path), padx=10, pady=5)
    classify_b.configure(background='#364156',
                         foreground='white', font=('arial',10,'bold'))
    classify_b.place(relx=0.79, rely=0.46)
def upload_image():
    try:
        file_path = filedialog.askopenfilename()
        uploaded = Image.open(file_path)
        uploaded.thumbnail(((top.winfo_width()/2.25), (top.winfo_height()/2.25)))
        im = ImageTk.PhotoImage(uploaded)
        sign_image.configure(image=im)
        sign_image.image = im
        label.configure(text='')
        show_classify_button(file_path)
    except:
        pass

upload = Button(top, text="Upload an image", command=upload_image, padx=10, pady=5)
upload.configure(background='#364156', foreground='white', font=('arial',10,'bold'))

upload.pack(side=BOTTOM,pady=50)
sign_image.pack(side=BOTTOM,expand=True)
label.pack(side=BOTTOM,expand=True)
heading = Label(top, text="Know Your Traffic Sign",pady=20,
font=('arial',20,'bold'))
heading.configure(background='#CDCDCD',foreground='#364156')
heading.pack()
top.mainloop()
traffic_sign.py
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import tensorflow as tf
from PIL import Image
import os
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
from keras.models import Sequential, load_model
from keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout

data = []
labels = []
classes = 43
cur_path = os.getcwd()

#Retrieving the images and their labels


for i in range(classes):
    path = os.path.join(cur_path, 'train', str(i))
    images = os.listdir(path)
    for a in images:
        try:
            image = Image.open(path + '\\' + a)
            image = image.resize((30,30))
            image = np.array(image)
            #sim = Image.fromarray(image)
            data.append(image)
            labels.append(i)
        except:
            print("Error loading image")

#Converting lists into numpy arrays


data = np.array(data)
labels = np.array(labels)

print(data.shape, labels.shape)
#Splitting training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(data, labels, test_size=0.2,
random_state=42)

print(X_train.shape, X_test.shape, y_train.shape, y_test.shape)

#Converting the labels into one hot encoding


y_train = to_categorical(y_train, 43)
y_test = to_categorical(y_test, 43)

#Building the model


model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu',
input_shape=X_train.shape[1:]))
model.add(Conv2D(filters=32, kernel_size=(5,5), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(filters=64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(rate=0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(43, activation='softmax'))

#Compilation of the model


model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
epochs = 15
history = model.fit(X_train, y_train, batch_size=32, epochs=epochs,
validation_data=(X_test, y_test))
model.save("my_model.h5")

#plotting graphs for accuracy


plt.figure(0)
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.title('Accuracy')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.legend()
plt.show()
plt.figure(1)
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.title('Loss')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.legend()
plt.show()
#testing accuracy on test dataset
from sklearn.metrics import accuracy_score
y_test = pd.read_csv('Test.csv')
labels = y_test["ClassId"].values
imgs = y_test["Path"].values
data=[]
for img in imgs:
    image = Image.open(img)
    image = image.resize((30,30))
    data.append(np.array(image))

X_test = np.array(data)
pred = model.predict_classes(X_test)

# Plotting sample examples
list_images(X_train, y_train, "Training example")
list_images(X_test, y_test, "Testing example")
list_images(X_valid, y_valid, "Validation example")

def histogram_plot(dataset, label):
    """
    Plots a histogram of the input data.
    Parameters:
        dataset: Input data to be plotted as a histogram.
        label: A string to be used as a label for the histogram.
    """
    hist, bins = np.histogram(dataset, bins=n_classes)
    width = 0.7 * (bins[1] - bins[0])
    center = (bins[:-1] + bins[1:]) / 2
    plt.bar(center, hist, align='center', width=width)
    plt.xlabel(label)
    plt.ylabel("Image count")
    plt.show()

# Training operation
self.one_hot_y = tf.one_hot(y, n_out)
self.cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.one_hot_y)
self.loss_operation = tf.reduce_mean(self.cross_entropy)
self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
self.training_operation = self.optimizer.minimize(self.loss_operation)

# Accuracy operation
self.correct_prediction = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.one_hot_y, 1))
self.accuracy_operation = tf.reduce_mean(tf.cast(self.correct_prediction, tf.float32))
#Accuracy with the test data
from sklearn.metrics import accuracy_score
print(accuracy_score(labels, pred))
model.save('traffic_classifier.h5')

11 TESTING AND TEST CASE DESIGN

System testing
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

Types of tests
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
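As an illustration, a unit test for this project could check that the classifier returns a valid probability distribution over the 43 classes for a sample image; this is only a sketch with assumed file paths, not a test taken from the project:

import numpy as np
from keras.models import load_model
from PIL import Image

def test_prediction_is_a_valid_class():
    # load the saved classifier and one known training image (paths are assumed)
    model = load_model('traffic_classifier.h5')
    image = Image.open('train/0/sample.png').resize((30, 30))
    batch = np.expand_dims(np.array(image), axis=0)

    probabilities = model.predict(batch)[0]

    # the output must be a probability distribution over the 43 classes
    assert len(probabilities) == 43
    assert 0 <= int(np.argmax(probabilities)) < 43
    assert abs(float(probabilities.sum()) - 1.0) < 1e-3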

Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

Functional test
Functional tests provide systematic demonstrations that the functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.

System test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written in
detail.

Test objectives

 All field entries must work properly.


 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested

 Verify that the entries are of the correct format


 No duplicate entries should be allowed
 All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by
interface defects.

The task of the integration test is to check that components or software applications,
e.g. components in a software system or – one step up – software applications at the
company level – interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

12 EXECUTION

with tf.Session() as sess:
    LeNet_Model.saver.restore(sess, os.path.join(DIR, "LeNet"))
    y_pred = LeNet_Model.y_predict(X_test_preprocessed)
    test_accuracy = sum(y_test == y_pred) / len(y_test)
    print("Test Accuracy = {:.1f}%".format(test_accuracy * 100))

On training:
EPOCH 1: Validation Accuracy = 81.451%
EPOCH 2: Validation Accuracy = 87.755%
EPOCH 3: Validation Accuracy = 90.113%
EPOCH 4: Validation Accuracy = 91.519%
EPOCH 5: Validation Accuracy = 90.658%
EPOCH 6: Validation Accuracy = 92.608%
EPOCH 7: Validation Accuracy = 92.902%
EPOCH 8: Validation Accuracy = 92.585%
EPOCH 9: Validation Accuracy = 92.993%
EPOCH 10: Validation Accuracy = 92.766%
EPOCH 11: Validation Accuracy = 93.356%
EPOCH 12: Validation Accuracy = 93.469%
EPOCH 13: Validation Accuracy = 93.832%
EPOCH 14: Validation Accuracy = 94.603%
EPOCH 15: Validation Accuracy = 93.333%
EPOCH 16: Validation Accuracy = 93.787%
EPOCH 17: Validation Accuracy = 94.263%
EPOCH 18 : Validation Accuracy = 92.857%

EPOCH 19: Validation Accuracy = 93.832%
EPOCH 20: Validation Accuracy = 93.605%
EPOCH 21: Validation Accuracy = 93.447%
EPOCH 22: Validation Accuracy = 94.286%
EPOCH 23: Validation Accuracy = 94.671%
EPOCH 24: Validation Accuracy = 94.172%
EPOCH 25: Validation Accuracy = 94.399%
EPOCH 26: Validation Accuracy = 95.057%
EPOCH 27: Validation Accuracy = 95.329%
EPOCH 28: Validation Accuracy = 94.218%
EPOCH 29 : Validation Accuracy = 94.286%
EPOCH 30 : Validation Accuracy = 94.853%

Model saved
As we can see, we've been able to reach a maximum accuracy of 95.3% on the validation set over 30 epochs, using a learning rate of 0.001 while training the LeNet model.

# Loading and resizing new test images
new_test_images = []
path = './traffic-signs-data/new_test_images/'
for image in os.listdir(path):
    img = cv2.imread(path + image)
    img = cv2.resize(img, (32,32))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    new_test_images.append(img)
new_IDs = [13, 3, 14, 27, 17]
print("Number of new testing examples: ", len(new_test_images))

Number of new testing examples: 5

plt.figure(figsize=(15, 16))
for i in range(len(new_test_images)):
    plt.subplot(3, 5, i+1)
    plt.imshow(new_test_images[i])
    plt.xlabel(signs[new_IDs[i]])
    plt.ylabel("New testing image")
    plt.xticks([])
    plt.yticks([])

# New test data preprocessing
new_test_images_preprocessed = preprocess(np.asarray(new_test_images))

def y_predict_model(Input_data, top_k=5):
    """
    Generates the predictions of the model over the input data and
    returns the top softmax probabilities.
    Parameters:
        Input_data: Input data.
        top_k (Default = 5): The number of top softmax probabilities to be generated.
    """
    num_examples = len(Input_data)
    y_pred = np.zeros((num_examples, top_k), dtype=np.int32)
    y_prob = np.zeros((num_examples, top_k))
    with tf.Session() as sess:
        LeNet_Model.saver.restore(sess, os.path.join(DIR, "LeNet"))
        y_prob, y_pred = sess.run(
            tf.nn.top_k(tf.nn.softmax(LeNet_Model.logits), k=top_k),
            feed_dict={x: Input_data, keep_prob: 1, keep_prob_conv: 1})
    return y_prob, y_pred

y_prob, y_pred = y_predict_model(new_test_images_preprocessed)

test_accuracy = 0
for i in enumerate(new_test_images_preprocessed):
    accu = new_IDs[i[0]] == np.asarray(y_pred[i[0]])[0]
    if accu == True:
        test_accuracy += 0.2

print("New Images Test Accuracy = {:.1f}%".format(test_accuracy*100))

plt.figure(figsize=(15, 16))
for i in range(len(new_test_images_preprocessed)):
    plt.subplot(5, 2, 2*i+1)
    plt.imshow(new_test_images[i])
    plt.title(signs[y_pred[i][0]])
Output:

13 CONCLUSION

The Traffic Sign Board Detection and Voice Alert System is implemented using
Convolutional Neural Network. Various models under the CNN heading were
studied and the one with highest accuracy on the GTSRB dataset was implemented.
The creation of different classes for each Traffic sign has helped in increasing the
accuracy of the model. A voice message is sent after recognition of the sign which
alerts the driver. A map is displayed on which the signs in the vicinity of the driver
are displayed thus helping him/her take appropriate decisions. This project is a
significant advancement in the field of driving as it would ease the job of the driver
without compromising on the safety aspect. Also this system can easily be
implemented without the need of much hardware thus increasing its reach.

In this work I figured out what deep learning is. I assembled and trained the CNN model to classify photographs of traffic signs, and I measured how the accuracy depends on the number of epochs in order to detect a potential overfitting problem. In this process of traffic sign recognition, the first step is feature extraction, followed by image classification over a variety of traffic signs using the CNN classifier. Thus this project uncovers the fundamental ideas of the CNN algorithm required to accomplish image classification for traffic sign recognition. My next step would be to try this model on more datasets and to apply it to practical tasks.

I would also like to experiment with the neural network design in order to see how a
higher efficiency can be achieved in various problems. We expect to arrive at a
better recognition system for traffic sign recognition that allows convolution
networks to be trained with very few labeled samples. Using LeNet, we've been able
to reach a very high accuracy rate. We can observe that the models saturate after
nearly 10 epochs, so we can save some computational resources and reduce the
number of epochs to 10. We can also try other preprocessing techniques to further
improve the model's accuracy.

We can further improve on the model by using hierarchical CNNs to first identify broader groups (like speed signs) and then have CNNs classify finer features (such as the actual speed limit). This model will only work on input examples where the traffic sign is centered in the middle of the image; it does not have the capability to detect signs in the image corners.

14 FUTURE SCOPE

The prototype can be expanded to include an inbuilt alert system with a camera in
the vehicle's centre. Also, the feature of getting the estimated time for reaching that
particular traffic sign can be added. This system can also be expanded for
identification of traffic signals and hence prompt the user about the time to reach
that particular signal and its status as well. The user can accordingly plan their trip
start time and hence cross all signals without having to wait. Also the driver
verification will be done with the help of an API providing the information about the
license holder and the license number.

REFERENCES:
[1]Yadav, Shubham & Patwa, Anuj & Rane, Saiprasad & Narvekar, Chhaya. (2019).
Indian Traffic Sign Board Recognition and Driver Alert System Using Machine
Learning. International Journal of Applied Sciences and Smart Technologies. 1. 1-
10. 10.24071/ijasst.v1i1.1843.
[2] Anushree.A., S., Kumar, H., Iram, I., & Divyam, K. (2019). Automatic
Signboard Detection System by the Vehicles.
[3] S. Harini, V. Abhiram, R. Hegde, B. D. D. Samarth, S. A. Shreyas and K. H.
Gowranga, "A smart driver alert system for vehicle traffic using image detection and
recognition technique," 2017 2nd IEEE International Conference on Recent Trends
in Electronics, Information & Communication Technology (RTEICT), Bangalore,
India, 2017, pp. 1540-1543, doi: 10.1109/RTEICT.2017.8256856.
[4] C. Wang, "Research and Application of Traffic Sign Detection and Recognition
Based on Deep Learning," 2018 International Conference on Robots & Intelligent
System (ICRIS), Changsha, China, 2018, pp. 150-152, doi: 10.1109/ICRIS.2018.00047.
[5] M A Muchtar et al 2017 J. Phys.: Conf. Ser. 801 012010
[6] Y. Yuan, Z. Xiong and Q. Wang, "VSSA-NET: Vertical Spatial Sequence
Attention Network for Traffic Sign Detection," in IEEE Transactions on Image
Processing, vol. 28, no. 7, pp. 3423-3434, July 2019, doi:
10.1109/TIP.2019.2896952.
[7] S. Huang, H. Lin and C. Chang, "An in-car camera system for traffic sign
detection and recognition," 2017 Joint 17th World Congress of International Fuzzy
Systems Association and 9th International Conference on Soft Computing and
Intelligent Systems (IFSA-SCIS), Otsu, Japan, 2017, pp. 1-6, doi: 10.1109/IFSA-
SCIS.2017.8023239.
[8] Bi, Z., Yu, L., Gao, H. et al. Improved VGG model-based efficient traffic sign
recognition for safe driving in 5G scenarios. Int. J. Mach. Learn. & Cyber. (2020).
[9] Chuanwei Zhang et al., Study on Traffic Sign Recognition by Optimized Lenet-5
Algorithm, International Journal of Pattern Recognition and Artificial Intelligence,
doi: 10.1142/S0218001420550034.

[10] Han, C., Gao, G. & Zhang, Y. Real-time small traffic sign detection with
revised faster-RCNN. Multimed Tools Appl 78, 13263–13278 (2019).
https://doi.org/10.1007/s11042-018-6428-0
[11] H. S. Lee and K. Kim, "Simultaneous Traffic Sign Detection and Boundary
Estimation Using Convolutional Neural Network," in IEEE Transactions on
Intelligent Transportation Systems, vol. 19, no. 5, pp. 1652-1663, May 2018, doi:
10.1109/TITS.2018.2801560.
[12] R. Qian, Y. Yue, F. Coenen and B. Zhang, "Traffic sign recognition with
convolutional neural network based on max pooling positions," 2016 12th
International Conference on Natural Computation, FUZZY Systems and Knowledge
Discovery (ICNC-FSKD), Changsha, China, 2016, pp. 578-582, doi:
10.1109/FSKD.2016.7603237.
[13] A. Pon, O. Adrienko, A. Harakeh and S. L. Waslander, "A Hierarchical Deep
Architecture and Mini-batch Selection Method for Joint Traffic Sign and Light
Detection," 2018 15th Conference on Computer and Robot Vision (CRV), Toronto,
ON, Canada, 2018, pp. 102-109, doi: 10.1109/CRV.2018.00024.
[14] Saha S., Islam M.S., Khaled M.A.B., Tairin S. (2019) An Efficient Traffic Sign
Recognition Approach Using a Novel Deep Neural Network Selection Architecture.
In: Abraham A., Dutta P., Mandal J., Bhattacharya A., Dutta S. (eds) Emerging
Technologies in Data Mining and Information Security. Advances in Intelligent
Systems and Computing, vol 814. Springer, Singapore. https://doi.org/10.1007/978-981-13-1501-5_74
[15] A. Welzel, A. Auerswald and G. Wanielik, "Accurate camera-based traffic sign
localization," 17th International IEEE Conference on Intelligent Transportation
systems (ITSC), Qingdao, China, 2014, pp. 445-450, doi:
10.1109/ITSC.2014.6957730.
[16] M. Karaduman and H. Eren, "Deep learning based traffic direction sign
detection and determining driving style," 2017 International Conference on
Computer Science and Engineering (UBMK), Antalya, Turkey, 2017, pp. 1046-
1050, doi: 10.1109/UBMK.2017.8093453.
[17] E. Winarno, W. Hadikurniawati and R. N. Rosso, "Location based service for
presence system using haversine method," 2017 International Conference on

Innovative and Creative Information Technology (ICITech), Salatiga, Indonesia,
2017, pp. 1-4, doi: 10.1109/INNOCIT.2017.8319153.
[18] Pal R, Ghosh A, Kumar R, et al. Public health crisis of road traffic accidents in
India: Risk factor assessment and recommendations on prevention on the behalf of
the Academy of Family Physicians of India. J Family Med Prim Care.
2019;8(3):775-783.
