
Road Accident Analysis Using Machine Learning

Abstract

Although the automobile industry invests heavily in designing and building
safety measures, traffic accidents remain unavoidable, and a huge number of
accidents occur in both urban and rural areas. Patterns associated with
different circumstances can be detected by developing accurate prediction
models capable of automatically separating the various accident scenarios.
These clusters will be useful for preventing accidents and developing safety
measures. We aim to achieve the maximum possible accident reduction using
low-budget resources and scientific methods. Traffic accidents have a huge
impact on society because of the great cost of fatalities and injuries. In
recent years, research attention has increasingly turned to determining the
factors that significantly affect the severity of driver injuries caused by
road accidents. Accurate and comprehensive accident records are the basis of
accident analysis, and the effective use of accident records depends on
factors such as the accuracy of the data, record retention, and data analysis.
Many approaches have been applied to study this problem. A recent study
illustrated that residential and shopping sites are more hazardous than
village areas, as might have been predicted; casualty frequencies were higher
near residential zones, possibly because of higher exposure. Another study
revealed that casualty rates in residential areas classified as relatively
deprived are significantly higher than those in relatively affluent areas.

FRONT END : PYTHON

TOOLS USED : ANACONDA

IDE : SPYDER
INTRODUCTION

With the increasing number of vehicles, traffic has become difficult to
design and manage. This situation has given rise to the road accident
problem, affected public health and national economies, and motivated studies
on solving the problem. Large-scale data collections have grown thanks to
technological improvements and low-cost data storage, and the need to extract
information from such large-scale data has become the cornerstone of data
mining. This study aims to identify the machine learning classification
techniques best suited to road accident estimation through data mining.

EXISTING SYSTEM

In recent years, road accidents have become a global problem, ranked as the
ninth leading cause of death in the world. Because of the enormous number of
road accidents every year, machine learning approaches are used to analyze
traffic accidents more deeply and determine their intensity. In the existing
system, analysis has been done using four supervised learning techniques
(Decision Tree, K-Nearest Neighbors (KNN), Naïve Bayes and AdaBoost) to
classify the severity of accidents into four categories: Fatal, Grievous,
Simple Injury and Motor Collision.
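As a hedged sketch (not the original study's code), the four classifiers named above can be compared with scikit-learn. The synthetic data below merely stands in for real accident records, and the four classes stand in for the Fatal, Grievous, Simple Injury and Motor Collision categories.

```python
# Sketch: compare the four supervised classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier

# Four classes stand in for the four severity categories.
X, y = make_classification(n_samples=500, n_features=8, n_informative=5,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)
    print(f"{name}: {scores[name]:.2f}")
```

On real data the features and class labels would come from the accident records, but the fit/score pattern is the same.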
Proposed System

The main objective of the road accident prediction system is to analyze
previously occurring accidents in a locality, which helps determine the most
accident-prone areas and set up the immediate help required there, and to
make predictions based on constraints such as weather, pollution, road
structure, etc. Models are created from accident data records, which helps in
understanding the characteristics of many features, such as driver behaviour,
roadway conditions, light conditions and weather conditions. This can help
users compute the safety measures that are useful for avoiding accidents. A
statistical method based on directed graphs can be illustrated by comparing
two scenarios using out-of-sample forecasts. The model is used to identify
statistically significant factors that can predict the probabilities of
crashes and injuries, which can then be used to assess risk factors and
reduce them.

Here, the road accident study is done by analyzing data through queries
relevant to the study, such as: What is the most dangerous time to drive?
What fraction of accidents occur in rural, urban and other areas? What is the
trend in the number of accidents each year? Do accidents in high-speed-limit
areas have more casualties? These data can be accessed using a Microsoft
Excel sheet to obtain the required answers. This analysis aims to highlight
the most important data in a road traffic accident and allow predictions to
be made. The results from this methodology can be seen in the next section of
the report.
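The queries above can also be answered programmatically. The sketch below uses pandas on a toy table; the column names (`hour`, `area`, `casualties`) are hypothetical, not taken from the actual dataset.

```python
# Illustrative only: answering accident queries with pandas instead of Excel.
import pandas as pd

df = pd.DataFrame({
    "hour":       [8, 8, 17, 17, 17, 23],
    "area":       ["Urban", "Rural", "Urban", "Urban", "Rural", "Urban"],
    "casualties": [1, 2, 1, 3, 2, 1],
})

# What is the most dangerous time (hour) to drive?
worst_hour = df.groupby("hour")["casualties"].sum().idxmax()

# What fraction of accidents occur in urban vs. rural areas?
area_fraction = df["area"].value_counts(normalize=True)

print(worst_hour)
print(area_fraction)
```

The same groupby/value_counts pattern extends to the other queries (yearly trends, speed-limit comparisons) once the real dataset is loaded.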
Modules

Data Set Selection

Data is the most important part when you work on prediction systems. It plays
a vital role in the whole project: the system depends on that data. Selecting
the data is therefore the first and most critical step and must be performed
properly. For our project we obtained the data from the Kaggle and UCI data
repository websites; these datasets are publicly available, and there are
many other websites that provide such data. The dataset we chose was selected
based on the various factors and constraints we intended to take into
consideration for our prediction system. The road accident prediction system
uses a dataset in which some fields are numeric values and some are plain
English words; the numeric data is easily used for prediction and
calculation, while the plain-word fields are either displayed as-is or
dropped from the table when they cannot be used for prediction.
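As an illustration of this step, the sketch below reads a small CSV and separates numeric columns from plain-word columns. The data and column names are made up; a real run would call `pd.read_csv` on the downloaded Kaggle/UCI file instead.

```python
# Sketch of dataset selection: load a CSV and split numeric vs. text columns.
import io
import pandas as pd

# Stand-in for pd.read_csv on the real downloaded file.
raw = io.StringIO(
    "speed_limit,weather,casualties\n"
    "30,Rain,2\n"
    "60,Clear,1\n"
)
df = pd.read_csv(raw)

# Numeric columns feed the model; plain-word columns are kept or dropped.
numeric_cols = df.select_dtypes(include="number").columns.tolist()
text_cols = df.select_dtypes(exclude="number").columns.tolist()
print(numeric_cols)
print(text_cols)
```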

Data Cleaning and Data Transformation

After selecting the dataset, the next step is to clean the data and transform
it into the desired format, as the dataset we use may be in a different
format. It is also possible that we use multiple datasets from different
sources in different file formats, so to use them we need to convert them
into the format, or type, that the prediction system supports. Another reason
for this step is that the dataset may contain constraints that the prediction
system does not need; including them makes the system complicated and may
extend the processing time. A further reason for data cleaning is that the
dataset may contain null values and garbage values. The solution is that when
the data is transformed, the garbage values are replaced; there are many
methods to do this. Our dataset has many columns and rows, and all null
values are filled using the forward-fill method before the classification
algorithm is applied to the entire dataset.
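The forward-fill step can be sketched with pandas as follows; the columns and values are illustrative only.

```python
# Forward fill: each null value takes the last valid value above it.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "weather": ["Rain", np.nan, "Clear", np.nan],
    "speed_limit": [30, 40, np.nan, 60],
})
cleaned = df.ffill()  # propagate the previous row's value into nulls
print(cleaned)
```

Note that forward fill cannot repair a null in the very first row, since there is no earlier value to propagate; such rows need another strategy (e.g. dropping them).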

Data Processing and Algorithm Implementation

After the data has been cleaned and transformed, it is ready for further
processing. Having taken the required constraints from the cleaned data, we
divide the whole dataset into two parts, either 70-30 or 80-20. The larger
portion of the data is used for training: the machine learning algorithm is
applied to that part, which lets the algorithm learn on its own and make
predictions for future or unknown data. The algorithm is executed using only
the required constraints from the cleaned data. Its output is 'yes' or 'no',
together with the error rate and the success rate.
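A minimal sketch of this split-train-evaluate flow, assuming scikit-learn and a synthetic binary target standing in for the yes/no output:

```python
# Sketch: 70-30 split, train on the larger part, report success/error rates.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=1)

# 70% for training, 30% held out for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=1)

clf = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
success_rate = clf.score(X_test, y_test)
error_rate = 1.0 - success_rate
print(f"success rate: {success_rate:.2f}, error rate: {error_rate:.2f}")
```

An 80-20 split would simply use `test_size=0.20` instead.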

Output and User Side Experience

Once the prediction system is ready to use, a website is developed for the
user. The user just has to fill in a form consisting of different options to
select, such as the type of climate, the type of vehicle and so on. When the
user submits the form, the algorithm is triggered and the input given by the
user is passed to the prediction system. The user is then shown, as a
percentage, how accident-prone the road can be.
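A minimal sketch of how such a form could feed the prediction system through Flask (the tool listed in the software specification). The route name, form fields and `predict_risk` stand-in are hypothetical, not the project's actual code.

```python
# Sketch: a Flask route that takes form input and returns a risk percentage.
from flask import Flask, request

app = Flask(__name__)

def predict_risk(climate, vehicle):
    """Stand-in for the trained model; returns an accident-risk percentage."""
    return 72.5  # placeholder value, not a real prediction

@app.route("/predict", methods=["POST"])
def predict():
    climate = request.form.get("climate")
    vehicle = request.form.get("vehicle")
    risk = predict_risk(climate, vehicle)
    return f"Accident risk: {risk:.1f}%"

# app.run() would start the development server.
```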
HARDWARE SPECIFICATION: (MINIMUM REQUIREMENT)

PROCESSOR : DUAL CORE

HARD DISK CAPACITY : 400 GB

MONITOR : 14" SAMTRON MONITOR

INTERNAL MEMORY CAPACITY : 2 GB

KEYBOARD : LOGITECH, 104 KEYS

CPU CLOCK : 1.08 GHz

MOUSE : LOGITECH MOUSE

SOFTWARE SPECIFICATION:

OPERATING SYSTEM : WINDOWS 7

LANGUAGE : PYTHON

BACKEND : MYSQL

TOOLS USED : FLASK


PYTHON
Python is an easy to learn, powerful programming language. It has efficient
high-level data structures and a simple but effective approach to object-oriented
programming. Python’s elegant syntax and dynamic typing, together with its
interpreted nature, make it an ideal language for scripting and rapid application
development in many areas on most platforms.
The Python interpreter and the extensive standard library are freely available in
source or binary form for all major platforms. Python is a high-level,
interpreted, interactive and object-oriented scripting language. Python is
designed to be highly readable. It frequently uses English keywords where
other languages use punctuation, and it has fewer syntactic constructions than
other languages. Python can be used on a server to create web applications.
Python works on different platforms (Windows, Mac, Linux, Raspberry Pi, etc).
Python can connect to database systems. It can also read and modify files.
Python is Interpreted − Python is processed at runtime by the interpreter. You
do not need to compile your program before executing it. This is similar to
Perl and PHP.
Python is Interactive − You can actually sit at a Python prompt and interact
with the interpreter directly to write your programs.
Python is Object-Oriented − Python supports Object-Oriented style or
technique of programming that encapsulates code within objects.
Python is a Beginner's Language − Python is a great language for the
beginner-level programmers and supports the development of a wide range of
applications from simple text processing to WWW browsers to games.
Python is derived from many other languages, including ABC, Modula-3, C,
C++, Algol-68, Smalltalk, and Unix shell and other scripting languages.
Often, programmers fall in love with Python because of the increased
productivity it provides. Since there is no compilation step, the edit-test-debug
cycle is incredibly fast. Debugging Python programs is easy: a bug or bad input
will never cause a segmentation fault. Instead, when the interpreter discovers an
error, it raises an exception. When the program doesn't catch the exception, the
interpreter prints a stack trace. A source level debugger allows inspection of
local and global variables, evaluation of arbitrary expressions, setting
breakpoints, stepping through the code a line at a time, and so on. The debugger
is written in Python itself, testifying to Python's introspective power. On the
other hand, often the quickest way to debug a program is to add a few print
statements to the source: the fast edit-test-debug cycle makes this simple
approach very effective.
Python was conceived in the late 1980s by Guido van Rossum at Centrum
Wiskunde & Informatica (CWI) in the Netherlands as a successor to the ABC
language (itself inspired by SETL), capable of exception handling and
interfacing with the Amoeba operating system. Its implementation began in
December 1989. Van Rossum shouldered sole responsibility for the project, as
the lead developer, until 12 July 2018, when he announced his "permanent
vacation" from his responsibilities as Python's Benevolent Dictator For Life, a
title the Python community bestowed upon him to reflect his long-term
commitment as the project's chief decision-maker. He now shares his
leadership as a member of a five-person steering council. In January 2019,
active Python core developers elected Brett Cannon, Nick Coghlan, Barry
Warsaw, Carol Willing and Van Rossum to a five-member "Steering Council"
to lead the project.
Python 2.0 was released on 16 October 2000 with many major new features,
including a cycle-detecting garbage collector and support for Unicode.

Python 3.0 was released on 3 December 2008. It was a major revision of the
language that is not completely backward-compatible. Many of its major
features were backported to Python 2.6.x and 2.7.x version series. Releases of
Python 3 include the 2to3 utility, which automates (at least partially) the
translation of Python 2 code to Python 3.
Python 2.7's end-of-life date was initially set at 2015 then postponed to 2020 out
of concern that a large body of existing code could not easily be forward-ported
to Python 3.
Python is a multi-paradigm programming language. Object-oriented
programming and structured programming are fully supported, and many of its
features support functional programming and aspect-oriented programming
(including by metaprogramming and metaobjects (magic methods)). Many other
paradigms are supported via extensions, including design by contract and logic
programming.
Python uses dynamic typing and a combination of reference counting and a
cycle-detecting garbage collector for memory management. It also features
dynamic name resolution (late binding), which binds method and variable
names during program execution.
Python's design offers some support for functional programming in the Lisp
tradition. It has filter, map, and reduce functions; list comprehensions,
dictionaries, sets, and generator expressions. The standard library has two
modules (itertools and functools) that implement functional tools borrowed
from Haskell and Standard ML.
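These facilities can be illustrated with a few lines of standard-library Python:

```python
# Functional tools: filter/map, comprehensions, generators, and functools.
from functools import reduce
from itertools import islice

nums = [1, 2, 3, 4, 5]

evens = list(filter(lambda n: n % 2 == 0, nums))    # keep even numbers
doubled = list(map(lambda n: n * 2, nums))          # transform each element
squares = [n * n for n in nums]                     # list comprehension
total = reduce(lambda a, b: a + b, nums)            # fold to a single value

# itertools works lazily on generator expressions:
first_three = list(islice((n * n for n in nums), 3))
```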
Rather than having all of its functionality built into its core, Python was
designed to be highly extensible. This compact modularity has made it
particularly popular as a means of adding programmable interfaces to existing
applications. Van Rossum's vision of a small core language with a large
standard library and easily extensible interpreter stemmed from his frustrations
with ABC, which espoused the opposite approach.
MYSQL
MySQL Server is a powerful database management system with which the user can
create applications that require little or no programming. It can be
administered through GUI tools such as phpMyAdmin, which make it easier to
develop richer, more polished applications. There are quite a few reasons to
use it, the first being that MySQL is a feature-rich program that can handle
any database-related task you have. You can create places to store your data,
build tools that make it easy to read and modify your database contents, and
ask questions of your data. MySQL is a relational database: a database that
stores information about related objects. In MySQL, a database means a
collection of tables that hold data, together with the other related objects,
such as queries, forms and reports, that are used to implement functions
effectively.

The MySQL database can act as a back-end database with PHP as a front end,
and MySQL supports the user with its powerful database management functions.
A beginner can create his/her own database very simply with a few mouse
clicks. Another good reason to use MySQL as a back-end tool is that it is
overwhelmingly popular open-source software.

MySQL is a freely available open source Relational Database Management


System (RDBMS) that uses Structured Query Language (SQL). SQL is the most
popular language for adding, accessing and managing content in a database. It is
most noted for its quick processing, proven reliability, ease and flexibility of
use. MySQL is an essential part of almost every open source PHP application.
One of the most important things when using MySQL is to have a
MySQL-specialised host. Its name is a combination of "My", the name of
co-founder Michael Widenius's daughter, and "SQL", the abbreviation for
Structured Query Language.
MySQL is free and open-source software under the terms of the GNU General
Public License, and is also available under a variety of proprietary licenses.
MySQL was owned and sponsored by the Swedish company MySQL AB,
which was bought by Sun Microsystems (now Oracle Corporation). In 2010,
when Oracle acquired Sun, Widenius forked the open-source MySQL project to
create MariaDB.

MySQL is written in C and C++. Its SQL parser is written in yacc, but it uses
a home-brewed lexical analyzer. MySQL works on many system platforms,
including AIX, BSDi, FreeBSD, HP-UX, eComStation, i5/OS, IRIX, Linux,
macOS, Microsoft Windows, NetBSD, Novell NetWare, OpenBSD,
OpenSolaris, OS/2 Warp, QNX, Oracle Solaris, Symbian, SunOS, SCO
OpenServer, SCO UnixWare, Sanos and Tru64. A port of MySQL to OpenVMS
also exists.

MySQL was created by a Swedish company, MySQL AB, founded by David


Axmark, Allan Larsson and Michael "Monty" Widenius. Original development
of MySQL by Widenius and Axmark began in 1994. The first version of
MySQL appeared on 23 May 1995. It was initially created for personal usage
from mSQL based on the low-level language ISAM, which the creators
considered too slow and inflexible. They created a new SQL interface, while
keeping the same API as mSQL. By keeping the API consistent with the mSQL
system, many developers were able to use MySQL instead of the (proprietarily
licensed) mSQL antecedent.

MySQL is based on a client-server model. The core of MySQL is MySQL


server, which handles all of the database instructions (or commands). MySQL
server is available as a separate program for use in a client-server networked
environment and as a library that can be embedded (or linked) into separate
applications. MySQL operates along with several utility programs which
support the administration of MySQL databases. Commands are sent to
MySQLServer via the MySQL client, which is installed on a computer. MySQL
was originally developed to handle large databases quickly. Although MySQL
is typically installed on only one machine, it is able to send the database to
multiple locations, as users are able to access it via different MySQL client
interfaces. These interfaces send SQL statements to the server and then display
the results.

MySQL enables data to be stored and accessed across multiple storage engines,
including InnoDB, CSV, and NDB. MySQL is also capable of replicating data
and partitioning tables for better performance and durability. MySQL users
aren't required to learn new commands; they can access their data using
standard SQL commands.
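From Python, MySQL is usually driven through a DB-API (PEP 249) driver such as mysql-connector-python. The sketch below uses the standard-library sqlite3 module, which follows the same API, so it runs without a MySQL server; with MySQL, only the `connect()` call and the placeholder style (`%s` instead of `?`) would differ.

```python
# Standard SQL through Python's DB-API (sqlite3 as a serverless stand-in).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accidents (area TEXT, casualties INTEGER)")
cur.executemany("INSERT INTO accidents VALUES (?, ?)",
                [("Urban", 3), ("Rural", 1), ("Urban", 2)])
cur.execute("SELECT area, SUM(casualties) FROM accidents "
            "GROUP BY area ORDER BY area")
rows = cur.fetchall()
print(rows)  # casualties summed per area
conn.close()
```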

The RDBMS supports large databases with millions of records and supports many
data types, including signed or unsigned integers 1, 2, 3, 4, and 8 bytes
long; FLOAT; DOUBLE; CHAR; VARCHAR; BINARY; VARBINARY; TEXT; BLOB; DATE;
TIME; DATETIME; TIMESTAMP; YEAR; SET; ENUM; and OpenGIS spatial types.
Fixed- and variable-length string types are also supported.
HYPER TEXT MARKUP LANGUAGE (HTML)

HTML is an application of the Standard Generalized Markup Language


(SGML), which was approved as an international standard in the year 1986.
SGML provides a way to encode hyper documents so they can be interchanged.

SGML is also a meta-language for formally describing document markup systems.
In fact, HTML uses SGML to define a language that describes a WWW
hyper-document's structure and interconnectivity.

Following the rigors of SGML, Tim Berners-Lee brought HTML to the world in
1990. Since then, many have found it easy to use but sometimes quite
limiting. These limiting factors are being addressed by the World Wide Web
Consortium (aka W3C) at MIT. But HTML had to start somewhere, and its success
argues that it didn't start out too badly.

Hypertext Markup Language (HTML) is the standard markup language for


documents designed to be displayed in a web browser. It can be assisted by
technologies such as Cascading Style Sheets (CSS) and scripting languages
such as JavaScript. HTML is a computer language devised to allow website
creation. These websites can then be viewed by anyone else connected to the
Internet. It is relatively easy to learn, with the basics being accessible to most
people in one sitting; and quite powerful in what it allows you to create. It is
constantly undergoing revision and evolution to meet the demands and
requirements of the growing Internet audience under the direction of the W3C,
the organisation charged with designing and maintaining the language.

HyperText is the method by which you move around on the web — by clicking
on special text called hyperlinks which bring you to the next page. The fact that
it is hyper just means it is not linear — i.e. you can go to any place on the
Internet whenever you want by clicking on links — there is no set order to do
things in. Markup is what HTML tags do to the text inside them. They mark it
as a certain type of text (italicised text, for example). HTML is a Language, as it
has code-words and syntax like any other language.

HTML consists of a series of short codes typed into a text-file by the site author
— these are the tags. The text is then saved as a html file, and viewed through a
browser, like Internet Explorer or Netscape Navigator. This browser reads the
file and translates the text into a visible form, hopefully rendering the page as
the author had intended. Writing your own HTML entails using tags correctly to
create your vision. You can use anything from a rudimentary text-editor to a
powerful graphical editor to create HTML pages.

The tags are what separate normal text from HTML code. You might know
them as the words between the <angle-brackets>. They allow all the cool stuff
like images and tables and stuff, just by telling your browser what to render on
the page. Different tags will perform different functions. The tags themselves
don’t appear when you view your page through a browser, but their effects do.
The simplest tags do nothing more than apply formatting to some text.

Web browsers receive HTML documents from a web server or from local
storage and render the documents into multimedia web pages. HTML describes
the structure of a web page semantically and originally included cues for the
appearance of the document.

HTML elements are the building blocks of HTML pages. With HTML
constructs, images and other objects such as interactive forms may be embedded
into the rendered page. HTML provides a means to create structured documents
by denoting structural semantics for text such as headings, paragraphs, lists,
links, quotes and other items. HTML elements are delineated by tags, written
using angle brackets. Tags such as <img /> and <input /> directly introduce
content into the page. Other tags such as <p> surround and provide information
about document text and may include other tags as sub-elements. Browsers do
not display the HTML tags, but use them to interpret the content of the page.

HTML can embed programs written in a scripting language such as JavaScript,


which affects the behavior and content of web pages. Inclusion of CSS defines
the look and layout of content. The World Wide Web Consortium (W3C),
former maintainer of the HTML and current maintainer of the CSS standards,
has encouraged the use of CSS over explicit presentational HTML since 1997.

The first publicly available description of HTML was a document called


"HTML Tags", first mentioned on the Internet by Tim Berners-Lee in late 1991.
It describes 18 elements comprising the initial, relatively simple design of
HTML. Except for the hyperlink tag, these were strongly influenced by
SGMLguid, an in-house Standard Generalized Markup Language (SGML)-
based documentation format at CERN. Eleven of these elements still exist in
HTML 4.

After the HTML and HTML+ drafts expired in early 1994, the IETF created an
HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML
specification intended to be treated as a standard against which future
implementations should be based.

HTML alone is not the whole story: since making websites became more popular
and needs increased, many other supporting languages have been created to
allow new things to happen, and HTML is modified every few years to make way
for improvements. Cascading Style Sheets are used to control how your pages
are presented and to make pages more accessible. Basic special effects and
interaction are provided by JavaScript, which adds a lot of power to basic
HTML. Most of this advanced material comes later down the road, but when
using all of these technologies together, you have a lot of power at your
disposal.
CSS

Cascading Style Sheets (CSS) is a style sheet language used for describing the
presentation of a document written in a markup language like HTML. CSS is a
cornerstone technology of the World Wide Web, alongside HTML and
JavaScript. CSS is designed to enable the separation of presentation and
content, including layout, colors, and fonts. This separation can improve content
accessibility, provide more flexibility and control in the specification of
presentation characteristics, enable multiple web pages to share formatting by
specifying the relevant CSS in a separate .css file, and reduce complexity and
repetition in the structural content.

Separation of formatting and content also makes it feasible to present the same
markup page in different styles for different rendering methods, such as on-
screen, in print, by voice (via speech-based browser or screen reader), and on
Braille-based tactile devices. CSS also has rules for alternate formatting if the
content is accessed on a mobile device. The name cascading comes from the
specified priority scheme to determine which style rule applies if more than one
rule matches a particular element. This cascading priority scheme is predictable.

The CSS specifications are maintained by the World Wide Web Consortium
(W3C). Internet media type (MIME type) text/css is registered for use with CSS
by RFC 2318 (March 1998). The W3C operates a free CSS validation service
for CSS documents. In addition to HTML, other markup languages support the
use of CSS including XHTML, plain XML, SVG, and XUL.

CSS has a simple syntax and uses a number of English keywords to specify the
names of various style properties. A style sheet consists of a list of rules. Each
rule or rule-set consists of one or more selectors, and a declaration block.
Before CSS, nearly all presentational attributes of HTML documents were
contained within the HTML markup. All font colors, background styles,
element alignments, borders and sizes had to be explicitly described, often
repeatedly, within the HTML. CSS lets authors move much of that information
to another file, the style sheet, resulting in considerably simpler HTML.

CSS stands for "Cascading Style Sheet." Cascading style sheets are used to
format the layout of Web pages. They can be used to define text styles, table
sizes, and other aspects of Web pages that previously could only be defined
in a page's HTML.

CSS helps Web developers create a uniform look across several pages of a Web
site. Instead of defining the style of each table and each block of text within a
page's HTML, commonly used styles need to be defined only once in a CSS
document. Once the style is defined in a cascading style sheet, it can be used by
any page that references the CSS file. Plus, CSS makes it easy to change styles
across several pages at once. For example, a Web developer may want to
increase the default text size from 10pt to 12pt for fifty pages of a Web site. If
the pages all reference the same style sheet, the text size only needs to be
changed on the style sheet and all the pages will show the larger text.

While CSS is great for creating text styles, it is helpful for formatting other
aspects of Web page layout as well. For example, CSS can be used to define the
cell padding of table cells, the style, thickness, and color of a table's border, and
the padding around images or other objects. CSS gives Web developers more
exact control over how Web pages will look than HTML does. This is why most
Web pages today incorporate cascading style sheets.

CSS is created and maintained through a group of people within the W3C called
the CSS Working Group. The CSS Working Group creates documents called
specifications. When a specification has been discussed and officially ratified
by the W3C members, it becomes a recommendation. These ratified
specifications are called recommendations because the W3C has no control over
the actual implementation of the language. Independent companies and
organizations create that software.

JAVASCRIPT

JavaScript is a dynamic computer programming language. It is lightweight and


most commonly used as a part of web pages, whose implementations allow
client-side script to interact with the user and make dynamic pages. It is an
interpreted programming language with object-oriented capabilities. JavaScript
was first known as LiveScript, but Netscape changed its name to JavaScript,
possibly because of the excitement being generated by Java. JavaScript made its
first appearance in Netscape 2.0 in 1995 with the name LiveScript. The general-
purpose core of the language has been embedded in Netscape, Internet Explorer,
and other web browsers

Client-side JavaScript is the most common form of the language. The script
should be included in or referenced by an HTML document for the code to be
interpreted by the browser. It means that a web page need not be a static HTML,
but can include programs that interact with the user, control the browser, and
dynamically create HTML content. The JavaScript client-side mechanism
provides many advantages over traditional CGI server-side scripts. For
example, you might use JavaScript to check whether the user has entered a
valid e-mail address in a form field. The JavaScript code is executed when
the user submits the form, and the entries are submitted to the web server
only if they are all valid. JavaScript can also be used to trap
user-initiated events such as button clicks, link navigation, and other
actions that the user initiates explicitly or implicitly.
JavaScript can be implemented using JavaScript statements that are placed
within the <script>... </script> HTML tags in a web page.

You can place the <script> tags, containing your JavaScript, anywhere within
your web page, but it is normally recommended that you should keep it within
the <head> tags.

The <script> tag alerts the browser program to start interpreting all the text
between these tags as a script.

JavaScript ignores spaces, tabs, and newlines that appear in JavaScript
programs. You can use spaces, tabs, and newlines freely in your program, and
you are free to format and indent your programs in a neat and consistent way
that makes the code easy to read and understand. Simple statements in
JavaScript are generally followed by a semicolon character, just as they are in
C, C++, and Java. JavaScript, however, allows you to omit this semicolon if
each of your statements is placed on a separate line: two statements written on
separate lines can each drop the trailing semicolon.

JavaScript is a case-sensitive language. This means that language keywords,
variables, function names, and any other identifiers must always be typed with a
consistent capitalization of letters. So the identifiers Time and TIME will
convey different meanings in JavaScript.

All modern browsers come with built-in support for JavaScript. Occasionally,
you may need to enable or disable this support manually. This section explains
the procedure for enabling and disabling JavaScript support in your browser:
Internet Explorer, Firefox, Chrome, or Opera.

JavaScript, often abbreviated as JS, is an interpreted programming language that
conforms to the ECMAScript specification. JavaScript is high-level, often just-
in-time compiled, and multi-paradigm. It has curly-bracket syntax, dynamic
typing, prototype-based object-orientation, and first-class functions. Alongside
HTML and CSS, JavaScript is one of the core technologies of the World Wide
Web. JavaScript enables interactive web pages and is an essential part of web
applications. The vast majority of websites use it for client-side page behavior,
and all major web browsers have a dedicated JavaScript engine to execute it.
As a multi-paradigm language, JavaScript supports event-driven, functional,
and imperative programming styles. It has application programming interfaces
(APIs) for working with text, dates, regular expressions, standard data
structures, and the Document Object Model (DOM). However, the language
itself does not include any input/output (I/O), such as networking, storage, or
graphics facilities, as the host environment (usually a web browser) provides
those APIs. Originally used only in web browsers, JavaScript engines are now
also embedded in server-side website deployments and non-browser
applications. Although there are similarities between JavaScript and Java,
including language name, syntax, and respective standard libraries, the two
languages are distinct and differ greatly in design.
Flask

Flask is a web application framework written in Python. It is developed by
Armin Ronacher, who leads an international group of Python enthusiasts named
Pocco. Flask is based on the Werkzeug WSGI toolkit and the Jinja2 template
engine; both are Pocco projects. Flask is classified as a microframework
because it does not require particular tools or libraries. It has no database
abstraction layer, form validation, or any other components where pre-existing
third-party libraries provide common functions. However, Flask supports
extensions that can add application features as if they were implemented in
Flask itself. Extensions exist for object-relational mappers, form validation,
upload handling, various open authentication technologies, and several common
framework-related tools. Extensions are updated far more frequently than the
core Flask program.

A web application framework, or simply web framework, is a collection of
libraries and modules that enables a web application developer to write
applications without having to bother about low-level details such as protocols,
thread management, and so on. The Web Server Gateway Interface (WSGI) has been
adopted as a standard for Python web application development; WSGI is a
specification for a universal interface between the web server and the web
application.
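As an illustration of that WSGI contract (not part of the project code), a complete WSGI application can be a single callable that receives the request environment and a start_response function:

```python
# Minimal WSGI application: any callable with this signature can be
# hosted by any WSGI-compliant web server (illustrative sketch).
def application(environ, start_response):
    status = "200 OK"
    headers = [("Content-Type", "text/plain; charset=utf-8")]
    start_response(status, headers)
    return [b"Hello from a bare WSGI app"]

# Invoking it directly, the way a server would:
def demo_start_response(status, headers):
    print(status)  # a real server records the status and headers here

body = application({"REQUEST_METHOD": "GET"}, demo_start_response)
print(b"".join(body).decode())
```

The standard library's wsgiref.simple_server can host such a callable during development; frameworks like Flask wrap this same interface in a friendlier API.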

Historically, Python 2.6 or higher was required to install Flask. Although Flask
and its dependencies worked with Python 3 (Python 3.3 onwards), many Flask
extensions did not support it properly at the time, so documentation of that era
recommended installing Flask on Python 2.7; current Flask releases require
Python 3. virtualenv is a virtual Python environment builder. It helps a user
create multiple Python environments side by side, and thereby avoids
compatibility issues between different versions of libraries. Installing it may
need administrator privileges: add sudo before pip on Linux/Mac OS, or log in as
Administrator on Windows. On Ubuntu, virtualenv may be installed using the
package manager.

The route() function of the Flask class is a decorator, which tells the
application which URL should call the associated function. Importing the flask
module in the project is mandatory. An object of the Flask class is our WSGI
application; the Flask constructor takes the name of the current module
(__name__) as an argument. The rule parameter represents the URL binding with
the function, and the options parameter is a list of parameters to be forwarded
to the underlying Rule object. Finally, the run() method of the Flask class runs
the application on the local development server.
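Putting those pieces together, a minimal Flask application might look like the following (an illustrative sketch; the route and message are assumptions, not the project's actual code):

```python
from flask import Flask

app = Flask(__name__)   # the Flask constructor takes the current module name

@app.route("/")         # route() binds the URL rule "/" to the function below
def hello_world():
    return "Hello, World!"

if __name__ == "__main__":
    app.run()           # starts the local development server
```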

A Flask application is started by calling the run() method. However, while the
application is under development, it would have to be restarted manually for
each change in the code. To avoid this inconvenience, enable debug support: the
server will then reload itself when the code changes, and it also provides a
useful debugger to track any errors in the application. Debug mode is enabled by
setting the debug property of the application object to True before running it,
or by passing the debug parameter to the run() method.

Modern web frameworks use the routing technique to help a user remember
application URLs, which is useful for accessing a desired page directly without
having to navigate from the home page. The route() decorator in Flask is used to
bind a URL to a function. As a result, if a user visits the
http://localhost:5000/hello URL, the output of the hello_world() function is
rendered in the browser. The add_url_rule() function of an application object is
also available to bind a URL to a function, just as route() is used. It is also
possible to build a URL dynamically by adding variable parts to the rule
parameter. A variable part is marked as <variable-name> and is passed as a
keyword argument to the function with which the rule is associated. For example,
if the rule parameter of the route() decorator contains a <name> variable part
attached to the URL '/hello', then entering
http://localhost:5000/hello/TutorialsPoint in the browser supplies
'TutorialsPoint' to the hello() function as an argument.
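The example the paragraph refers to does not appear in the text; a sketch consistent with the description might be:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/hello/<name>")     # <name> is the variable part of the rule
def hello(name):
    return "Hello %s!" % name

# add_url_rule() is the non-decorator way to create the same binding:
# app.add_url_rule("/hello/<name>", "hello", hello)

if __name__ == "__main__":
    app.run()
```

Visiting /hello/TutorialsPoint would then render "Hello TutorialsPoint!" in the browser.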

An advantage of using Flask is that the framework is light, so the risk of
encountering Flask security bugs is minimal. At the same time, a drawback is
that it requires some effort on the part of the programmer to grow the list of
dependencies via plugins. A great feature of Flask is its template engine. The
purpose of templates is to allow a basic layout to be configured for web pages,
specifying which elements are susceptible to change. You can thus define your
template once and keep it the same across all pages of a website. With the aid
of a template engine, you can save a lot of time when setting up your
application, and also during updates and maintenance. Overall, Flask is easy to
learn and manage as a scalable tool. It allows any type of approach or
programming technique, as no restrictions are imposed on the app architecture or
data abstraction layers. You can even run it on embedded systems such as a
Raspberry Pi, and your web app can be loaded on any device, including a mobile
phone, desktop PC, or even a TV. Besides, it benefits from a community that
offers support and suggests solutions to the multitude of problems that
programmers might face when using Flask in Python. The core benefit of Flask is
that the programmer controls everything, while gaining a deeper understanding of
how the internal mechanics of frameworks function.

Werkzeug

Werkzeug is a utility library for the Python programming language, in other
words a toolkit for Web Server Gateway Interface (WSGI) applications, and is
licensed under a BSD License. Werkzeug provides software objects for requests,
responses, and utility functions. It can be used to build a custom software
framework on top of it, and at the time it supported Python 2.6, 2.7, and 3.3.

Jinja

Jinja (template engine)

Jinja, also by Ronacher, is a template engine for the Python programming
language and is licensed under a BSD License. Similar to the Django web
framework, it ensures that templates are evaluated in a sandbox.
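A small sketch of that sandboxed template evaluation with Jinja2 (illustrative; the template string is made up):

```python
from jinja2.sandbox import SandboxedEnvironment

# The sandboxed environment restricts what template code may access,
# so untrusted templates cannot reach arbitrary Python internals.
env = SandboxedEnvironment()
template = env.from_string("Hello {{ name|upper }}!")
print(template.render(name="world"))
```

The upper filter capitalizes the substituted value, so this renders "Hello WORLD!".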

A framework "is a code library that makes a developer's life easier when
building reliable, scalable, and maintainable web applications" by providing
reusable code or extensions for common operations. There are a number of
frameworks for Python, including Flask, Tornado, Pyramid, and Django. Flask
is an API of Python that allows to build up web-applications. It was developed
by Armin Ronacher. Flask’s framework is more explicit than Django’s
framework and is also easier to learn because it have less base code to
implement a simple web-Application. A Web-Application Framework or Web
Framework is the collection of modules and libraries that helps the developer to
write applications without writing the low-level codes such as protocols, thread
management, etc. Flask is based on WSGI(Web Server Gateway Interface)
toolkit and Jinja2 template engine

Why Flask?

 Easy to use
 Built-in development server and debugger
 Integrated unit testing support
 RESTful request dispatching
 Uses Jinja2 templating
 Support for secure cookies (client-side sessions)
 100% WSGI 1.0 compliant
 Unicode based
 Extensively documented

Database:

A database is simply an organized collection of data, just like a phone book. A
MySQL database includes such objects as tables, queries, forms, and more.

Tables:

In MySQL, tables are collections of similar data. The tables can all be
organized differently and contain mostly different information, but they should
all be in the same database file. For instance, we may have a database file
called video store, containing tables named members, tapes, reservations, and so
on. These tables are stored in the same database file because they are often
used together to create reports or to help fill out on-screen forms.

Relational database:

MySQL is a relational database. Relational database tools like MySQL can help us
manage information in three important ways:

 Reduce redundancy
 Facilitate the sharing of information
 Keep data accurate.

Fields

Fields are places in a table where we store individual chunks of information.

Primary key and other indexed fields:

MySQL uses key fields and indexing to help speed up many database operations. We
can tell MySQL which fields should be key fields, or MySQL can assign them
automatically.
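The idea of key fields and automatic key assignment can be sketched with a small script (SQLite is used here purely so the example runs standalone; the equivalent MySQL DDL is nearly identical):

```python
import sqlite3

con = sqlite3.connect(":memory:")       # throwaway in-memory database
con.execute("""
    CREATE TABLE members (
        member_id INTEGER PRIMARY KEY,  -- key field; values assigned automatically
        name      TEXT NOT NULL,
        phone     TEXT
    )
""")
# A secondary index speeds up lookups on a non-key field.
con.execute("CREATE INDEX idx_members_name ON members(name)")

con.execute("INSERT INTO members (name, phone) VALUES (?, ?)",
            ("Alice", "555-0101"))
print(con.execute("SELECT member_id, name FROM members").fetchone())
```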

Controls and objects:

Controls are objects used to display, print, and use our data. They can be
things like field labels that we drag around when designing reports, or they can
be pictures, titles for reports, or boxes containing the results of
calculations.

Queries and dynasets:

Queries are requests for information. When the database responds with its list
of data, that response constitutes a dynaset: a dynamic set of data meeting our
query criteria. Because of the way the system is designed, dynasets are updated
even after we have made our query.

Forms:

Forms are on-screen arrangements that make it easy to enter and read data. We
can also print the forms if we want to. We can design forms ourselves, or let an
auto-form feature generate them.

Reports:

Reports are paper copies of dynasets. We can also print reports to disk if we
like. The database tools help us create the reports; there are even wizards for
complex printouts.

Properties:

Properties are the specifications we assign to parts of our database design. We
can define properties for fields, forms, controls, and most other database
objects.
DESIGN AND DEVELOPMENT PROCESS

FUNDAMENTAL DESIGN CONCEPTS

System design is a "how to" approach to the creation of a new system. System
design goes through two phases:

- Logical design

- Physical design

Logical design reviews the present physical system, prepares input and output
specifications, and makes edit, security, and control specifications.

Physical design maps out the details of the physical system, plans the system
implementation, and devises a test and implementation plan.

DESIGN PROCESS

INPUT DESIGN

Input design is the process of converting user-oriented input to a computer-
based format. The goal of input design is to make data entry easy, logical, and
error-free. Errors in the input data are controlled by the input design. The
quality of the input determines the quality of the system output.

All the data entry screens are interactive in nature, so that the user can
directly enter data according to the prompted messages. The users are also
provided with the option of selecting an appropriate input from a list of
values. This reduces the number of errors that would otherwise be likely to
arise if the values were typed in by the user directly.

Input design is one of the most important phases of system design. It is the
process in which the inputs received by the system are planned and designed so
as to get the necessary information from the user, eliminating information that
is not required. The aim of input design is to ensure the maximum possible level
of accuracy and also to ensure that the input is accessible and understood by
the user. Input design is the part of the overall system design that requires
very careful attention: if the data going into the system is incorrect, then the
processing and output will magnify the errors.

The objectives considered during input design are:

 Nature of input processing.
 Flexibility and thoroughness of validation rules.
 Handling of properties within the input documents.
 Screen design to ensure accuracy and efficiency of the input's
relationship with files.
 Careful design of the input, including attention to error
handling, controls, batching, and validation procedures.

Input design features can ensure the reliability of the system and produce
results from accurate data, or they can result in the production of erroneous
information.

Data Flow Diagram (DFD)

The first step is to draw a data flow diagram (DFD). The DFD was first
developed by Larry Constantine as a way of expressing system requirements in
graphical form.

A DFD, also known as a "bubble chart", has the purpose of clarifying system
requirements and identifying the major transformations that will become programs
in system design. It is therefore the starting point of the design phase, which
functionally decomposes the requirements specifications down to the lowest level
of detail. A DFD consists of a series of bubbles joined by the data flows in the
system. The purpose of data flow diagrams is to provide a semantic bridge
between users and systems developers. The diagrams are:

• Graphical, eliminating thousands of words;

• Logical representations, modeling WHAT a system does, rather than
physical models showing HOW it does it;

• Hierarchical, showing systems at any level of detail; and

• Jargon-free, allowing user understanding and reviewing.

The goal of data flow diagramming is to have a commonly understood model of a
system. The diagrams are the basis of structured systems analysis. Data flow
diagrams are supported by other techniques of structured systems analysis, such
as data structure diagrams, data dictionaries, and procedure-representing
techniques such as decision tables, decision trees, and structured English.

External Entity

An external entity is a source or destination of a data flow which is outside
the area of study. Only those entities which originate or receive data are
represented on a business process diagram. The symbol used is an oval containing
a meaningful and unique identifier.

Process

A process shows a transformation or manipulation of data flows within the
system. The symbol used is a rectangular box containing three descriptive
elements. Firstly, an identification number appears in the upper left-hand
corner; this is allocated arbitrarily at the top level and serves as a unique
reference. Secondly, a location appears to the right of the identifier and
describes where in the system the process takes place. Thirdly, a brief
description of the process occupies the remainder of the box.

Data Flow

A data flow shows the flow of information from its source to its destination. A
data flow is represented by a line, with arrowheads showing the direction of
flow. Information always flows to or from a process and may be written, verbal
or electronic. Each data flow may be referenced by the processes or data stores
at its head and tail, or by a description of its contents.

Data Store

A data store is a holding place for information within the system. It is
represented by an open-ended narrow rectangle. Data stores may be long-term
files such as sales ledgers, or may be short-term accumulations, for example
batches of documents that are waiting to be processed. Each data store should be
given a reference followed by an arbitrary number.

Resource Flow

A resource flow shows the flow of any physical material from its source to its
destination. For this reason, resource flows are sometimes referred to as
physical flows. The physical material in question should be given a meaningful
name. Resource flows are usually restricted to early, high-level diagrams and
are used when a description of the physical flow of materials is considered
important to the analysis.
OUTPUT DESIGN

The output of the system is delivered either on screen or as hard copies. Output
design aims at communicating the results of processing to the users. The reports
are generated to suit the needs of the users and have to be produced at
appropriate levels. In our project, outputs are rendered as HTML pages. As it is
a web application, the output is designed in a very user-friendly manner and is
delivered through the screen most of the time.

CODE DESIGN

The main purpose of code design is to simplify the coding and to achieve better
performance and quality, free of errors. The code is prepared in such a way that
the internal procedures are meaningful, and validation messages are displayed
for each column. The variables are named in such a way that anyone other than
the person who developed the package can understand their purpose.

To reduce the server load, the project is designed so that most field validation
is done as client-side validation, which is more effective.

DATABASE DESIGN

Database design involves the creation of tables that are represented in the
physical database as stored files. They have their own existence. Each table
consists of rows and columns, where each row can be viewed as a record of
related information and each column can be viewed as a field of data of the same
type. The tables are also designed so that specified positions can hold a null
value. The database of the project is designed in such a way that values are
kept without redundancy and in normalized form.

DEVELOPMENT APPROACH

TOP DOWN APPROACH

The importance of the new system is that it is user-friendly, with a better
interface for the users working on it. It can overcome the problems of the
manual system and the security problem.

The top down approach to software development is an incremental approach to the
construction of the program structure. Modules are integrated by moving through
the control hierarchy, beginning with the main control module. Modules
subordinate to the main control module are incorporated into the structure in
either a depth-first or breadth-first manner.

The top down approach is performed in a series of five steps:

1. The main module, that is, the overall software, is divided into five
modules that are under the control of the main control module.
2. Depending on the top down approach selected, subordinate
stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each test, another stub is replaced with a real
component.
5. Regression testing may be conducted to ensure that new
errors have not been introduced.

TESTING AND IMPLEMENTATION

SYSTEM TESTING
Testing is the process of exercising software with the intent of finding, and
ultimately correcting, errors. This fundamental philosophy does not change for
web applications. Because web-based systems and applications reside on networks
and inter-operate with many different operating systems, browsers, hardware
platforms, and communication protocols, searching for errors is a significant
challenge for web applications.

Testing issues:

1. Client GUI should be considered.
2. Target environment and platform considerations.
3. Distributed database considerations.
4. Distributed processing considerations.

TESTING AND METHODOLOGIES

System testing is the stage of implementation aimed at ensuring that the system
works accurately and efficiently, as expected, before live operation commences.
It certifies that the whole set of programs hangs together. System testing
requires a test plan that consists of several key activities and steps for
program, string, system, and user acceptance testing. The implementation of the
newly designed package is important in adopting a successful new system.

Testing is an important stage in software development. The system test during
implementation should be a confirmation that all is correct and an opportunity
to show the users that the system works as they expected. It accounts for the
largest percentage of technical effort in the software development process.

The testing phase is the development phase that validates the code against the
functional specifications. Testing is vital to the achievement of the system
goals. The objective of testing is to discover errors. To fulfill this
objective, a series of test steps such as unit tests, integration tests,
validation tests, and system tests were planned and executed.

Unit testing

Here each program is tested individually, so any error in a unit is debugged.
Sample data are given for the unit testing. The unit test results are recorded
for further reference. During unit testing, the functions of the program unit,
its validation, and its limitations are tested.

Unit testing is testing changes made in an existing or new program. This test is
carried out during programming, and each module is found to be working
satisfactorily. For example, in the registration form, after entering all the
fields we click the submit button. When the submit button is clicked, all the
data in the form are validated, and only after validation are the entries added
to the database.
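The registration-form check described above can be expressed as a unit test. The validate_registration helper below is hypothetical, written only to illustrate the idea:

```python
import re
import unittest

def validate_registration(form):
    """Return True only if every required field passes its check (illustrative rules)."""
    if not form.get("username"):
        return False
    # A deliberately simple e-mail pattern, for demonstration purposes only.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", form.get("email", "")):
        return False
    return True

class RegistrationFormTest(unittest.TestCase):
    def test_valid_entries_pass(self):
        self.assertTrue(validate_registration(
            {"username": "blue", "email": "blue@example.com"}))

    def test_bad_email_fails(self):
        self.assertFalse(validate_registration(
            {"username": "blue", "email": "not-an-email"}))

if __name__ == "__main__":
    unittest.main(exit=False)
```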

Unit testing comprises the set of tests performed by an individual prior to
integration of the unit into a larger system. The situation is illustrated as
follows:

Coding -> Debugging -> Unit testing -> Integration testing

There are four categories of test that a programmer will typically perform on a
program unit:

1. Functional test
2. Performance test
3. Stress test
4. Structure test

Functional tests involve exercising the code with nominal input values for
which the expected results are known, as well as with boundary values and
special values.

Performance testing determines the amount of execution time spent in various
parts of the unit, program throughput, response time, and device utilization by
the program.

A variation of stress testing, called sensitivity testing, addresses situations
in which a very small range of data contained within the bounds of valid data
may cause extreme or even erroneous processing, or profound performance
degradation.

Structure testing is concerned with exercising the internal logic of a program
and traversing particular paths. Functional testing, stress testing, and
performance testing are referred to as "black box" testing, while structure
testing is referred to as "white box" testing.

VALIDATION TESTING

Software validation is achieved through a series of tests that demonstrate
conformity with requirements. The proposed system under consideration has been
tested by validation and found to be working satisfactorily.

OUTPUT TESTING

The output generated by the system under consideration is tested by asking the
users about the format they require. This can be done in two ways: on screen and
in printed format. The output format on the screen was found to be correct, as
per the format designed in the system test.

SYSTEM TESTING
In system testing, the whole system is tested; the interfaces between each
module and the program units are tested and the results recorded. This testing
is done with sample data. The security measures and the communication between
interfaces are tested.

System testing is actually a series of different tests whose primary purpose is
to fully exercise the computer-based system. Although each test has a different
purpose, all work to verify that all system elements are properly integrated and
perform their allocated functions.

It involves two kinds of activities, namely:

1. Integration testing

2. Acceptance testing

Integration testing

Integration testing is a systematic technique for constructing tests to uncover
errors associated with interfaces. The objective is to take unit-tested modules
and build the program structure that has been dictated by the design.

Acceptance testing

Acceptance testing involves planning and execution of functional tests,
performance tests, and stress tests to verify that the implemented system
satisfies its requirements.

Acceptance testing is the final stage of testing by the user: the various
possibilities of data are entered and the results are tested.

Validation testing

Software validation is achieved through a series of tests that demonstrate
conformity with requirements. Thus the proposed system under consideration has
been tested by validation and found to be working satisfactorily. For example,
when a customer enters a phone number, the field should contain only digits;
otherwise an error message is produced. Fields in all the other forms are
validated similarly.
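As a sketch, the phone-number rule mentioned above could be implemented like this (the exact length limits are assumptions for illustration):

```python
def is_valid_phone(value: str) -> bool:
    """Accept only digits (common separators stripped) of a plausible length."""
    digits = value.replace("-", "").replace(" ", "")
    return digits.isdigit() and 7 <= len(digits) <= 15

print(is_valid_phone("044-2345-6789"))   # a digits-only number passes
print(is_valid_phone("phone123"))        # letters cause rejection
```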

Testing results
All tests should be traceable to customer requirements, and the focus of testing
shifts progressively across the programs. Exhaustive testing is not possible; to
be more effective, testing should be that which has the highest probability of
finding errors.

The following are the attributes of a good test:

1. A good test has a high probability of finding errors.

2. A good test should be "best of breed".

3. A good test is neither too simple nor too complex.

QUALITY ASSURANCE

Quality assurance consists of the auditing and reporting functions of
management. The goal of quality assurance is to provide management with the data
necessary to be informed about product quality, thereby gaining the insight and
confidence that product quality is meeting its goals.

Greater emphasis on quality in an organization requires quality assurance to be
an integral part of information system development. The development process must
include checks throughout the process to ensure that the final product meets the
original user requirements.

Quality assurance thus becomes an important component of the development
process; it is included in the industry standard (IEEE 1993) on the development
process. The quality assurance process is integrated into a linear development
cycle through validation and verification performed at crucial system
development steps. The goal of management is to institute and monitor a quality
assurance program within the development process.

Quality assurance includes:

1. Validation of the system against requirements

2. Checks for errors in design documents and in the system itself

3. Quality assurance for usability

Quality assurance goals:

Correctness: The extent to which the program meets the system specifications and
user objectives.

Reliability: The degree to which the system performs its intended functions over
time.

Efficiency: The amount of computer resources required by a program to perform a
function.

Usability: The effort required to learn and operate the system.

Maintainability: The ease with which program errors are located and corrected.

Testability: The effort required to test a program to ensure its correct
performance.

Portability: The ease of transporting a program from one hardware configuration
to another.

Accuracy: The precision required in input editing, computation, and output.

GENERIC RISKS

Risk identification is the systematic attempt to specify threats to the project
plan (estimates, the schedule, resource overloading, etc.). By identifying known
and predictable risks, the first step is taken toward avoiding them when
possible and controlling them when necessary. There are two types of risk:

1. Generic risks
2. Product-specific risks

Generic risks are potential threats to every software project. Only those with a
clear understanding of the technology, the people, and the environment that are
specific to the project at hand can identify product-specific risks. To identify
product-specific risks, the project plan and the software statement of scope are
examined, and an answer to the following question is developed: what special
characteristics of this product may threaten the project plan?

One method for identifying risks is to create a risk-item checklist. The
checklist can be used for risk identification and focuses on some subset of
known and predictable risks in the following subcategories:

1. Product risks
2. Risks associated with the overall size of the software to be built or
modified
3. Business impacts
4. Risks associated with constraints imposed by management
5. Customer characteristics

Customer characteristics include risks associated with the sophistication of the
customer and the developer's ability to communicate with the customer in a
timely manner.

Different categories of risks are considered:

Project risks
Project risks identify potential budgetary, schedule, personnel (staffing and
organization), resource, and customer-requirement problems and their impact on
the software project.

Technical risks

Technical risks identify potential design, implementation, interface,
verification, and maintenance problems.

SECURITY TECHNOLOGIES AND POLICIES

Any system developed should be secured and protected against possible hazards.
Security measures are provided to prevent unauthorized access to the database at
various levels. Password protection and simple procedures to check unauthorized
access are provided to the users.

The user has to enter the user name and password, and if they are validated the
user can proceed to use the system. Otherwise, a new user should get registered
first.

When users register, they should provide authentication through JPG files (such
as a ration card copy or voter identity card copy). A multi-layer security
architecture comprising firewalls, filtering routers, encryption, and digital
certification must be assured in this project so that, in real time, user
details are protected from unauthorized access.

SYSTEM IMPLEMENTATION

Implementation is the stage in the project where the theoretical design is
turned into a working system. The most crucial stage is achieving a successful
new system and giving the users confidence that the new system will work
efficiently and effectively. The implementation stage consists of:

 Testing the developed program with sample data
 Detection and correction of errors
 Checking whether the system meets the user requirements
 Making necessary changes as desired by users
 Training user personnel

IMPLEMENTATION PROCEDURES

The implementation phase is less creative than system design. A system design may be dropped at any time prior to implementation, although this becomes more difficult once it reaches the design phase. The final report of the implementation phase includes procedural flowcharts, record layouts, and a workable plan for implementing the candidate system design as an operational design.

USER TRAINING

It is designed to prepare the users for testing and converting to the new system. There are several ways to train the users:

1) User manual

2) Help screens

3) Training demonstrations.

1) User manual:
The summary of the important functions of the system and software can be provided as a document to the user. User training is designed to prepare the user for testing and converting to the new system.

1. Open the browser (HTTP page)
2. Type the URL with the file name index.php in the address bar
3. When index.php opens, an existing user types the username and password
4. Click the submit button
2) Help screens:
This feature is now available in every software package, especially when it is used with a menu. The user selects the “Help” option from the menu, and the system displays the necessary description or information for the user's reference.
3) Training demonstration:
Another user training element is a training demonstration. A live demonstration with personal contact is extremely effective for training users.
OPERATIONAL DOCUMENTATION
Documentation is a means of communication; it establishes the design and performance criteria for the phases of the project. Documentation is descriptive information that portrays the use and/or operation of the system. The user has to enter the user name and password; if these are valid, he or she can use the system. Otherwise, a new user needs to register.

1) Documentation tools:

Document production and desktop publishing tools support nearly every aspect of software development. Most software development organizations spend a substantial amount of time developing documents, and in many cases the documentation process itself is quite inefficient. It is not unusual for a significant share of a software development effort to be spent on documentation. For this reason, documentation tools provide an important opportunity to improve productivity.
2) Document restructuring:

Creating documentation is far too time consuming. If the system works, we may live with what we have; in some cases this is the correct approach, since it is not possible to recreate documentation for hundreds of computer programs.

Documentation must be updated, but resources are limited. It may not be necessary to fully redocument an application. Rather, those portions of the system that are currently undergoing change are fully documented.

If the system is business critical and must be fully redocumented, an intelligent approach is still to pare the documentation down to an essential minimum.

SYSTEM MAINTENANCE

Maintenance is actually the implementation of the review plan. Important as it is, many programmers and analysts are reluctant to perform maintenance or to identify themselves with it; there are psychological, personality, and professional reasons for this. Yet analysts and programmers spend far more time maintaining programs than they do writing them: maintenance accounts for 50-80% of total system development effort. Maintenance is expensive; one way to reduce maintenance costs is through maintenance management and software modification audits. The types of maintenance are:

1. Perfective maintenance
2. Preventive maintenance

Perfective maintenance:

Changes made to the system to add features or to improve performance.

Preventive maintenance:

Changes made to the system to avoid future problems. Any changes can be made in the future, and our project can adopt the changes.

CONCLUSION

Road accidents are caused by various factors. From the research papers surveyed, it can be concluded that road accident severity is strongly affected by factors such as the type of vehicle, the age of the driver, the age of the vehicle, weather conditions, road structure, and so on. We have therefore built an application which gives an efficient prediction of road accident severity based on the above-mentioned factors.

FUTURE WORK

After studying these models, anyone can relate the problem definition to the features that actually cause the problem (here, accidents) and, based on that, take precautionary measures to avoid the problem in the future. In the future, a data analytics framework can be used to analyse traffic accident data and establish a model for predicting injury severity. The public can use the available data to build prediction models for injury severity levels.
[System diagrams: (1) Use case — the Admin performs data pre-processing and road accident analysis; the User views the road accident predictions. (2) Data flow — the Admin takes accident.csv through data collection, data acquisition, and data pre-processing, runs prediction using SVM, and creates the model file; the User views the prediction.]
DATASET

Steps followed in the implementation:
1. Importing the dataset into Python
2. Splitting the X values
3. Splitting the y values
4. Computing the algorithms' accuracy scores and prediction values
5. Plotting the graph
6. Giving input and predicting the accident severity
Model.py

import pandas as pd
import matplotlib.pyplot as plt
import pickle

# Load the accident dataset
data = pd.read_csv('accident.csv')
X = data.iloc[:, 0:-1]   # feature columns
Y = data.iloc[:, -1]     # target: accident severity

from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1)

# Linear regression model
from sklearn import linear_model
model = linear_model.LinearRegression()
model.fit(X_train, Y_train)
accl = model.score(X, Y)  # R^2 score, computed over the full dataset
print("Accuracy: ", accl * 100, " %.")
clf_acc = accl * 100

# Prediction
prediction = model.predict(X_test)  # final prediction on the held-out split

# Linear SVM classifier
from sklearn import svm
SVM = svm.LinearSVC()
SVM.fit(X_train, Y_train)
acc = SVM.score(X, Y)  # mean accuracy, computed over the full dataset
print("Accuracy: ", acc * 100, " %.")
svm_acc = acc * 100

# Sample predictions for three manually entered records
print(model.predict([[2, 3, 1, 1, 1, 2, 9, 1, 2, 80]]))
print(model.predict([[1, 6, 1, 1, 1, 1, 9, 3, 2, 70]]))
print(model.predict([[3, 3, 1, 1, 1, 2, 9, 1, 2, 29]]))

# Bar chart comparing the two scores
# (the first key is LinearRegression, matching the model actually trained above)
data = {'LinearRegression': clf_acc, 'SVM': svm_acc}
courses = list(data.keys())
values = list(data.values())

fig = plt.figure(figsize=(10, 5))

# creating the bar plot
plt.bar(courses, values, color='green', width=0.4)
plt.xlabel("Algorithm")
plt.ylabel("Accuracy")
plt.title("Accuracy of Algorithms")
plt.show()

"""Deployment code starts here"""
file = open('my_model.pkl', 'wb')
pickle.dump(model, file, protocol=2)
file.close()
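Before deploying, it is worth confirming that a pickled model can be loaded back and still predict. The sketch below uses a hypothetical stand-in class (SimpleModel) instead of the real trained model, so it is only an illustration of the save/load round trip that Model.py and App.py rely on.

```python
import io
import pickle

class SimpleModel:
    """Hypothetical stand-in for the trained model: predicts a constant severity."""
    def predict(self, rows):
        return [2.0 for _ in rows]

buf = io.BytesIO()                           # in-memory substitute for my_model.pkl
pickle.dump(SimpleModel(), buf, protocol=2)  # same protocol as Model.py
buf.seek(0)
restored = pickle.load(buf)
print(restored.predict([[2, 3, 1, 1, 1, 2, 9, 1, 2, 80]]))  # [2.0]
```

In the real project the same round trip happens across two processes: Model.py writes my_model.pkl, and App.py reads it back at startup.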
App.py

from flask import Flask, request, render_template
import numpy as np
import pickle

app = Flask(__name__)
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0
app.config["CACHE_TYPE"] = "null"
# change to "redis" and restart to cache again

# Load the trained model serialized by Model.py
file = open('my_model.pkl', 'rb')
model = pickle.load(file)
file.close()

@app.route('/')
def home():
    return render_template('index.html')

@app.route('/predict', methods=['POST'])
def predict():
    ''' For rendering results on the HTML GUI '''
    if request.method == 'POST':
        # Read the ten input features from the form
        NOV = request.form["NOV"]
        FRC = request.form["FRC"]
        RC = request.form["RC"]
        LC = request.form["LC"]
        WC = request.form["WC"]
        VN = request.form["VN"]
        TOV = request.form["TOV"]
        CC = request.form["CC"]
        SOC = request.form["SOC"]
        AOC = request.form["AOC"]

        data = [NOV, FRC, RC, LC, WC, VN, TOV, CC, SOC, AOC]
        data = np.array(data)
        # np.float is deprecated; the built-in float behaves the same here
        data = data.astype(float).reshape(1, -1)
        predict = model.predict(data)

        # Map the numeric prediction to a severity label
        if predict == 1:
            score = 'Slight!'
        elif predict < 2.5:
            score = 'Serious!'
        else:  # predict >= 2.5 (the original unreachable 'Low' branch is removed)
            score = 'FATAL'

        return render_template(
            'index.html',
            prediction_text='The Accident Severity is *{}* , Score : {} '.format(score, predict))

"""@app.route('/predict_api', methods=['POST'])
def predict_api():
    ''' For direct API calls through requests '''
    data = request.get_json(force=True)
    prediction = model.predict([np.array(list(data.values()))])
    output = prediction[0]
    return jsonify(output)  # requires: from flask import jsonify
"""

@app.route('/dashboard')
def dashboard():
    return render_template('dashboard.html')

if __name__ == "__main__":
    app.run(debug=False)
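The threshold logic inside predict() can be factored into a small helper that is easy to test in isolation. The sketch below mirrors the bands used in App.py (exactly 1 is slight, below 2.5 is serious, 2.5 and above is fatal); the helper name is an illustrative assumption, not part of the project code.

```python
def severity_label(pred):
    """Map a numeric severity prediction to a label, mirroring App.py's bands."""
    if pred == 1:
        return 'Slight!'
    elif pred < 2.5:
        return 'Serious!'
    return 'FATAL'

print(severity_label(1))    # Slight!
print(severity_label(1.8))  # Serious!
print(severity_label(3.2))  # FATAL
```

Keeping the mapping in one function means the bands can be tuned (or unit-tested) without touching the Flask route.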
